Digital Ocean’s Kubernetes service is now generally available

Like any ambitious cloud infrastructure player, Digital Ocean also recently announced a solution for running Kubernetes clusters on its platform. At KubeCon + CloudNativeCon Europe in Barcelona, the company today announced that Digital Ocean Kubernetes is now generally available.

With this release, the company is also bringing the latest Kubernetes release (1.14) to the platform, and developers who use the service will be able to schedule automatic patch version upgrades, too.

Now that it’s generally available, Digital Ocean is bringing the service to all of its data centers around the world and introducing a few new features. These include a guided configuration experience that walks users from provisioning through deploying clusters, as well as new advanced health metrics so developers can see what’s happening in their clusters, including data about pod deployment status, CPU and memory usage, and more.

It’s also launching new open APIs so that third-party tools can more easily integrate support for Digital Ocean Kubernetes into their own solutions.

Soon, the company will also launch a marketplace for 1-click apps for Kubernetes, that will make it far easier for its users to deploy applications into a Kubernetes cluster. This feature will be based on the open-source Helm project, which is already the de facto standard for Kubernetes package management.

Talk key takeaways from KubeCon 2019 with TechCrunch writers

The Linux Foundation’s annual KubeCon conference is going down at the Fira Gran Via exhibition center in Barcelona, Spain this week and TechCrunch is on the scene covering all the latest announcements.

The KubeCon/CloudNativeCon conference is the world’s largest gathering for the topics of Kubernetes, DevOps and cloud-native applications. TechCrunch’s Frederic Lardinois and Ron Miller will be on the ground at the event. Wednesday at 9:00 am PT, Frederic and Ron will be sharing with Extra Crunch members via a conference call what they saw and what it all means.

Tune in to dig into what happened onstage and off and ask Frederic and Ron any and all things Kubernetes, open-source development or dev tools.

Solo.io wants to bring order to service meshes with centralized management hub

As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io announced a new open source tool called Service Mesh Hub today, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Convoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations teams can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since its founding in 2017, the company has developed several open-source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.

Solo.io Service Mesh Hub. Screenshot: Solo.io

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or at some point for a fee, and it is through these paid extensions that the company will be able to make money. Levine recognized that some companies will create extensions for internal use only; in those cases, they can add them to the hub and mark them as private so that only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it lets a company route traffic between its microservices, gain visibility into them through the mesh’s logs and metrics, and secure them by managing which services can talk to each other.
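The routing piece is the easiest to make concrete. As a sketch (the service name and subsets here are hypothetical), an Istio VirtualService, one of the meshes mentioned above, declaratively splits traffic between two versions of a service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-routing      # hypothetical resource name
spec:
  hosts:
  - reviews                  # the in-mesh service being routed
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90             # 90% of traffic to the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 10             # 10% canaried to the new version
```

Each mesh expresses this differently, which is exactly the multi-mesh management gap Service Mesh Hub aims to fill.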

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission, because those companies are too focused on their own tools to create an uber-management layer like this (but that doesn’t mean the company wouldn’t be an attractive acquisition target).

So far, the company has raised more than $13 million in funding, according to Crunchbase data.

Cisco open sources MindMeld conversational AI platform

Cisco announced today that it was open sourcing the MindMeld conversational AI platform, making it available to anyone who wants to use it under the Apache 2.0 license.

MindMeld is the conversational AI company that Cisco bought in 2017. The company put the technology to use in Cisco Spark Assistant later that year to help bring voice commands to meeting hardware, which was just beginning to emerge at the time.

Today, there is a concerted effort to bring voice to enterprise use cases, and Cisco is offering the means for developers to do that with the MindMeld tool set. “Today, Cisco is taking a big step towards empowering developers with more comprehensive and practical tools for building conversational applications by open-sourcing the MindMeld Conversational AI Platform,” Cisco’s head of machine learning Karthik Raghunathan wrote in a blog post.

The company also wants to make it easier for developers to get going with the platform, so it is releasing the Conversational AI Playbook, a step-by-step guide book to help developers get started with conversation-driven applications. Cisco says this is about empowering developers, and that’s probably a big part of the reason.

But it is also in Cisco’s best interest to have developers outside of Cisco working with and on this set of tools. By open sourcing them, Cisco hopes that a community of developers, whether Cisco customers or others, will begin using, testing and improving the tools, helping it develop the platform faster and more broadly than it could on its own, even inside an organization as large as Cisco.

Of course, just because Cisco offers it doesn’t automatically mean a community of interested developers will emerge, but given the growing popularity of voice-enabled use cases, chances are some will give it a look. It will be up to Cisco to keep them engaged.

Cisco is making all of this available on its own DevNet platform starting today.

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects

Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its first Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point at two successful projects under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well aligned with what we would love to see more of in the open source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like, and the result is a production-ready Kubernetes cluster deployed via the OpenStack-Helm tooling, though without any other dependencies on OpenStack.
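As a rough sketch of that declarative style (this is illustrative YAML only, not Airship’s actual document schema, and all names are invented), a site definition might look something like:

```yaml
# Illustrative sketch, not real Airship syntax: the operator declares
# the desired end state and the tooling converges the site toward it.
kind: ClusterDefinition
metadata:
  name: edge-site-01
spec:
  kubernetes:
    version: "1.14"        # desired cluster version
  nodes:
    role: worker
    count: 6               # desired node count; tooling reconciles
  network:
    cni: calico            # chosen container network plugin
```

The operator edits documents like these rather than running imperative commands, and lifecycle management becomes a matter of diffing declared state against reality.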

AT&T’s assistant vice president of Network Cloud Software Engineering, Ryan van Wyk, told me that a lot of enterprises want to use certain open-source components, but the interplay between them is often difficult; while it’s relatively easy to manage the lifecycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, each with its own lifecycle. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the lifecycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to telcos, though it’s no secret that OpenStack is quite popular in that industry, and unsurprisingly, the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will also showcase OpenStack’s bare metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare metal tools now manage over a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.

Chef goes 100% open source

Chef, the popular automation service, today announced that it is open sourcing all of its software under the Apache 2.0 license. Until now, Chef used an open core model with a number of proprietary products that complemented its open-source tools. Most of these proprietary tools focused on enterprise users and their security and deployment needs. Now, all of these tools, which represent somewhere between a third and half of Chef’s total code base, are open source, too.

“We’re moving away from our open core model,” Chef SVP of products and engineering Corey Scobie told me. “We’re now moving to exclusively open source software development.”

He added that this also includes open product development. Going forward, the company plans to share far more details about its roadmap, feature backlogs and other product development details. All of Chef’s commercial offerings will also be built from the same open source code that everybody now has access to.

Scobie noted that there are a number of reasons why the company is doing this. He believes, for example, that the best way to build software is to collaborate in public with those who are actually using it.

“With that philosophy in mind, it was really easy to justify how we’d take the remainder of the software that we produce and make it open source,” Scobie said. “We believe that that’s the best way to build software that works for people — real people in the real world.”

Another reason, Scobie said, is that it was becoming increasingly difficult for Chef to explain which parts of the software were open source and which were not. “We wanted to make that conversation easier, to be perfectly honest.”

Chef’s decision comes during a bit of a tumultuous time in the open source world. A number of companies like Redis, MongoDB and Elastic have recently moved to licenses that explicitly disallow the commercial use of their open source products by large cloud vendors like AWS unless they also buy a commercial license.

But here is Chef, open sourcing everything. Chef co-founder and board member Adam Jacob doesn’t think that’s a problem. “In the open core model, you’re saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that’s incorrect,” he said. “I think, in fact, the value was always in the totality of the product.”

Jacob also argues that those companies that are moving to these new, more restrictive licenses, are only hurting themselves. “It turns out that the product was what mattered in the first place,” he said. “They continue to produce great enterprise software for their customers and their customers continue to be happy and continue to buy it, which is what they always would’ve done.” He also noted that he doesn’t think AWS will ever be better at running Elasticsearch than Elastic or, for that matter, at running Chef better than Chef.

It’s worth noting that Chef also today announced the launch of its Enterprise Automation Stack, which brings together all of Chef’s tools (Chef Automate, Infra, InSpec, Habitat and Workstation) under a unified umbrella.

“Chef is fully committed to enabling organizations to eliminate friction across the lifecycle of all of their applications, ensuring that, whether they build their solutions from our open source code or license our commercial distribution, they can benefit from collaboration as code,” said Chef CEO Barry Crist. “Chef Enterprise Automation Stack lets teams establish and maintain a consistent path to production for any application, in order to increase velocity and improve efficiency, so deployment and updates of mission-critical software become easier, move faster and work flawlessly.”

Microsoft open sources its data compression algorithm and hardware for the cloud

The amount of data that the big cloud computing providers now store is staggering, so it’s no surprise that most store all of this information as compressed data in some form or another — just like you used to zip your files back in the days of floppy disks, CD-ROMs and low-bandwidth connections. Typically, those systems are closely guarded secrets, but today, Microsoft open sourced the algorithm, hardware specification and Verilog source code for how it compresses data in its Azure cloud. The company is contributing all of this to the Open Compute Project (OCP).

Project Zipline, as Microsoft calls this project, can achieve 2x higher compression ratios compared to the standard Zlib-L4 64KB model. To do this, the algorithm — and its hardware implementation — were specifically tuned for the kind of large datasets Microsoft sees in its cloud. Because the system works at the systems level, there is virtually no overhead and Microsoft says that it is actually able to manage higher throughput rates and lower latency than other algorithms are currently able to achieve.
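Zipline’s hardware pipeline isn’t something you can run at home, but the metric Microsoft quotes, the compression ratio, is easy to illustrate with Python’s standard zlib module (the software cousin of the Zlib model Zipline is benchmarked against; the sample data here is invented):

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Ratio of original size to compressed size (higher is better)."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

# Repetitive, structured data (loosely mimicking cloud telemetry)
# compresses far better than high-entropy data, which is why tuning
# for a cloud's actual workloads can pay off so dramatically.
telemetry = b'{"status": "ok", "latency_ms": 12}\n' * 2000
print(f"ratio: {compression_ratio(telemetry):.1f}x")
```

The same measurement on near-random bytes yields a ratio close to (or below) 1x, which is the gap workload-specific tuning like Zipline’s exploits.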

Microsoft stresses that it is also contributing the Verilog source code for register transfer language (RTL) — that is, the low-level code that makes this all work. “Contributing RTL at this level of detail as open source to OCP is industry leading,” Kushagra Vaid, the general manager for Azure hardware infrastructure, writes. “It sets a new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level.”

Microsoft is currently using this system in its own Azure cloud, but it is now also partnering with others in the Open Compute Project. Among these partners are Intel, AMD, Ampere, Arm, Marvell, SiFive, Broadcom, Fungible, Mellanox, NGD Systems, Pure Storage, Synopsys and Cadence.

“Over time, we anticipate Project Zipline compression technology will make its way into several market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessor, IoT, and edge devices,” writes Vaid.

Skyrim mod drama gets ugly with allegations of stolen code and misappropriated donations

The people who volunteer their time modifying and updating old games are among the most generous of developers. So when drama erupts there’s not just irritation and testy emails but a sense of a community being betrayed or taken advantage of. A recent conflict over work on the perennially renewed classic Skyrim may seem small but for those involved, it’s a huge upset.

I don’t mean to make a bigger deal out of this niche issue than it is; I feel though that sometimes it’s important to elevate things not because they are highly important in and of themselves, but because they represent a class of small injustices or conflicts that are rife on the modern web.

The example today comes from the Skyrim modding community, which creates all kinds of improvements for the classic fantasy adventure, from new items and better maps to complete overhauls. It’s one of the most active out there, as Bethesda not only is highly tolerant of modders but tends to ship games, if we’re honest, in pretty poor shape. Modders have taken to filling in the gaps left by Bethesda and making the original game far better than how it shipped.

One of the more useful of these mods, for developers but indirectly for players, is the Skyrim Script Extender, or SKSE. It basically allows for more complex behaviors for objects, locations, and NPCs. How do you have a character seek shelter from the rain if there are no weather-based behaviors in their original AI? That sort of thing (though that’s an invented example). SKSE goes back a long way and the creators provide much of the code for others to use under a free license, while declining donations themselves.

Another project is Skyrim Together (ST), a small team that since 2013 has (among other things) been working on adding multiplayer functionality to the game — their Patreon account, in contrast, is pulling in more than $30,000 a month. The main dev there allegedly independently distributed a modified version of SKSE several years ago against the terms of the license, and was henceforth specifically banned from using SKSE code in the future.

Guess what SKSE’s lead found in a bit of code inspection the other day?

Yes, unfortunately, it seems that SKSE code is in the ST app: not only does it lack the credit the license requires, but the dev himself had been barred from using the code at all. Furthermore — although there is some debate here — the ST team is essentially charging for access to a “closed beta.” Some say that it’s just a donation they ask for, but requiring a donation is really indistinguishable from charging for something.

A response from the devs downplayed the issue; they say it’s just a bit of old junk in the codebase:

There might be some leftover code from them in there that was overlooked when we removed it, it isn’t as simple as just deleting a folder, mainly our fault because we rushed some parts of the code. Anyway we are going to make sure to remove what might have slipped through the cracks for the next patch.

Instead of SKSE, one developer said, they had substituted other code, for instance from the project libSkyrim. But as others quickly pointed out, libSkyrim is based on SKSE and there’s no way they could be ignorant of that fact. So the assertion that they weren’t using the forbidden code doesn’t really hold water. Not only that, but ST doesn’t even credit libSkyrim at all, a standard practice when you reuse code.

This wouldn’t be as big of a problem if ST weren’t making quite a bit of scratch off the project via donations and requiring donations for access to the code. That arguably makes it a commercial project, putting it even farther outside the bounds of code reuse.

Now, taking the hard work of open and semi-open source developers and using it in other projects is encouraged — in fact, it’s kind of the point. But it’s meant to be a collaboration, and the rules are there to make sure credit goes where it’s due.

I don’t think the ST people are villains; they’re working on something many players are interested in using — and paying for, if the Patreon is any indication. That’s great, and it’s what the mod community is all about. But the other side of that community spirit, as in any group of developers, is respectful and mutual acknowledgement.

Honesty is important here because it’s not always possible to audit someone else’s code. And honesty is also important because users want to be able to trust developers for a variety of reasons — not least of which that they are donating to a project working in good faith. That trust was shaken here.

As I said at the beginning, I don’t mean to make this a huge deal. No one is getting rich (though even split ten ways, $33,000 a month is nothing to sniff at), and no one is getting hurt. But I imagine there’s hardly an open source project out there that hasn’t had to police others’ use of their code or live in fear of someone cashing in on something they’ve donated their time to for years.

Here’s hoping this particular tempest in a teapot resolves happily, but don’t forget there’s a lot more teapots where this one came from.

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise, then, that this year the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organization to projects that aren’t directly related to the core OpenStack project.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata and saying: telco’s will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.
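The resource-constraint point maps to standard Kubernetes pod specs. A sketch (the image and names are placeholders) of how a workload’s CPU and memory can be pinned so one tenant can’t starve its neighbors:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vnf-worker                  # hypothetical workload name
spec:
  containers:
  - name: packet-processor
    image: example.com/vnf:latest   # placeholder image
    resources:
      requests:                     # guaranteed share, used for scheduling
        cpu: "2"
        memory: 4Gi
      limits:                       # hard ceiling, enforced at runtime
        cpu: "2"
        memory: 4Gi
```

With requests equal to limits, the pod gets Kubernetes’ Guaranteed QoS class, which is the scheduler-level answer to the noisy-neighbor concern Kohn is dismissing.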

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.