Google Cloud gets a new family of cheaper general-purpose compute instances

Google Cloud today announced the launch of its new E2 family of compute instances. These new instances, which are meant for general-purpose workloads, offer a significant cost benefit, with savings of around 31% compared to the current N1 general-purpose instances.

The E2 family runs on standard Intel and AMD chips, but as Google notes, it also uses a custom CPU scheduler “that dynamically maps virtual CPU and memory to physical CPU and memory to maximize utilization.” The new system is also smarter about where it places VMs, with the added flexibility to move them to other hosts as necessary. To achieve all of this, Google built that scheduler “with significantly better latency guarantees and co-scheduling behavior than Linux’s default scheduler.” It promises sub-microsecond wake-up latencies and faster context switching.

That gives Google efficiency gains that it then passes on to users in the form of these savings. Chances are, we will see similar updates to Google’s other instance families over time.

It’s interesting to note that Google is clearly willing to position this offering against those of its competitors. “Unlike comparable options from other cloud providers, E2 VMs can sustain high CPU load without artificial throttling or complicated pricing,” the company writes in today’s announcement. “This performance is the result of years of investment in the Compute Engine virtualization stack and dynamic resource management capabilities.” It’ll be interesting to see some benchmarks that pit the E2 family against similar offerings from AWS and Azure.

As usual, Google offers a set of predefined instance configurations, ranging from 2 vCPUs with 8 GB of memory to 16 vCPUs and 128 GB of memory. For very small workloads, Google Cloud is also launching a set of E2-based instances that are similar to the existing f1-micro and g1-small machine types. These feature 2 vCPUs, 1 to 4 GB of RAM and a baseline CPU performance that ranges from the equivalent of 0.125 vCPUs to 0.5 vCPUs.
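
For a sense of how those predefined shapes line up, here is a minimal Python sketch that encodes a representative subset of the published E2 machine types and picks the smallest one that satisfies a given CPU and memory requirement; the selection logic is illustrative only and says nothing about pricing or custom machine types.

```python
# Representative subset of Google's predefined E2 machine types:
# name -> (vCPUs, memory in GB). Standard shapes carry 4 GB per vCPU,
# highmem shapes 8 GB per vCPU and highcpu shapes 1 GB per vCPU.
E2_MACHINE_TYPES = {
    "e2-standard-2": (2, 8),
    "e2-standard-4": (4, 16),
    "e2-standard-8": (8, 32),
    "e2-standard-16": (16, 64),
    "e2-highmem-2": (2, 16),
    "e2-highmem-16": (16, 128),
    "e2-highcpu-16": (16, 16),
}


def smallest_fit(vcpus_needed: int, memory_gb_needed: int) -> str:
    """Return the smallest predefined E2 shape that meets both requirements."""
    candidates = [
        (vcpus, mem, name)
        for name, (vcpus, mem) in E2_MACHINE_TYPES.items()
        if vcpus >= vcpus_needed and mem >= memory_gb_needed
    ]
    if not candidates:
        raise ValueError("no predefined E2 shape fits; consider a custom machine type")
    return min(candidates)[2]


print(smallest_fit(4, 12))  # -> "e2-standard-4"
```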

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, both for compute and memory, than regular Fargate pricing. In return, though, you have to accept that your instances may be terminated when AWS needs additional capacity. While that means Fargate Spot may not be perfect for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here, and an hour of CPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there), compared to $0.04048 for the on-demand price.
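
To put those two figures in perspective, here is a quick back-of-the-envelope calculation of the discount on the vCPU dimension; note that Fargate also bills memory separately per GB-hour, so total savings depend on each task’s CPU-to-memory mix.

```python
# Prices quoted above, in USD per vCPU-hour.
on_demand_per_vcpu_hour = 0.04048
spot_per_vcpu_hour = 0.01245364

discount = 1 - spot_per_vcpu_hour / on_demand_per_vcpu_hour
print(f"Fargate Spot vCPU discount vs. on-demand: {discount:.1%}")
# -> Fargate Spot vCPU discount vs. on-demand: 69.2%
```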

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then maintain that split as instances come and go; if no spot instances are available, it will move that capacity to on-demand instances and shift it back to spot once spot instances become available again.
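
For a concrete sense of what such a strategy looks like, here is a hedged boto3 sketch that creates an ECS service splitting tasks between the built-in FARGATE and FARGATE_SPOT capacity providers; the cluster, service, task definition and subnet names are hypothetical, and the cluster is assumed to already have both capacity providers associated with it.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Hypothetical resource names; the cluster is assumed to already have the
# FARGATE and FARGATE_SPOT capacity providers associated with it.
response = ecs.create_service(
    cluster="demo-cluster",
    serviceName="web",
    taskDefinition="web-task:1",
    desiredCount=10,
    capacityProviderStrategy=[
        # "base" tasks always run on regular (on-demand) Fargate; anything
        # beyond that is split 1:3 between on-demand and Spot by weight.
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["service"]["serviceArn"])
```

The same capacityProviderStrategy structure also accepts EC2-backed capacity providers, which is how a 70/30 split like the one described above would be expressed once those providers are created.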

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outpost, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Cloud Foundry’s Kubernetes bet with Project Eirini hits 1.0

Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.

Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, the two will operate side by side, with operators deciding which one to use.

The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since built their own products around it and shipped them as beta products.

“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”

He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.

“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now, there are challenges around multi-tenancy, and super big multi-tenant scale.”

But even without being able to get to this massive scale, Friedman argues that you can already get a lot of value even out of a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).

As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.

“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”

Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.

Google Cloud launches Bare Metal Solution

Google Cloud today announced the launch of a new bare metal service, dubbed the Bare Metal Solution. We aren’t talking about bare metal servers offered directly by Google Cloud here, though. Instead, this is a solution that enterprises can use to run their specialized workloads on certified hardware that’s co-located in Google Cloud data centers and connected directly to Google Cloud’s suite of other services. The main workload that makes sense for this kind of setup is databases, Google notes, and specifically Oracle Database.

Bare Metal Solution is, as the name implies, a fully integrated and fully managed solution for setting up this kind of infrastructure. It involves a completely managed hardware infrastructure that includes servers and the rest of the data center facilities, like power and cooling; support contracts and billing are handled through Google’s systems, and the service comes with an SLA. The software that’s deployed on those machines is managed by the customer — not Google.

The overall idea, though, is clearly to make it easier for enterprises with specialized workloads that can’t easily be migrated to the cloud to still benefit from the cloud-based services that need access to the data from these systems. Machine learning is an obvious example, but Google also notes that this provides these companies with a bridge to slowly modernize their tech infrastructure in general (where “modernize” tends to mean “move to the cloud”).

“These specialized workloads often require certified hardware and complicated licensing and support agreements,” Google writes. “This solution provides a path to modernize your application infrastructure landscape, while maintaining your existing investments and architecture. With Bare Metal Solution, you can bring your specialized workloads to Google Cloud, allowing you access and integration with GCP services with minimal latency.”

Because this service is co-located with Google Cloud, there are no separate ingress and egress charges for data that moves between Bare Metal Solution and Google Cloud in the same region.

The servers for this solution, which are certified to run a wide range of applications (including Oracle Database), range from dual-socket 16-core systems with 384 GB of RAM to quad-socket servers with 112 cores and 3072 GB of RAM. Pricing is on a monthly basis, with a preferred term length of 36 months.

Obviously, this isn’t the kind of solution that you self-provision, so the only way to get started — and get pricing information — is to talk to Google’s sales team. But this is clearly the kind of service that we should expect from Google Cloud, which is heavily focused on providing as many enterprise-ready services as possible.

Google makes converting VMs to containers easier with the GA of Migrate for Anthos

At its Cloud Next event in London, Google today announced a number of product updates around its managed Anthos platform, as well as Apigee and its Cloud Code tools for building modern applications that can then be deployed to Google Cloud or any Kubernetes cluster.

Anthos is one of the most important recent launches for Google, as it expands the company’s reach outside of Google Cloud and into its customers’ data centers and, increasingly, edge deployments. At today’s event, the company announced that it is taking Anthos Migrate out of beta and into general availability. The overall idea behind Migrate is that it allows enterprises to take their existing, VM-based workloads and convert them into containers. Those machines could come from on-prem environments, AWS, Azure or Google’s Compute Engine, and — once converted — can then run in Anthos GKE, the Kubernetes service that’s part of the platform.

“That really helps customers think about a leapfrog strategy, where they can maintain the existing VMs but benefit from the operational model of Kubernetes,” Google Engineering Director Jennifer Lin told me. “So even though you may not get all of the benefits of a cloud-native container day one, what you do get is consistency in the operational paradigm.”

As for Anthos itself, Lin tells me that Google is seeing some good momentum. The company is highlighting a number of customers at today’s event, including Germany’s Kaeser Kompressoren and Turkey’s Denizbank.

Lin noted that a lot of financial institutions are interested in Anthos. “A lot of the need to do data-driven applications, that’s where Kubernetes has really hit that sweet spot because now you have a number of distributed datasets and you need to put a web or mobile front end on [them],” she explained. “You can’t do it as a monolithic app, you really do need to tap into a number of datasets — you need to do real-time analytics and then present it through a web or mobile front end. This really is a sweet spot for us.”

Also new today is the general availability of Cloud Code, Google’s set of extensions for IDEs like Visual Studio Code and IntelliJ that helps developers build, deploy and debug their cloud-native applications more quickly. The idea here, of course, is to remove friction from building containers and deploying them to Kubernetes.

In addition, Apigee hybrid is now also generally available. This tool makes it easier for developers and operators to manage their APIs across hybrid and multi-cloud environments, a challenge that is becoming increasingly common for enterprises. With it, teams can deploy Apigee’s API runtimes in hybrid environments and still get the benefits of Apigee’s monitoring and analytics tools in the cloud. Apigee hybrid, of course, can also be deployed to Anthos.

6 tips founders need to know about securing their startup

If you’ve read anything of mine in the past year, you know just how complicated security can be.

Every day it seems there’s a new security lapse, a breach, a hack, or an inadvertent exposure, such as leaving a cloud storage server unprotected without a password. These things happen, but they don’t have to; security isn’t as difficult as it sounds, but there’s no one-size-fits-all solution.

We sat down with three experts on the Extra Crunch stage at TechCrunch’s Disrupt SF earlier this month to help startups and founders understand what they need to do, when, and why.

We asked Google’s Heather Adkins, Duo’s Dug Song, and IOActive’s Jennifer Sunshine Steffens for their best advice. Here’s what they had to say.

Quotes have been edited and condensed for clarity.

1. Don’t put off the security conversation

The one resounding message from the panel: don’t put security off.

“There are basically three areas that folks should start considering how to bucket those risks,” said Duo’s Song. “The first is corporate risk in defending your users and applications they access. The second is application security and product risk. A third area is around production security, making sure that the operation of your security program is something that keeps up with that risk. And then a fourth — a new and emerging space — is trust, and not just privacy, but also safety.”

It’s better to be proactive about security than to be reactive to a data breach; not only will it help your company bolster its security posture, but it also serves as an important factor in future fundraising negotiations.

Song said founders have a “very direct obligation” to think about security as soon as they take someone else’s money, but especially when a company starts gathering user or customer data. “You have to put yourself in the shoes of those folks whose data you have to protect,” he said. “It’s not just about existential threats to your business, but you do have a responsibility, right, to figure out how to do this well.”

IOActive’s Steffens said startups are already a target — simply because it’s assumed many won’t have thought much about security.

“A lot of attackers will go after startups who have high value data, because they know security is not a priority and it’s going to be a lot easier to get ahold of,” she said. “Data these days is extraordinarily valuable.”

2. Start with the security basics

Google’s Adkins, who runs the search giant’s internal information security team, joined the company almost two decades ago when it was just the size of a large startup. Her job is to keep the company’s network, assets, and employees safe.

“When I got there, they were so fanatical about security already that half of the job was already done,” she said. “From the moment [Google] took its first search query, it was thinking about where those logs are stored, who has access to them, and what is its responsibility to its users.”

“Startups who are successful with security are those where the chief executive and the founders are fanatical from day one and understand what threats exist to the business and what they need to do to protect it,” she said.

Song said many popular products and technologies these days come with strong security by default, such as iPhones, Chromebooks, security keys and Windows 10.

“You’re better off than the 90% of large companies out there,” he said. “That’s one of those few strategic advantages you have as a smaller, nimbler organization that doesn’t have a lot of legacy,” he added. “You can do things better from the start.”

“A lot of the basics are still key,” said Steffens. “Even as we come out with the new shiny technology, having things like firewalls and antivirus, and multi-factor authentication.”

“Security doesn’t always have to be a money thing,” she said. “There’s a lot of open source technology that’s really great.”

3. Start looking at security as an investment

“The sooner you start thinking about security, the less expensive it is in the end,” said Steffens.

That’s because, the experts said, proactive security gives companies an edge over competitors who tack on security solutions after a breach. It’s easier and more cost-effective to get it right the first time without having to fill in gaps years later.

It might be a hard sell to funnel money into something where you won’t actively see financial returns, which is why founders should think of security as investments for the future. The idea is that if you spend a little money at the start, it can save you down the line from the inevitable — a security incident that will cost you in bad headlines, lost customer trust, and potentially fines or other sanctions.

EU contracts with Microsoft raising “serious” data concerns, says watchdog

Europe’s chief data protection watchdog has raised concerns over contractual arrangements between Microsoft and the European Union institutions which are making use of its software products and services.

The European Data Protection Supervisor (EDPS) opened an enquiry into the contractual arrangements between EU institutions and the tech giant this April, following changes to rules governing EU outsourcing.

Today it writes [with emphasis]: “Though the investigation is still ongoing, preliminary results reveal serious concerns over the compliance of the relevant contractual terms with data protection rules and the role of Microsoft as a processor for EU institutions using its products and services.”

We’ve reached out to Microsoft for comment.

A spokesperson for the company told Reuters: “We are committed to helping our customers comply with GDPR [General Data Protection Regulation], Regulation 2018/1725 and other applicable laws. We are in discussions with our customers in the EU institutions and will soon announce contractual changes that will address concerns such as those raised by the EDPS.”

The preliminary finding follows risk assessments carried out by the Dutch Ministry of Justice and Security, published this summer, which also found similar issues, per the EDPS.

At issue is whether contractual terms are compatible with EU data protection laws intended to protect individual rights across the region.

“Amended contractual terms, technical safeguards and settings agreed between the Dutch Ministry of Justice and Security and Microsoft to better protect the rights of individuals shows that there is significant scope for improvement in the development of contracts between public administration and the most powerful software developers and online service outsourcers,” the watchdog writes today.

“The EDPS is of the opinion that such solutions should be extended not only to all public and private bodies in the EU, which is our short-term expectation, but also to individuals.”

A conference, jointly organized by the EDPS and the Dutch Ministry, which was held in August, brought together EU customers of cloud giants to work on a joint response to tackle regulatory risks related to cloud software provision. The event agenda included a debate on what was billed as “Strategic Vendor Management with respect to hyperscalers such as Microsoft, Amazon Web Services and Google”.

The EDPS says the idea for The Hague Forum — as it’s been named — is to develop a common strategy to “take back control” over IT services and products sold to the public sector by cloud giants.

That could mean, for example, creating standard contracts with fair terms for public administrations, instead of the EU’s various public bodies feeling forced to accept T&Cs as written by the same few powerful providers.

Commenting in a statement today, assistant EDPS, Wojciech Wiewiórowski, said: “We expect that the creation of The Hague Forum and the results of our investigation will help improve the data protection compliance of all EU institutions, but we are also committed to driving positive change outside the EU institutions, in order to ensure maximum benefit for as many people as possible. The agreement reached between the Dutch Ministry of Justice and Security and Microsoft on appropriate contractual and technical safeguards and measures to mitigate risks to individuals is a positive step forward. Through The Hague Forum and by reinforcing regulatory cooperation, we aim to ensure that these safeguards and measures apply to all consumers and public authorities living and operating in the EEA.”

EU data protection law means data controllers who make use of third parties to process personal data on their behalf remain accountable for what’s done with the data — meaning EU public institutions have a responsibility to assess risks around cloud provision, and have appropriate contractual and technical safeguards in place to mitigate risks. So there’s a legal imperative to dial up scrutiny of cloud contracts.

In parallel, the EDPS has been pushing for greater transparency in consumer agreements too.

On the latter front Microsoft’s arrangements with consumers using its desktop OS remain under scrutiny in the EU. Earlier this year the Dutch data protection agency referred privacy concerns about how Windows 10 gathers user data to the company’s lead regulator in Europe.

This summer the company also made changes to its privacy policy for its VoIP product Skype and AI assistant Cortana, after media reports revealed it employed contractors who could listen in to audio snippets to improve automated translation and inferences.

The French government, meanwhile, has been loudly pursuing a strategy of digital sovereignty to reduce the state’s reliance on foreign tech providers. Though kicking the cloud giant habit may prove harder than ditching Google search.

Edge computing startup Pensando comes out of stealth mode with a total of $278 million in funding

Pensando, an edge computing startup founded by former Cisco engineers, came out of stealth mode today with an announcement that it has raised a $145 million Series C. The company’s software and hardware technology, created to give data centers more of the flexibility of cloud computing servers, is being positioned as a competitor to Amazon Web Services Nitro.

The round was led by Hewlett Packard Enterprise and Lightspeed Venture Partners and brings Pensando’s total raised so far to $278 million. HPE chief technology officer Mark Potter and Lightspeed Venture partner Barry Eggers will join Pensando’s board of directors. The company’s chairman is former Cisco CEO John Chambers, who is also one of Pensando’s investors through JC2 Ventures.

Pensando was founded in 2017 by Mario Mazzola, Prem Jain, Luca Cafiero and Soni Jiandani, a team of engineers who spearheaded the development of several of Cisco’s key technologies, and founded four startups that were acquired by Cisco, including Insieme Networks. (In an interview with Reuters, Pensando chief financial officer Randy Pond, a former Cisco executive vice president, said it isn’t clear if Cisco is interested in acquiring the startup, adding “our aspirations at this point would be to IPO. But, you know, there’s always other possibilities for monetization events.”)

The startup claims its edge computing platform performs five to nine times better than AWS Nitro in terms of productivity and scale. Pensando prepares data center infrastructure for edge computing, better equipping it to handle data from 5G, artificial intelligence and Internet of Things applications. While in stealth mode, Pensando acquired customers including HPE, Goldman Sachs, NetApp and Equinix.

In a press statement, Potter said, “Today’s rapidly transforming, hyper-connected world requires enterprises to operate with even greater flexibility and choices than ever before. HPE’s expanding relationship with Pensando Systems stems from our shared understanding of enterprises and the cloud. We are proud to announce our investment and solution partnership with Pensando and will continue to drive solutions that anticipate our customers’ needs together.”

Suse’s OpenStack Cloud dissipates

Suse, the newly independent open-source company behind the eponymous Linux distribution and an increasingly large set of managed enterprise services, today announced a bit of a new strategy as it looks to stay on top of the changing trends in the enterprise developer space. Over the course of the last few years, Suse put a strong emphasis on the OpenStack platform, an open-source project that essentially allows big enterprises to build something in their own data centers akin to the core services of a public cloud like AWS or Azure. With this new strategy, Suse is transitioning away from OpenStack. It’s ceasing both production of new versions of its OpenStack Cloud and sales of its existing OpenStack product.

“As Suse embarks on the next stage of our growth and evolution as the world’s largest independent open source company, we will grow the business by aligning our strategy to meet the current and future needs of our enterprise customers as they move to increasingly dynamic hybrid and multi-cloud application landscapes and DevOps processes,” the company said in a statement. “We are ideally positioned to execute on this strategy and help our customers embrace the full spectrum of computing environments, from edge to core to cloud.”

What Suse will focus on going forward are its Cloud Application Platform (which is based on the open-source Cloud Foundry platform) and Kubernetes-based container platform.

Chances are, Suse wouldn’t shut down its OpenStack services if it saw growing sales in this segment. But while the hype around OpenStack died down in recent years, it’s still among the world’s most active open-source projects and runs the production environments of some of the world’s largest companies, including some very large telcos. It took a while for the project to position itself in a space where, for the last few years, all of the mindshare has gone to containers — and especially Kubernetes. At the same time, though, containers are also opening up new opportunities for OpenStack, as you still need some way to manage those containers and the rest of your infrastructure.

The OpenStack Foundation, the umbrella organization that helps guide the project, remains upbeat.

“The market for OpenStack distributions is settling on a core group of highly supported, well-adopted players, just as has happened with Linux and other large-scale, open-source projects,” said OpenStack Foundation COO Mark Collier in a statement. “All companies adjust strategic priorities from time to time, and for those distro providers that continue to focus on providing open-source infrastructure products for containers, VMs and bare metal in private cloud, OpenStack is the market’s leading choice.”

He also notes that analyst firm 451 Research believes there is a combined Kubernetes and OpenStack market of about $11 billion, with $7.7 billion of that focused on OpenStack. “As the overall open-source cloud market continues its march toward eight figures in revenue and beyond — most of it concentrated in OpenStack products and services — it’s clear that the natural consolidation of distros is having no impact on adoption,” Collier argues.

For Suse, though, this marks the end of its OpenStack products. As of now, though, the company remains a top-level Platinum sponsor of the OpenStack Foundation and Suse’s Alan Clark remains on the Foundation’s board. Suse is involved in some of the other projects under the OpenStack brand, so the company will likely remain a sponsor, but it’s probably a fair guess that it won’t continue to do so at the highest level.