Oracle could be feeling cloud transition growing pains

Oracle is learning that it’s hard for enterprise companies born in the data center to make the transition to the cloud, an entirely new way of doing business. Yesterday it reported earnings, and the results were a mixed bag, made harder to parse by a change in the way the company counts cloud revenue.

In its earnings press release from yesterday, it put it this way: “Q4 Cloud Services and License Support revenues were up 8% to $6.8 billion. Q4 Cloud License and On-Premise License revenues were down 5% to $2.5 billion.”

Let’s compare that with the language from its Q3 report in March: “Cloud Software as a Service (SaaS) revenues were up 33% to $1.2 billion. Cloud Platform as a Service (PaaS) plus Infrastructure as a Service (IaaS) revenues were up 28% to $415 million. Total Cloud Revenues were up 32% to $1.6 billion.”

See how the company broke out cloud revenue loudly and proudly in March, yet chose to combine it with license revenue in June.

In the post-reporting earnings call, Safra Catz, Oracle Co-CEO, responding to a question from analyst John DiFucci, took exception to the idea that the company was somehow obfuscating cloud revenue by reporting it in this way. “So first of all, there is no hiding. I told you the Cloud number, $1.7 billion. You can do the math. You see we are right where we said we’d be.”

She says the new reporting method is due to new combined licensing products that let customers use their licenses on-premises or in the cloud. Fair enough, but if your business is booming, you probably want to let investors know about that. Investors seem uneasy about this approach, with the stock down over 7 percent as of publication.

Oracle Stock Chart: Google

Oracle could, of course, settle all of this by spelling out its cloud revenue, but it chose a different path. John Dinsdale, an analyst with Synergy Research, a firm that watches the cloud market, was dubious about Oracle’s reasoning.

“Generally speaking, when a company chooses to reduce the amount of financial detail it shares on its key strategic initiatives, that is not a good sign. I think one of the justifications put forward is that [it] is becoming difficult to differentiate between cloud and non-cloud revenues. If that is indeed what Oracle is claiming, I have a hard time buying into that argument. Its competitors are all moving in the opposite direction,” he said.

Indeed, most are. While it’s often hard to tell the exact nature of cloud revenue, the bigger players have been more open about it. For instance, in its most recent earnings report, Microsoft reported that its Azure cloud revenue grew 93 percent. Amazon reported that its AWS cloud revenue was up 49 percent to $5.4 billion, getting very specific about the number.

Further, as you can see from Synergy’s most recent cloud market share and growth numbers from the fourth quarter of last year, Oracle was lumped in with “the Next 10,” not large enough to register on its own.

That Oracle chose not to break out cloud revenue this quarter can’t be seen as a good sign. To be fair, we haven’t really seen Google break out its cloud revenue either, with one exception in February. But when the guys at the top of the market shout about their growth, and the guys further down don’t, you can draw your own conclusions.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, Audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4 megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
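For a concrete sense of what the Lambda side of a DeepLens project can look like, here is a minimal, hypothetical sketch of an inference loop, loosely based on the awscam and greengrasssdk modules that ship on the device. The model path, input size, “ssd” model type and MQTT topic below are illustrative assumptions, not code from one of AWS’s sample projects.

```python
# Hypothetical DeepLens inference Lambda (paths, sizes and topic names are illustrative).
import awscam          # camera and model APIs available on the DeepLens device
import cv2             # used to resize frames to the model's expected input size
import greengrasssdk   # lets the function publish results back to AWS IoT

# Greengrass client for publishing inference results to an MQTT topic.
iot_client = greengrasssdk.client('iot-data')

# Assumed location of an optimized model artifact deployed to the device.
MODEL_PATH = '/opt/awscam/artifacts/my-object-detection-model.xml'
model = awscam.Model(MODEL_PATH, {'GPU': 1})

def infinite_infer_run():
    """Grab frames from the camera, run inference and publish the detections."""
    while True:
        ret, frame = awscam.getLastFrame()
        if not ret:
            continue
        # Resize to the resolution the model expects (assumed 300x300 here).
        resized = cv2.resize(frame, (300, 300))
        # Run the model on the frame and parse the raw output into labeled detections.
        raw_output = model.doInference(resized)
        detections = model.parseResult('ssd', raw_output)
        # Act on the output, e.g. publish the detections to an IoT topic.
        iot_client.publish(topic='deeplens/detections', payload=str(detections))
```

Swapping in your own model, or reacting differently to the output (sending an alert, writing to S3 and so on), is then mostly a matter of editing a function like this and redeploying it through Greengrass.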

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.
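To give a rough idea of what bringing your own model involves, the sketch below exports a pretrained MXNet Gluon network into the symbol and parameter files that this kind of external-model import typically expects. The model choice and file prefix are just examples, and the actual upload step (for instance, pointing the DeepLens console at artifacts stored in S3) is paraphrased rather than shown.

```python
# Illustrative export of a pretrained MXNet model into importable artifacts.
import mxnet as mx
from mxnet.gluon.model_zoo import vision

# Load a small pretrained image classification network as an example.
net = vision.squeezenet1_1(pretrained=True)
net.hybridize()  # record a symbolic graph so the model can be exported

# Run one forward pass with a dummy batch so the graph gets traced.
net(mx.nd.zeros((1, 3, 224, 224)))

# Writes squeezenet-symbol.json and squeezenet-0000.params, the kind of
# artifacts you would then upload (e.g. to S3) and import for the device.
net.export('squeezenet')
```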

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions as hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building these kinds of machine learning-powered applications.

Salesforce deepens data sharing partnership with Google

Last fall at Dreamforce, Salesforce announced a deepening friendship with Google. That began to take shape in January with integration between Salesforce CRM data and Google Analytics 360 and Google BigQuery. Today, the two cloud giants announced the next step, as the companies will share data between Google Analytics 360 and the Salesforce Marketing Cloud.

This particular data sharing partnership makes even more sense as the companies can share web analytics data with marketing personnel to deliver ever more customized experiences for users (or so the argument goes, right?).

That connection certainly didn’t escape Salesforce’s VP of product marketing, Bobby Jania. “Now, marketers are able to deliver meaningful consumer experiences powered by the world’s number one marketing platform and the most widely adopted web analytics suite,” Jania told TechCrunch.

Brent Leary, owner of the consulting firm CRM Essentials says the partnership is going to be meaningful for marketers. “The tighter integration is a big deal because a large portion of Marketing Cloud customers are Google Analytics/GA 360 customers, and this paves the way to more seamlessly see what activities are driving successful outcomes,” he explained.

The partnership involves four integrations that effectively allow marketers to round-trip data between the two platforms. For starters, consumer insights from both Marketing Cloud and Google Analytics 360 will be brought together into a single analytics dashboard inside Marketing Cloud. Conversely, Marketing Cloud data will be viewable inside Google Analytics 360, both for attribution analysis and to deliver more customized web experiences. All three of these integrations will be generally available starting today.

A fourth element of the partnership being announced today won’t be available in Beta until the third quarter of this year. “For the first time ever audiences created inside the Google Analytics 360 platform can be activated outside of Google. So in this case, I’m able to create an audience inside of Google Analytics 360 and then I’m able to activate that audience in Marketing Cloud,” Jania explained.

An audience is like a segment, so if you have a group of like-minded individuals in the Google analytics tool, you can simply transfer it to Salesforce Marketing Cloud and send more relevant emails to that group.

This data sharing capability removes a lot of the labor involved in trying to monitor data stored in two places, but of course it also raises questions about data privacy. Jania was careful to point out that the two platforms are not sharing specific information about individual consumers, which could be in violation of the new GDPR data privacy rules that went into effect in Europe at the end of last month.

“What [we’re sharing] is either metadata or aggregated reporting results. Just to be clear, there’s no personally identifiable data that is flowing between the systems, so everything here is 100% GDPR-compliant,” Jania said.

But Leary says it might not be so simple, especially in light of recent data sharing abuses. “With Facebook having to open up about how they’re sharing consumer data with other organizations, companies like Salesforce and Google will have to be more careful than ever before about how the consumer data they make available to their corporate customers will be used by them. It’s a whole new level of scrutiny that has to be a part of the data sharing equation,” Leary said.

The announcements were made today at the Salesforce Connections conference taking place in Chicago this week.

Why Microsoft wants to put data centers at the bottom of the ocean

Earlier this week, Microsoft announced the second phase of Project Natick, a research experiment that aims to understand the benefits and challenges of deploying large-scale data centers under water. In this second phase, the team sank a tank the size of a shipping container with numerous server racks off the coast of the Orkney Islands and plans to keep it there for a few years to see if this is a viable way of deploying data centers in the future.

Computers and water famously don’t mix, as anyone who has ever spilled a cup of water over a laptop knows, so putting server racks under water sure seems like an odd idea. But as Microsoft Research’s Ben Cutler told me, there are good reasons why the bottom of the ocean may be a good place for setting up servers.

The vast majority of people live within 200 kilometers of the ocean, Cutler noted, and Microsoft’s cloud strategy has long been about putting its data centers close to major population centers. So with large offshore wind farms potentially providing renewable power and the obvious cooling benefits of being under water (and cooling is a major cost factor for data centers), trying an experiment like this makes sense.

“Within Microsoft, we’ve spent an enormous amount of energy and time on cloud — and obviously money,” Cutler explained when I asked him about the genesis of this project. “So we’re always looking for new ways that we can innovate. And this idea sort of gelled originally with one of our employees who worked on a U.S. Navy submarine and knew something about this technology, and that this could maybe be applied to data centers.”

So back in 2013, the team launched phase one and dropped a small pressure vessel with a few servers into the waters of the Pacific Ocean. That experiment worked out pretty well. Even the local sea life seemed to appreciate it. The team found that the water right next to the vessel was only a few thousandths of a degree Celsius warmer than the water a few feet away. The noise, too, was pretty much negligible. “We found that once we were a few meters away from the vessel, we were drowned out by background noise, which is things like snapping shrimp, which is actually the predominant sound of the ocean,” Cutler told me, and stressed that the team’s job is to measure all of this as the ocean is obviously a very sensitive environment. “What we found was that we’re very well received by wildlife and we’re very quickly colonized by crabs and octopus and other things that were in the area.”

For this second phase, the team decided on the location off the coast of Scotland because it’s also home to the European Marine Energy Center, so the infrastructure for powering the vessel from renewable energy from on- and off-shore sources was already in place.

Once the vessel is in the ocean, maintenance is pretty much impossible. The idea here is to accept that things will fail and can’t be replaced. Then, after a few years, the plan is to retrieve the vessel, refurbish it with new machines and deploy it again.

But as part of this experiment, the team also thought about how to best make these servers last as long as possible — and because nobody has to go replace a broken hard drive inside the vessel, the team decided to fill the atmosphere with nitrogen to prevent corrosion, for example. To measure the impact of that experiment, Microsoft also maintains a similar vessel on land so it can compare how well that system fares over time.

Cutler stressed that nothing here is cutting-edge technology. There are no exotic servers here and both underwater cabling and building vessels like this are well understood at this point.

Over time, Cutler envisions a factory that can prefabricate these vessels and ship them to where they are needed. That’s why the vessel is about the size of a shipping container and the team actually had it fabricated in France, loaded it on a truck and shipped it to England to test this logistics chain.

Whether that comes to pass remains to be seen, of course. The team is studying the economics of Natick for the time being, and then it’s up to Microsoft’s Azure team to take this out of the research labs and put it into more widespread production. “Our goal here is to drive this to a point where we understand that the economics make sense and that it has the characteristics that we wanted it to, and then it becomes a tool for that product group to decide whether and where to use it,” said Cutler.

Workday acquires Rallyteam to fuel machine learning efforts

Sometimes you acquire a company for the assets and sometimes you do it for the talent. Today Workday announced it was buying Rallyteam, a San Francisco startup that helps companies keep talented employees by matching them with more challenging opportunities in-house.

The companies did not share the purchase price or the number of Rallyteam employees who would be joining Workday.

In this case, Workday appears to be acquiring the talent. It wants to take the Rallyteam team and incorporate it into the company’s engineering unit to beef up its machine learning efforts, while taking advantage of the expertise it has built up over the years connecting employees with interesting internal projects.

“With Rallyteam, we gain incredible team members who created a talent mobility platform that uses machine learning to help companies better understand and optimize their workforces by matching a worker’s interests, skills and connections with relevant jobs, projects, tasks and people,” Workday’s Cristina Goldt wrote in a blog post announcing the acquisition.

Rallyteam, which was founded in 2013, and launched at TechCrunch Disrupt San Francisco in September 2014, helps employees find interesting internal projects that might otherwise get outsourced. “I knew there were opportunities that existed [internally] because as a manager, I was constantly outsourcing projects even though I knew there had to be people in the company that could solve this problem,” Rallyteam’s Huan Ho told TechCrunch’s Frederic Lardinois at the launch. Rallyteam was a service designed to solve this issue.


Last fall the company raised $8.6 million in a round led by Norwest Ventures, with participation from Storm Ventures, Cornerstone OnDemand and Wilson Sonsini.

Workday provides a SaaS platform for human resources and finance, so the Rallyteam approach fits nicely within the scope of the Workday business. This is the 10th acquisition for Workday and the second this year.

Chart: Crunchbase

Workday raised over $230 million before going public in 2012.

Devo scores $25 million and a cool new name

Logtrust is now known as Devo in one of the cooler name changes I’ve seen in a long time. Whether the company intended to pay homage to the late-’70s band is not clear, but investors probably didn’t care, as they gave the data operations startup a bushel of money today.

The company now known as Devo announced a $25 million Series C round led by Insight Venture Partners with participation from Kibo Ventures. Today’s investment brings the total raised to $71 million.

The company changed its name because it was about much more than logs, according to CEO Walter Scott. It offers a cloud service that allows customers to stream massive amounts of data — think terabytes or even petabytes — relieving them of the need to worry about the scaling and hardware that processing that much data would require. The data could come from web server logs, security data from firewalls or transactions taking place on backend systems, to name a few examples.

The data can live on prem if required, but the processing always gets done in the cloud to provide for the scaling needs. Scott says this is about giving companies the ability to process and understand massive amounts of data that previously was only within reach of web-scale companies like Google, Facebook or Amazon.

But it involves more than simply collecting the data. “It’s the combination of us being able to collect all of that data together with running analytics on top of it all in a unified platform, then allowing a very broad spectrum of the business [to make use of it],” Scott explained.

Devo dashboard. Photo: Devo

Devo sees Sumo Logic, Elastic and Splunk as its primary competitors in this space, but like many startups, it also often finds itself up against companies trying to build their own systems, a difficult approach for any company when you are dealing with this amount of data.

The company, which was founded in Spain, is now based in Cambridge, Massachusetts, and has close to 100 employees. Scott says he has the budget to double that by the end of the year, although he’s not sure they will be able to hire that many people that rapidly.

SAP gives CRM another shot with new cloud-based suite

Customer Relationship Management (CRM) is a mature market with a clear leader in Salesforce and a bunch of other enterprise players like Microsoft, Oracle and SAP vying for position. SAP decided to take another shot today when it released a new business products suite called SAP C/4HANA. (Ya, catchy, I know.)

SAP C/4HANA pulls together several acquisitions from the last several years. It started in 2013 when SAP bought Hybris for around a billion dollars, which gave it the commerce piece. Then last year it got Gigya for $350 million, giving it a way to track customer identity. This year it bought the final piece when it paid $2.4 billion for CallidusCloud, adding configure, price, quote (CPQ) capabilities.

SAP has taken these three pieces and packaged them together into a customer relationship management suite. They define CRM much more broadly than simply tracking a database of names and vital information on customers. With these products, they hope to give their customers a way to handle consumer data protection, marketing, commerce, sales and customer service.

They see this approach as different, but it’s really more of what the other players are doing by packaging sales, service and marketing into a single platform. “The legacy CRM systems are all about sales; SAP C/4HANA is all about the consumer. We recognize that every part of a business needs to be focused on a single view of the consumer. When you connect all SAP applications together in an intelligent cloud suite, the demand chain directly fuels the behaviors of the supply chain,” CEO Bill McDermott said in a statement.

It’s interesting that McDermott goes after legacy CRM tools because his company has offered its share of them over the years, but its market share has been headed in the wrong direction. This new cloud-based package is designed to change that. If you can’t build it, you can buy it, and that’s what SAP has done here.

Brent Leary, owner at CRM Essentials, who has been watching this market for many years says that while SAP has a big back-office customer base in ERP, it’s going to be tough to pull customers back to SAP as a CRM provider. “I think their huge base of ERP customers provides them with an opportunity to begin making inroads, but it will be tough as mindshare for CRM/Customer Engagement has moved away from SAP,” he told TechCrunch.

He says that it will be important for this new product to find its niche in a defined market. “It will be imperative going forward for SAP [to] find spots to ‘own’ in the minds of corporate buyers in order to optimize their chances of success against their main competitors,” he said.

It’s obviously not going to be easy, but SAP has used its cash to buy some companies and give it another shot. Time will tell if it was money well spent.

How Yelp (mostly) shut down its own data centers and moved to AWS

Back in 2013, Yelp was a 9-year-old company built on a set of internal systems. It was coming to the realization that running its own data centers might not be the most efficient way to run a business that was continuing to scale rapidly. At the same time, the company understood that the tech world had changed dramatically since its 2004 launch and that it needed to transform the underlying technology to a more modern approach.

That’s a lot to take on in one bite, but it wasn’t something that happened willy-nilly or overnight, says Jason Yellen, SVP of engineering at Yelp. The vast majority of the company’s data was being processed in a massive Python repository that was getting bigger all the time. The conversation about shifting to a microservices architecture began in 2012.

The company was also running the massive Yelp application inside its own data centers, and as it grew it was increasingly limited by the long lead times required to procure new hardware and get it online. It saw that this was an unsustainable situation over the long term and began a process of transforming from running a huge monolithic application on-premises to one built on microservices running in the cloud. It was quite a journey.

The data center conundrum

Yellen described the classic scenario of a company that could benefit from a shift to the cloud. Yelp had a small operations team dedicated to setting up new machines. When engineering anticipated a new resource requirement, they had to give the operations team sufficient lead time to order new servers and get them up and running, certainly not the most efficient way to deal with a resource problem, and one that would have been easily solved by the cloud.

“We kept running into a bottleneck, I was running a chunk of the search team [at the time] and I had to project capacity out to 6-9 months. Then it would take a few months to order machines and another few months to set them up,” Yellen explained. He emphasized that the team charged with getting these machines going was working hard, but there were too few people and too many demands and something had to give.

“We were on this cusp. We could have scaled up that team dramatically and gotten [better] at building data centers and buying servers and doing that really fast, but we were hearing a lot [about] AWS and the advantages there,” Yellen explained.

To the cloud!

They looked at the cloud market landscape in 2013 and AWS was the clear leader technologically. That meant moving some part of their operations to EC2. Unfortunately, that exposed a new problem: how to manage this new infrastructure in the cloud. This was before the notion of cloud-native computing even existed. There was no Kubernetes. Sure, Google was operating in a cloud-native fashion in-house, but it was not really an option for most companies without a huge team of engineers.

Yelp needed to explore new ways of managing operations in a hybrid cloud environment where some of the applications and data lived in the cloud and some lived in their data center. It was not an easy problem to solve in 2013 and Yelp had to be creative to make it work.

That meant remaining with one foot in the public cloud and the other in a private data center. One tool that helped ease the transition was AWS Direct Connect, which was released the prior year and enabled Yelp to directly connect from their data center to the cloud.

Laying the groundwork

About this time, as they were figuring out how AWS works, another revolutionary technological change was occurring when Docker emerged and began mainstreaming the notion of containerization. “That’s another thing that’s been revolutionary. We could suddenly decouple the context of the running program from the machine it’s running on. Docker gives you this container, and is much lighter weight than virtualization and running full operating systems on a machine,” Yellen explained.

Another thing that was happening was the emergence of the open source data center operating system called Mesos, which offered a way to treat the data center as a single pool of resources. They could apply this notion to wherever the data and applications lived. Mesos also offered a container orchestration tool called Marathon in the days before Kubernetes emerged as a popular way of dealing with this same issue.

“We liked Mesos as a resource allocation framework. It abstracted away the fleet of machines. Mesos abstracts many machines and controls programs across them. Marathon holds guarantees about what containers are running where. We could stitch it all together into this clear opinionated interface,” he said.
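To make that division of labor concrete, here is a generic sketch, not Yelp’s actual PaaSTA code, of how a containerized service is typically handed to Marathon: you describe the Docker image and how many copies you want, POST that to Marathon’s REST API, and Marathon keeps that many instances running somewhere on the Mesos cluster. The hostname, image and resource numbers below are illustrative.

```python
# Illustrative Marathon app definition submitted over its REST API
# (hostname, image and resource numbers are made up for the example).
import requests

app = {
    "id": "/example/web",          # logical name of the service
    "cpus": 0.5,                   # CPU share requested per instance
    "mem": 256,                    # memory (MB) requested per instance
    "instances": 3,                # Marathon keeps three copies running
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "nginx:1.15",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 80, "hostPort": 0}],
        },
    },
}

# Marathon schedules the containers onto Mesos agents and restarts them if they die.
resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
resp.raise_for_status()
print("Deployed", resp.json()["id"])
```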

Pulling it all together

While all this was happening, Yelp began exploring how to move to the cloud and use a Platform as a Service approach for the software layer. The problem was that, at the time they started, there wasn’t really any viable way to do this. In the buy-versus-build decision-making that goes on in large transformations like this one, they felt they had little choice but to build that platform layer themselves.

In late 2013 they began to pull together the idea of building this platform on top of Mesos and Docker, giving it the name PaaSTA, an internal joke that stood for Platform as a Service, Totally Awesome. It became simply known as Pasta.

Photo: David Silverman/Getty Images

The project had the ambitious goal of making their infrastructure work as a single fabric, in a cloud-native fashion, before most anyone outside of Google was using that term. Pasta developed slowly, with the first developer piece coming online in August 2014 and the first production service later that year, in December. The company actually open-sourced the technology the following year.

“Pasta gave us the interface between the applications and development teams. Operations had to make sure Pasta is up and running, while Development was responsible for implementing containers that implemented the interface,” Yellen said.

Moving deeper into the public cloud

While Yelp was busy building these internal systems, AWS wasn’t sitting still. It was also improving its offerings with new instance types, new functionality and better APIs and tooling. Yellen reports this helped immensely as Yelp began a more complete move to the cloud.

He says there were a couple of tipping points as they moved more and more of the application to AWS — including eventually, the master database. This all happened in more recent years as they understood better how to use Pasta to control the processes wherever they lived. What’s more, he said that adoption of other AWS services was now possible due to tighter integration between the in-house data centers and AWS.

Photo: erhui1979/Getty Images

The first tipping point came around 2016 as all new services were configured for the cloud. He said they began to get much better at managing applications and infrastructure in AWS and their thinking shifted from how to migrate to AWS to how to operate and manage it.

Perhaps the biggest step in this years-long transformation came last summer when Yelp moved its master database from its own data center to AWS. “This was the last thing we needed to move over. Otherwise it’s clean up. As of 2018, we are serving zero production traffic through physical data centers,” he said. While they still have two data centers, they are getting to the point where they have only the minimum hardware required to run the network backbone.

Yellen said that getting a service up and running used to take anywhere from two weeks to a month; now, with everything in place, it takes just a couple of minutes. He says any loss of control from moving to the cloud has been easily offset by the convenience of using cloud infrastructure. “We get to focus on the things where we add value,” he said — and that’s the goal of every company.

OpenStack in transition

OpenStack is one of the most important and complex open-source projects you’ve never heard of. It’s a set of tools that allows large enterprises ranging from Comcast and PayPal to stock exchanges and telecom providers to run their own AWS-like cloud services inside their data centers. Only a few years ago, there was a lot of hype around OpenStack as the project went through the usual hype cycle. Now, we’re talking about a stable project that many of the most valuable companies on earth rely on. But this also means the ecosystem around it — and the foundation that shepherds it — is now trying to transition to this next phase.

The OpenStack project was founded by Rackspace and NASA in 2010. Two years later, the growing project moved into the OpenStack Foundation, a nonprofit group that set out to promote the project and help manage the community. When it was founded, OpenStack still had a few competitors, like CloudStack and Eucalyptus. OpenStack, thanks to the backing of major companies and its fast-growing community, quickly became the only game in town, though. With that, community events like the OpenStack Summit started to draw thousands of developers, and with each of its semi-annual releases, the number of contributors to the project has increased.

Now, that growth in contributors has slowed, as evidenced by the attendance at this week’s Summit in Vancouver.

In the early days, there were also plenty of startups in the ecosystem — and the VC money followed them, together with some of the most lavish conference parties (or “bullshit,” as Canonical founder Mark Shuttleworth called it) that I have experienced. The OpenStack market didn’t materialize quite as fast as many had hoped, though, so some of the early players went out of business, some shut down their OpenStack units and others sold to the remaining players. Today, only a few of the early players remain standing, and the top players are now the likes of Red Hat, Canonical and Rackspace.

And to complicate matters, all of this is happening in the shadow of the Cloud Native Computing Foundation (CNCF) and the Kubernetes project it manages, which are still in the early stages of the hype cycle.

Meanwhile, the OpenStack Foundation itself is in the middle of its own transition as it looks to bring on other open-source infrastructure projects that are complementary to its overall mission of making open-source infrastructure easier to build and consume.

Unsurprisingly, all of this clouded the mood at the OpenStack Summit this week, but I’m actually not part of the doom and gloom contingent. In my view, what we are seeing here is a mature open-source project that has gone through its ups and downs and now, with all of the froth skimmed off, it’s a tool that provides a critical piece of infrastructure for businesses. Canonical’s Mark Shuttleworth, who created his own bit of drama during his keynote by directly attacking his competitors like Red Hat, told me that low attendance at the conference may not be a bad thing, for example, since the people who are actually in attendance are now just trying to figure out what OpenStack is all about and are all potential customers.

Others echoed a similar sentiment. “I think some of it goes with, to some extent, what’s been building over the last couple of Summits,” Bryan Thompson, Rackspace’s senior director and general manager for OpenStack, said as he summed up what I heard from a number of other vendors at the event. “That is: Is OpenStack dead? Is this going away? Or is everything just leapfrogging and going straight to Kubernetes on bare metal? And I don’t want to phrase it as ‘it’s a good thing,’ because I think it’s a challenge for the foundation and for the community. But I think it’s actually a positive thing because the core OpenStack services — the core projects — have just matured. We’re not in the early science experiment days of trying to push ahead and scale and grow the core projects, they were actually achieved and people are actually using it.”

That current state produces fewer flashy headlines, but every survey, both from the Foundation itself and from third-party analysts, shows that the number of users — and their OpenStack clouds — continues to grow. Meanwhile, the Foundation is looking to bring up attendance at its events, too, by adding container and CI/CD tracks, for example.

The company that maybe best exemplifies the ups and downs of OpenStack is Mirantis, a well-funded startup that has weathered the storm by reinventing itself multiple times. Mirantis started as one of the first OpenStack distributions and contributors to the project. During those early days, it raised one of the largest funding rounds in the OpenStack world with a $100 million Series B round, which was quickly followed by another $100 million round in 2015. But by early 2017, Mirantis had pivoted from being a distribution and toward offering managed services for open-source platforms. It also made an early bet on Kubernetes and offered services for that, too. And then this year, it added yet another twist to its corporate story by refocusing its efforts on the Netflix-incubated Spinnaker open-source tool and helping companies build their CI/CD pipelines based on that. In the process, the company shrunk from almost 1,000 employees to 450 today, but as Mirantis CEO and co-founder Boris Renski told me, it’s now cash-flow positive.

So just as the OpenStack Foundation is moving toward CI/CD with its Zuul tool, Mirantis is betting on Spinnaker, which solves some of the same issues, but with an emphasis on integrating multiple code repositories. Renski, it’s worth noting, actually advocated for bringing Spinnaker into the OpenStack foundation (it’s currently managed on a more ad hoc basis by Netflix and Google).

“We need some governance, we need some process,” Renski said. “The [OpenStack] Foundation is known for actually being very good and effectively seeding this kind of formalized, automated and documented governance in open source and the two should work together much closer. I think that Spinnaker should become part of the Foundation. That’s the opportunity and I think it should focus 150 percent of their energy on that before it builds its own thing and before [Spinnaker] goes off to the CNCF as yet another project.”

So what does the Foundation think about all of this? In talking to OpenStack CTO Mark Collier and Executive Director Jonathan Bryce over the last few months, it’s clear that the Foundation knows that change is needed. That process started with opening up the Foundation to other projects, making it more akin to the Linux Foundation, where Linux remains in the name as its flagship project, but where a lot of the energy now comes from projects it helps manage, including the likes of the CNCF and Cloud Foundry. At the Sydney Summit last year, the team told me that part of the mission now is to retask the large OpenStack community to work on these new topics around open infrastructure. This week, that message became clearer.

“Our mission is all about making it easier for people to build and operate open infrastructure,” Bryce told me this week. “And open infrastructure is about operating functioning services based off of open source tool. So open source is not enough. And we’ve been, you know, I think, very, very oriented around a set of open source projects. But in the seven years since we launched, what we’ve seen is people have taken those projects, they’ve turned it into services that are running and then they piled a bunch of other stuff on top of it — and that becomes really difficult to maintain and manage over the long term.” So now, going forward, that part about maintaining these clouds is becoming increasingly important for the project.

“Open source is not enough,” is an interesting phrase here, because that’s really at the core of the issue at hand. “The best thing about open source is that there’s more of it than ever,” said Bryce. “And it’s also the worst thing. Because the way that most open source communities work is that it’s almost like having silos of developers inside of a company — and then not having them talk to each other, not having them test together, and then expecting to have a coherent, easy to use product come out at the end of the day.”

And Bryce also stressed that projects like OpenStack can’t be only about code. Moving to a cloud-native development model, whether that’s with Kubernetes on top of OpenStack or some other model, is about more than just changing how you release software. It’s also about culture.

“We realized that this was an aspect of the foundation that we were under-prioritizing,” said Bryce. “We focused a lot on the OpenStack projects and the upstream work and all those kinds of things. And we also built an operator community, but I think that thinking about it in broader terms led us to a realization that we had last year. It’s not just about OpenStack. The things that we have done to make OpenStack more usable apply broadly to these businesses [that use it], because there isn’t a single one that’s only running OpenStack. There’s not a single one of them.”

More and more, the other thing they run, besides their legacy VMware stacks, is containers, and specifically containers managed with Kubernetes. And while the OpenStack community first saw containers as a bit of a threat, the Foundation is now looking at more ways to bring those communities together, too.

What about the flagging attendance at the OpenStack events? Bryce and Collier echoed what many of the vendors also noted. “In the past, we had something like 7,000 developers — something insane — but the bulk of the code comes down to about 200 or 300 developers,” said Bryce. Even the somewhat diminished commercial ecosystem doesn’t strike Bryce and Collier as too much of an issue, in part because the Foundation’s finances are closely tied to its membership. And while IBM dropped out as a project sponsor, Tencent took its place.

“There’s the ecosystem side in terms of who’s making a product and selling it to people,” Collier acknowledged. “But for whom is this so critical to their business results that they are going to invest in it. So there’s two sides to that, but in terms of who’s investing in OpenStack and the Foundation and making all the software better, I feel like we’re in a really good place.” He also noted that the Foundation is seeing lots of investment in China right now, so while other regions may be slowing down, others are picking up the slack.

So here is an open-source project in transition — one that has passed through the trough of disillusionment and hit the plateau of productivity, but that is now looking for its next mission. Bryce and Collier admit that they don’t have all the answers, but if there’s one thing that’s clear, it’s that both the OpenStack project and foundation are far from dead.

InVision design tool Studio gets an app store, asset store

InVision, the startup that wants to be the operating system for designers, today introduced its app store and asset store within InVision Studio. In short, InVision Studio users now have access to some of their most-used apps and services from right within the Studio design tool. Plus, those same users will be able to shop for icons, UX/UI components, typefaces and more from within Studio.

While Studio is still in its early days, InVision has compiled a solid list of initial app store partners, including Google, Salesforce, Slack, Getty, Atlassian, and more.

InVision first launched as a collaboration tool for designers, letting designers upload prototypes into the cloud so that other members of the organization could leave feedback before engineers set the design in stone. Since that launch in 2011, InVision has grown to 4 million users, capturing 80 percent of the Fortune 100 and raising a total of $235 million in funding.

While collaboration is the bread and butter of InVision’s business, and the company’s only revenue stream, CEO and founder Clark Valberg feels that it isn’t enough to be complementary to the current design tool ecosystem. That’s why InVision launched Studio in late 2017, hoping to take on Adobe and Sketch head-on with its own design tool.

Studio differentiates itself by focusing on the designer’s real-life workflow, which often involves mocking up designs in one app, pulling assets from another, working on animations and transitions in another, and then stitching the whole thing together to share for collaboration across InVision Cloud. Studio aims to bring all those various services into a single product, and a critical piece of that mission is building out an app store and asset store with the services too sticky for InVision to rebuild from scratch, such as Slack or Atlassian.

With the InVision app store, Studio users can search Getty from within their design and preview various Getty images without ever leaving the app. They can then share that design via Slack or send it off to engineers within Atlassian, or push it straight to UserTesting.com to get real-time feedback from real people.

InVision Studio launched with the ability to upload an organization’s design system (typefaces, icons, logos and hex codes) directly into Studio, ensuring that designers have easy access to all the assets they need. Now InVision is taking that a step further with the launch of the asset store, letting designers sell their own assets to the greater designer ecosystem.

“Our next big move is to truly become the operating system for product design,” said Valberg. “We want to be to designers what Atlassian is for engineers, what Salesforce is to sales. We’ve worked to become a full-stack company, and now that we’re managing that entire stack it has liberated us from being complementary products to our competitors. We are now a standalone product in that respect.”

Since its launch, Studio has grown to more than 250,000 users. The company says that Studio is still in Early Access, though it’s available to everyone here.