Salesforce deepens data sharing partnership with Google

Last fall at Dreamforce, Salesforce announced a deepening friendship with Google. That began to take shape in January with integration between Salesforce CRM data and Google Analytics 360 and Google BigQuery. Today, the two cloud giants announced the next step: the companies will share data between Google Analytics 360 and the Salesforce Marketing Cloud.

This particular data sharing partnership makes even more sense as the companies can share web analytics data with marketing personnel to deliver ever more customized experiences for users (or so the argument goes, right?).

That connection certainly didn’t escape Salesforce’s VP of product marketing, Bobby Jania. “Now, marketers are able to deliver meaningful consumer experiences powered by the world’s number one marketing platform and the most widely adopted web analytics suite,” Jania told TechCrunch.

Brent Leary, owner of the consulting firm CRM Essentials, says the partnership is going to be meaningful for marketers. “The tighter integration is a big deal because a large portion of Marketing Cloud customers are Google Analytics/GA 360 customers, and this paves the way to more seamlessly see what activities are driving successful outcomes,” he explained.

The partnership involves four integrations that effectively allow marketers to round-trip data between the two platforms. For starters, consumer insights from both Marketing Cloud and Google Analytics 360 will be brought together into a single analytics dashboard inside Marketing Cloud. Conversely, Marketing Cloud data will be viewable inside Google Analytics 360, both for attribution analysis and for delivering more customized web experiences. All three of these integrations will be generally available starting today.

A fourth element of the partnership being announced today won’t be available in beta until the third quarter of this year. “For the first time ever, audiences created inside the Google Analytics 360 platform can be activated outside of Google. So in this case, I’m able to create an audience inside of Google Analytics 360 and then I’m able to activate that audience in Marketing Cloud,” Jania explained.

An audience is like a segment, so if you have a group of like-minded individuals in the Google Analytics tool, you can simply transfer it to Salesforce Marketing Cloud and send more relevant emails to that group.

This data sharing capability removes a lot of the labor involved in trying to monitor data stored in two places, but of course it also raises questions about data privacy. Jania was careful to point out that the two platforms are not sharing specific information about individual consumers, which could be in violation of the new GDPR data privacy rules that went into effect in Europe at the end of last month.

“What [we’re sharing] is either metadata or aggregated reporting results. Just to be clear, there’s no personally identifiable data that is flowing between the systems, so everything here is 100% GDPR-compliant,” Jania said.

But Leary says it might not be so simple, especially in light of recent data sharing abuses. “With Facebook having to open up about how they’re sharing consumer data with other organizations, companies like Salesforce and Google will have to be more careful than ever before about how the consumer data they make available to their corporate customers will be used by them. It’s a whole new level of scrutiny that has to be a part of the data sharing equation,” Leary said.

The announcements were made today at the Salesforce Connections conference taking place in Chicago this week.

SoftBank Vision Fund leads $250M Series D for Cohesity’s hyperconverged data platform

San Jose-based Cohesity has closed an oversubscribed $250M Series D funding round led by SoftBank’s Vision Fund, bringing its total raised to date to $410M. The enterprise software company offers a hyperconverged data platform for storing and managing all the secondary data created outside of production apps.

In a press release today, Cohesity notes this is only the second time SoftBank’s gigantic Vision Fund has invested in an enterprise software company. The fund, which is almost $100BN in size (without factoring in all the planned sequels), also led an investment in enterprise messaging company Slack back in September 2017 (also a $250M round).

“Cohesity pioneered hyperconverged secondary storage as a first stepping stone on the path to a much larger transformation of enterprise infrastructure spanning public and private clouds. We believe that Cohesity’s web-scale, Google-like approach, cloud-native architecture, and incredible simplicity is changing the business of IT in a fundamental way,” said Deep Nishar, senior managing partner at SoftBank Investment Advisers, in a supporting statement.

Also participating in the financing are Cohesity’s existing strategic investors Cisco Investments, Hewlett Packard Enterprise (HPE), and Morgan Stanley Expansion Capital, along with early investor Sequoia Capital and others.

The company says the investment will be put towards “large-scale global expansion” by selling more enterprises on the claimed cost and operational savings from consolidating multiple separate point solutions onto its hyperconverged platform. On the customer acquisition front, it flags up support from its strategic investors, Cisco and HPE, to help it reach more enterprises.

Cohesity says it’s onboarded more than 200 new enterprise customers in the last two quarters — including Air Bud Entertainment, AutoNation, BC Oil and Gas Commission, Bungie, Harris Teeter, Hyatt, Kelly Services, LendingClub, Piedmont Healthcare, Schneider Electric, the San Francisco Giants, TCF Bank, the U.S. Department of Energy, the U.S. Air Force, and WestLotto — and says annual revenues grew 600% between 2016 and 2017.

In another supporting statement, CEO and founder Mohit Aron, added: “My vision has always been to provide enterprises with cloud-like simplicity for their many fragmented applications and data — backup, test and development, analytics, and more.

“Cohesity has built significant momentum and market share during the last 12 months and we are just getting started.”

Workday acquires Rallyteam to fuel machine learning efforts

Sometimes you acquire a company for the assets and sometimes you do it for the talent. Today Workday announced it was buying Rallyteam, a San Francisco startup that helps companies keep talented employees by matching them with more challenging opportunities in-house.

The companies did not share the purchase price or the number of Rallyteam employees who would be joining Workday.

In this case, Workday appears to be acquiring the talent. It wants to take the Rallyteam team and incorporate it into the company’s engineering unit to beef up its machine learning efforts, while taking advantage of the expertise it has built up over the years connecting employees with interesting internal projects.

“With Rallyteam, we gain incredible team members who created a talent mobility platform that uses machine learning to help companies better understand and optimize their workforces by matching a worker’s interests, skills and connections with relevant jobs, projects, tasks and people,” Workday’s Cristina Goldt wrote in a blog post announcing the acquisition.

Rallyteam, which was founded in 2013, and launched at TechCrunch Disrupt San Francisco in September 2014, helps employees find interesting internal projects that might otherwise get outsourced. “I knew there were opportunities that existed [internally] because as a manager, I was constantly outsourcing projects even though I knew there had to be people in the company that could solve this problem,” Rallyteam’s Huan Ho told TechCrunch’s Frederic Lardinois at the launch. Rallyteam was a service designed to solve this issue.


Last fall the company raised $8.6 million led by Norwest Ventures with participation from Storm Ventures, Cornerstone OnDemand and Wilson Sonsini.

Workday provides a SaaS platform for human resources and finance, so the Rallyteam approach fits nicely within the scope of the Workday business. This is the 10th acquisition for Workday and the second this year.


Workday raised over $230 million before going public in 2012.

Devo scores $25 million and a cool new name

Logtrust is now known as Devo in one of the cooler name changes I’ve seen in a long time. Whether they intended to pay homage to the late 70s band is not clear, but investors probably didn’t care, as they gave the data operations startup a bushel of money today.

The company now known as Devo announced a $25 million Series C round led by Insight Venture Partners with participation from Kibo Ventures. Today’s investment brings the total raised to $71 million.

The company changed its name because it was about much more than logs, according to CEO Walter Scott. It offers a cloud service that allows customers to stream massive amounts of data — think terabytes or even petabytes — without having to worry about the scaling and hardware requirements that processing that much data would normally demand. The data could come from web server logs, security data from firewalls or transactions taking place on backend systems, to name a few examples.

The data can live on premises if required, but the processing always gets done in the cloud to handle the scaling needs. Scott says this is about giving companies the ability to process and understand massive amounts of data that previously was only within reach of web-scale companies like Google, Facebook or Amazon.
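To make the pattern concrete, here is a minimal, purely illustrative Python sketch of the kind of pipeline Scott describes: a small shipper tails a web server log on premises and forwards batches to a cloud ingestion endpoint for processing. The endpoint URL, token and field names are hypothetical stand-ins, not Devo’s actual API.

```python
import time

import requests  # generic HTTP client; the real service's SDK or protocol may differ

INGEST_URL = "https://ingest.example-cloud-service.com/v1/events"  # hypothetical endpoint
API_TOKEN = "YOUR_TOKEN"                                           # hypothetical credential
BATCH_SIZE = 500

def tail(path):
    """Yield new lines appended to a log file, roughly like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.5)

def ship(batch):
    """Send one batch of log lines to the cloud service, which handles the scaling."""
    requests.post(
        INGEST_URL,
        json={"source": "web-server", "events": batch},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )

batch = []
for line in tail("/var/log/nginx/access.log"):
    batch.append(line)
    if len(batch) >= BATCH_SIZE:
        ship(batch)
        batch = []
```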

But it involves more than simply collecting the data. “It’s the combination of us being able to collect all of that data together with running analytics on top of it all in a unified platform, then allowing a very broad spectrum of the business [to make use of it],” Scott explained.

Devo dashboard. Photo: Devo

Devo sees Sumo Logic, Elastic and Splunk as its primary competitors in this space, but like many startups, it also often battles companies trying to build their own systems, a difficult approach for any company when you are dealing with this amount of data.

The company, which was founded in Spain, is now based in Cambridge, Massachusetts, and has close to 100 employees. Scott says he has the budget to double that by the end of the year, although he’s not sure they will be able to hire that many people that rapidly.

SAP gives CRM another shot with new cloud-based suite

Customer Relationship Management (CRM) is a mature market with a clear leader in Salesforce and a bunch of other enterprise players like Microsoft, Oracle and SAP vying for position. SAP decided to take another shot today when it released a new business products suite called SAP C/4HANA. (Ya, catchy, I know.)

SAP C/4HANA pulls together several acquisitions from the last several years. The process started in 2013 when SAP bought Hybris for around a billion dollars, giving it an e-commerce piece. Then last year it got Gigya for $350 million, giving it a way to track customer identity. This year it bought the final piece, paying $2.4 billion for CallidusCloud and its configure, price, quote (CPQ) capability.

SAP has taken these three pieces and packaged them together into a customer relationship management suite. The company sees the term much more broadly than simply tracking a database of names and vital information on customers. With these products, it hopes to give customers a way to handle consumer data protection, marketing, commerce, sales and customer service.

SAP sees this approach as different, but it’s really more of what the other players are already doing by packaging sales, service and marketing into a single platform. “The legacy CRM systems are all about sales; SAP C/4HANA is all about the consumer. We recognize that every part of a business needs to be focused on a single view of the consumer. When you connect all SAP applications together in an intelligent cloud suite, the demand chain directly fuels the behaviors of the supply chain,” CEO Bill McDermott said in a statement.

It’s interesting that McDermott goes after legacy CRM tools because his company has offered its share of them over the years, but its market share has been headed in the wrong direction. This new cloud-based package is designed to change that. If you can’t build it, you can buy it, and that’s what SAP has done here.

Brent Leary, owner at CRM Essentials, who has been watching this market for many years, says that while SAP has a big back-office customer base in ERP, it’s going to be tough to pull customers back to SAP as a CRM provider. “I think their huge base of ERP customers provides them with an opportunity to begin making inroads, but it will be tough as mindshare for CRM/Customer Engagement has moved away from SAP,” he told TechCrunch.

He says that it will be important for this new product to find its niche in a defined market. “It will be imperative going forward for SAP [to] find spots to ‘own’ in the minds of corporate buyers in order to optimize their chances of success against their main competitors,” he said.

It’s obviously not going to be easy, but SAP has used its cash to buy some companies and give it another shot. Time will tell if it was money well spent.

How Yelp (mostly) shut down its own data centers and moved to AWS

Back in 2013, Yelp was a nine-year-old company built on a set of internal systems. It was coming to the realization that running its own data centers might not be the most efficient way to run a business that was continuing to scale rapidly. At the same time, the company understood that the tech world had changed dramatically since its 2004 launch, and it needed to transform the underlying technology to a more modern approach.

That’s a lot to take on in one bite, but it wasn’t something that happened willy-nilly or overnight, says Jason Yellen, SVP of engineering at Yelp. The vast majority of the company’s data was being processed in a massive Python repository that was getting bigger all the time. The conversation about shifting to a microservices architecture began in 2012.

The company was also running the massive Yelp application inside its own data centers, and as it grew, it was increasingly limited by the long lead times required to procure new hardware and get it online. Yelp saw this was an unsustainable situation over the long term and began a process of transforming from running a huge monolithic application on-premises to one built on microservices running in the cloud. It was quite a journey.

The data center conundrum

Yellen described the classic scenario of a company that could benefit from a shift to the cloud. Yelp had a small operations team dedicated to setting up new machines. When engineering anticipated a new resource requirement, they had to give the operations team sufficient lead time to order new servers and get them up and running, certainly not the most efficient way to deal with a resource problem, and one that would have been easily solved by the cloud.

“We kept running into a bottleneck, I was running a chunk of the search team [at the time] and I had to project capacity out to 6-9 months. Then it would take a few months to order machines and another few months to set them up,” Yellen explained. He emphasized that the team charged with getting these machines going was working hard, but there were too few people and too many demands and something had to give.

“We were on this cusp. We could have scaled up that team dramatically and gotten [better] at building data centers and buying servers and doing that really fast, but we were hearing a lot about AWS and the advantages there,” Yellen explained.

To the cloud!

Yelp looked at the cloud market landscape in 2013, and AWS was the clear leader technologically. That meant moving some part of its operations to EC2. Unfortunately, that exposed a new problem: how to manage this new infrastructure in the cloud. This was before the notion of cloud-native computing even existed. There was no Kubernetes. Sure, Google was operating in a cloud-native fashion in-house, but that was not really an option for most companies without a huge team of engineers.

Yelp needed to explore new ways of managing operations in a hybrid cloud environment where some of the applications and data lived in the cloud and some lived in their data center. It was not an easy problem to solve in 2013 and Yelp had to be creative to make it work.

That meant remaining with one foot in the public cloud and the other in a private data center. One tool that helped ease the transition was AWS Direct Connect, which was released the prior year and enabled Yelp to connect directly from its data center to the cloud.

Laying the groundwork

About this time, as Yelp was figuring out how AWS worked, another revolutionary technological change was occurring: Docker emerged and began mainstreaming the notion of containerization. “That’s another thing that’s been revolutionary. We could suddenly decouple the context of the running program from the machine it’s running on. Docker gives you this container, and is much lighter weight than virtualization and running full operating systems on a machine,” Yellen explained.
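As a rough illustration of that decoupling (not Yelp’s actual setup), here is a minimal sketch using Docker’s Python SDK: the same public image runs the same way on a laptop, in a data center or on an EC2 instance, because the container carries its dependencies with it.

```python
import docker  # Docker SDK for Python (pip install docker); assumes a local Docker daemon

client = docker.from_env()

# Run a throwaway container from a public base image. The host only needs
# Docker installed; everything else ships inside the image itself.
container = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from an isolated container')"],
    detach=True,
)

container.wait()                  # block until the process inside exits
print(container.logs().decode())  # -> hello from an isolated container
container.remove()                # clean up the stopped container
```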

Another thing that was happening was the emergence of Mesos, an open-source data center operating system that offered a way to treat the data center as a single pool of resources. Yelp could apply this notion to wherever its data and applications lived. Mesos also offered a container orchestration tool called Marathon, in the days before Kubernetes emerged as a popular way of dealing with this same issue.

“We liked Mesos as a resource allocation framework. It abstracted away the fleet of machines. Mesos abstracts many machines and controls programs across them. Marathon holds guarantees about what containers are running where. We could stitch it all together into this clear opinionated interface,” he said.
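Marathon is driven by a REST API: you post an app definition and it keeps that many container instances running somewhere on the Mesos cluster, restarting them when they die. Below is a minimal, hypothetical sketch of that interaction; the host, image name and resource numbers are illustrative, and the full schema lives in the Marathon docs.

```python
import requests

MARATHON_URL = "http://marathon.example.internal:8080"  # hypothetical Marathon endpoint

# A minimal app definition: ask Marathon to keep three copies of this Docker
# container running somewhere on the cluster.
app = {
    "id": "/search/hypothetical-service",  # illustrative service name
    "container": {
        "type": "DOCKER",
        "docker": {"image": "registry.example.internal/hypothetical-service:1.0"},
    },
    "cpus": 0.5,
    "mem": 512,
    "instances": 3,
}

resp = requests.post(f"{MARATHON_URL}/v2/apps", json=app, timeout=10)
resp.raise_for_status()
print("submitted", resp.json().get("id"))
```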

Pulling it all together

While all this was happening, Yelp began exploring how to move to the cloud and use a Platform as a Service approach to the software layer. The problem was that, at the time it started, there wasn’t really any viable way to do this. In the buy-versus-build decision-making that goes on in large transformations like this one, the company felt it had little choice but to build that platform layer itself.

In late 2013 they began to pull together the idea of building this platform on top of Mesos and Docker, giving it the name PaaSTA, an internal joke that stood for Platform as a Service, Totally Awesome. It became simply known as Pasta.


The project had the ambitious goal of making Yelp’s infrastructure work as a single fabric, in a cloud-native fashion, before almost anyone outside of Google was using that term. Pasta developed slowly, with the first developer piece coming online in August 2014 and the first production service later that year in December. The company open-sourced the technology the following year.

“Pasta gave us the interface between the applications and development teams. Operations had to make sure Pasta is up and running, while Development was responsible for implementing containers that implemented the interface,” Yellen said.

Moving deeper into the public cloud

While Yelp was busy building these internal systems, AWS wasn’t sitting still. It was also improving its offerings with new instance types, new functionality and better APIs and tooling. Yellen reports this helped immensely as Yelp began a more complete move to the cloud.

He says there were a couple of tipping points as they moved more and more of the application to AWS — including, eventually, the master database. This all happened in more recent years as they understood better how to use Pasta to control the processes wherever they lived. What’s more, he said that adoption of other AWS services became possible due to tighter integration between the in-house data centers and AWS.


The first tipping point came around 2016 as all new services were configured for the cloud. He said they began to get much better at managing applications and infrastructure in AWS and their thinking shifted from how to migrate to AWS to how to operate and manage it.

Perhaps the biggest step in this years-long transformation came last summer when Yelp moved its master database from its own data center to AWS. “This was the last thing we needed to move over. Otherwise it’s clean up. As of 2018, we are serving zero production traffic through physical data centers,” he said. While the company still has two data centers, it is getting to the point where it has only the minimum hardware required to run the network backbone.

Yellen said that before all this was in place, it took anywhere from two weeks to a month to get a service up and running; now it takes just a couple of minutes. He says any loss of control from moving to the cloud has been easily offset by the convenience of using cloud infrastructure. “We get to focus on the things where we add value,” he said — and that’s the goal of every company.

Vulcan Cyber raises $4M for its vulnerability remediation platform

Vulcan Cyber, a Tel Aviv-based security startup that helps enterprises quickly detect and fix vulnerabilities in their software stack and code, is coming out of stealth today and announcing a $4 million seed round led by YL Ventures with participation from a number of other cybersecurity investors.

The general idea behind Vulcan Cyber is that as businesses continue to increase the pace at which they build and adopt new software, the risk of introducing vulnerabilities only increases. But at the same time, most companies don’t have the tools in place to automatically detect and mitigate these issues, meaning that it can often take weeks before a patch rolls out.

The company argues that its position in the cybersecurity space is somewhat unique because it doesn’t just focus on detecting vulnerabilities but also helps businesses remediate them. All users have to do is give Vulcan access to the APIs of their existing vulnerability management, DevOps and IT tools, and the service will simply take over from there. It then watches over both the infrastructure and the code that runs on it.

“It might sound more glamorous to talk about zero-day and next-generation threats, but vulnerability remediation is truly where the rubber meets the road,” said Yaniv Bar-Dayan, Vulcan Cyber’s CEO and co-founder. “The only way to deal with this continuous risk exposure is through continuous remediation, achieved with robust data collection, advanced analytics, automation, and closed-loop remediation planning, orchestration and validation. This is exactly what we are delivering to IT security teams with Vulcan Cyber.”

Vulcan Cyber plays nicely with all of the major cloud platforms, as well as tools like Puppet, Chef and Ansible, plus GitHub and Bitbucket. It also integrates with a number of major security testing tools and vulnerability scanners, including Black Duck, Nessus, Fortify, Tripwire, Checkmarx, Rapid7 and Veracode.

OpenStack in transition

OpenStack is one of the most important and complex open-source projects you’ve never heard of. It’s a set of tools that allows large enterprises ranging from Comcast and PayPal to stock exchanges and telecom providers to run their own AWS-like cloud services inside their data centers. Only a few years ago, there was a lot of hype around OpenStack as the project went through the usual hype cycle. Now, we’re talking about a stable project that many of the most valuable companies on earth rely on. But this also means the ecosystem around it — and the foundation that shepherds it — is now trying to transition to this next phase.

The OpenStack project was founded by Rackspace and NASA in 2010. Two years later, the growing project moved into the OpenStack Foundation, a nonprofit group that set out to promote the project and help manage the community. When it was founded, OpenStack still had a few competitors, like CloudStack and Eucalyptus. OpenStack, thanks to the backing of major companies and its fast-growing community, quickly became the only game in town, though. With that, community events like the OpenStack Summit started to draw thousands of developers, and with each of its semi-annual releases, the number of contributors to the project has increased.

Now, that growth in contributors has slowed, as evidenced by the attendance at this week’s Summit in Vancouver.

In the early days, there were also plenty of startups in the ecosystem — and the VC money followed them, together with some of the most lavish conference parties (or “bullshit,” as Canonical founder Mark Shuttleworth called it) that I have experienced. The OpenStack market didn’t materialize quite as fast as many had hoped, though, so some of the early players went out of business, some shut down their OpenStack units and others sold to the remaining players. Today, only a few of the early players remain standing, and the top players are now the likes of Red Hat, Canonical and Rackspace.

And to complicate matters, all of this is happening in the shadow of the Cloud Native Computing Foundation (CNCF) and the Kubernetes project it manages being in the early stages of the hype cycle.

Meanwhile, the OpenStack Foundation itself is in the middle of its own transition as it looks to bring on other open-source infrastructure projects that are complementary to its overall mission of making open-source infrastructure easier to build and consume.

Unsurprisingly, all of this clouded the mood at the OpenStack Summit this week, but I’m actually not part of the doom and gloom contingent. In my view, what we are seeing here is a mature open-source project that has gone through its ups and downs and now, with all of the froth skimmed off, it’s a tool that provides a critical piece of infrastructure for businesses. Canonical’s Mark Shuttleworth, who created his own bit of drama during his keynote by directly attacking his competitors like Red Hat, told me that low attendance at the conference may not be a bad thing, for example, since the people who are actually in attendance are now just trying to figure out what OpenStack is all about and are all potential customers.

Others echoed a similar sentiment. “I think some of it goes with, to some extent, what’s been building over the last couple of Summits,” Bryan Thompson, Rackspace’s senior director and general manager for OpenStack, said as he summed up what I heard from a number of other vendors at the event. “That is: Is OpenStack dead? Is this going away? Or is everything just leapfrogging and going straight to Kubernetes on bare metal? And I don’t want to phrase it as ‘it’s a good thing,’ because I think it’s a challenge for the foundation and for the community. But I think it’s actually a positive thing because the core OpenStack services — the core projects — have just matured. We’re not in the early science experiment days of trying to push ahead and scale and grow the core projects; they were actually achieved and people are actually using it.”

That current state produces fewer flashy headlines, but every survey, both from the Foundation itself and from third-party analysts, shows that the number of users — and their OpenStack clouds — continues to grow. Meanwhile, the Foundation is looking to bring up attendance at its events, too, by adding container and CI/CD tracks, for example.

The company that maybe best exemplifies the ups and downs of OpenStack is Mirantis, a well-funded startup that has weathered the storm by reinventing itself multiple times. Mirantis started as one of the first OpenStack distributions and contributors to the project. During those early days, it raised one of the largest funding rounds in the OpenStack world with a $100 million Series B round, which was quickly followed by another $100 million round in 2015. But by early 2017, Mirantis had pivoted from being a distribution toward offering managed services for open-source platforms. It also made an early bet on Kubernetes and offered services for that, too. And then this year, it added yet another twist to its corporate story by refocusing its efforts on the Netflix-incubated Spinnaker open-source tool and helping companies build their CI/CD pipelines based on that. In the process, the company shrank from almost 1,000 employees to 450 today, but as Mirantis CEO and co-founder Boris Renski told me, it’s now cash-flow positive.

So just as the OpenStack Foundation is moving toward CI/CD with its Zuul tool, Mirantis is betting on Spinnaker, which solves some of the same issues, but with an emphasis on integrating multiple code repositories. Renski, it’s worth noting, actually advocated for bringing Spinnaker into the OpenStack Foundation (it’s currently managed on a more ad hoc basis by Netflix and Google).

“We need some governance, we need some process,” Renski said. “The [OpenStack] Foundation is known for actually being very good and effectively seeding this kind of formalized, automated and documented governance in open source and the two should work together much closer. I think that Spinnaker should become part of the Foundation. That’s the opportunity and I think it should focus 150 percent of their energy on that before it builds its own thing and before [Spinnaker] goes off to the CNCF as yet another project.”

So what does the Foundation think about all of this? In talking to OpenStack CTO Mark Collier and Executive Director Jonathan Bryce over the last few months, it’s clear that the Foundation knows that change is needed. That process started with opening up the Foundation to other projects, making it more akin to the Linux Foundation, where Linux remains in the name as its flagship project, but where a lot of the energy now comes from projects it helps manage, including the likes of the CNCF and Cloud Foundry. At the Sydney Summit last year, the team told me that part of the mission now is to retask the large OpenStack community to work on these new topics around open infrastructure. This week, that message became clearer.

“Our mission is all about making it easier for people to build and operate open infrastructure,” Bryce told me this week. “And open infrastructure is about operating functioning services based off of open source tools. So open source is not enough. And we’ve been, you know, I think, very, very oriented around a set of open source projects. But in the seven years since we launched, what we’ve seen is people have taken those projects, they’ve turned it into services that are running and then they piled a bunch of other stuff on top of it — and that becomes really difficult to maintain and manage over the long term.” So now, going forward, that part about maintaining these clouds is becoming increasingly important for the project.

“Open source is not enough,” is an interesting phrase here, because that’s really at the core of the issue at hand. “The best thing about open source is that there’s more of it than ever,” said Bryce. “And it’s also the worst thing. Because the way that most open source communities work is that it’s almost like having silos of developers inside of a company — and then not having them talk to each other, not having them test together, and then expecting to have a coherent, easy to use product come out at the end of the day.”

And Bryce also stressed that projects like OpenStack can’t be only about code. Moving to a cloud-native development model, whether that’s with Kubernetes on top of OpenStack or some other model, is about more than just changing how you release software. It’s also about culture.

“We realized that this was an aspect of the foundation that we were under-prioritizing,” said Bryce. “We focused a lot on the OpenStack projects and the upstream work and all those kinds of things. And we also built an operator community, but I think that thinking about it in broader terms led us to a realization that we had last year. It’s not just about OpenStack. The things that we have done to make OpenStack more usable apply broadly to these businesses [that use it], because there isn’t a single one that’s only running OpenStack. There’s not a single one of them.”

More and more, the other thing they run, besides their legacy VMware stacks, is containers, specifically containers managed with Kubernetes. And while the OpenStack community first saw containers as a bit of a threat, the Foundation is now looking at more ways to bring those communities together, too.

What about the flagging attendance at the OpenStack events? Bryce and Collier echoed what many of the vendors also noted. “In the past, we had something like 7,000 developers — something insane — but the bulk of the code comes down to about 200 or 300 developers,” said Bryce. Even the somewhat diminished commercial ecosystem doesn’t strike Bryce and Collier as too much of an issue, in part because the Foundation’s finances are closely tied to its membership. And while IBM dropped out as a project sponsor, Tencent took its place.

“There’s the ecosystem side in terms of who’s making a product and selling it to people,” Collier acknowledged. “But for whom is this so critical to their business results that they are going to invest in it. So there’s two sides to that, but in terms of who’s investing in OpenStack and the Foundation and making all the software better, I feel like we’re in a really good place.” He also noted that the Foundation is seeing lots of investment in China right now, so while other regions may be slowing down, others are picking up the slack.

So here is an open-source project in transition — one that has passed through the trough of disillusionment and hit the plateau of productivity, but that is now looking for its next mission. Bryce and Collier admit that they don’t have all the answers, but if there’s one thing that’s clear, it’s that both the OpenStack project and foundation are far from dead.

InVision design tool Studio gets an app store, asset store

InVision, the startup that wants to be the operating system for designers, today introduced its app store and asset store within InVision Studio. In short, InVision Studio users now have access to some of their most-used apps and services from right within the Studio design tool. Plus, those same users will be able to shop for icons, UX/UI components, typefaces and more from within Studio.

While Studio is still in its early days, InVision has compiled a solid list of initial app store partners, including Google, Salesforce, Slack, Getty, Atlassian, and more.

InVision first launched as a collaboration tool for designers, letting designers upload prototypes into the cloud so that other members of the organization could leave feedback before engineers set the design in stone. Since that launch in 2011, InVision has grown to 4 million users, capturing 80 percent of the Fortune 100 and raising a total of $235 million in funding.

While collaboration is the bread and butter of InVision’s business, and the only revenue stream for the company, CEO and founder Clark Valberg feels that it isn’t enough to be complementary to the current design tool ecosystem. That’s why InVision launched Studio in late 2017, hoping to take on Adobe and Sketch head-on with its own design tool.

Studio differentiates itself by focusing on the designer’s real-life workflow, which often involves mocking up designs in one app, pulling assets from another, working on animations and transitions in another, and then stitching the whole thing together to share for collaboration across InVision Cloud. Studio aims to bring all those various services into a single product, and a critical piece of that mission is building out an app store and asset store with the services too sticky for InVision to rebuild from scratch, such as Slack or Atlassian.

With the InVision app store, Studio users can search Getty from within their design and preview various Getty images without ever leaving the app. They can then share that design via Slack or send it off to engineers within Atlassian, or push it straight to UserTesting.com to get real-time feedback from real people.

InVision Studio launched with the ability to upload an organization’s design system (typefaces, icons, logos and hex codes) directly into Studio, ensuring that designers have easy access to all the assets they need. Now InVision is taking that a step further with the launch of the asset store, letting designers sell their own assets to the greater designer ecosystem.

“Our next big move is to truly become the operating system for product design,” said Valberg. “We want to be to designers what Atlassian is for engineers, what Salesforce is to sales. We’ve worked to become a full-stack company, and now that we’re managing that entire stack it has liberated us from being complementary products to our competitors. We are now a standalone product in that respect.”

Since its launch, Studio has grown to more than 250,000 users. The company says that Studio is still in Early Access, though it’s available to everyone.

Box expands Zones to manage content in multiple regions

When Box announced Zones a couple of years ago, it was providing a way for customers to store data outside the U.S., but there were some limits: each customer could choose the U.S. and one additional zone. Customers wanted more flexibility, and today the company announced it is allowing them to choose multiple zones.

The new feature gives a company the ability to store content across any of the seven zones (plus the U.S.) that Box currently supports around the world. A zone is essentially a Box co-location data center partner in a given location. The customer can now choose a default zone and then manage multiple zones from a single customer ID in the Box admin console, according to Jeetu Patel, chief product officer at Box.

Initially, customers wanted the choice to store data in a region outside the U.S., but over time they began asking not just to pick one additional zone, but to have access to multiple zones.

Current Box Zones. Photo: Box

Content will go to a defined default zone unless the admin creates rules specifying another location. In terms of data sovereignty, the file will always live in the country of record, even if an employee outside that country has access to it. From an end user’s perspective, they won’t know where the content lives, as long as the administrators allow access to it.
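To illustrate the routing rule described above, here is a tiny, purely hypothetical Python sketch of the logic. It is not Box’s API; it only shows the default-zone-unless-a-rule-matches behavior in code form.

```python
# Purely illustrative; not Box's API. Content lands in the default zone unless
# an admin-defined rule maps the upload somewhere else.

DEFAULT_ZONE = "us"

# Hypothetical admin rules keyed by the uploading employee's home country.
ZONE_RULES = {
    "DE": "germany",
    "JP": "tokyo",
    "AU": "sydney",
}

def zone_for_upload(uploader_country: str) -> str:
    """Pick the storage zone for a new file based on the admin's rules."""
    return ZONE_RULES.get(uploader_country, DEFAULT_ZONE)

print(zone_for_upload("DE"))  # -> germany
print(zone_for_upload("BR"))  # -> us (falls back to the default zone)
```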

This may not seem like a huge deal on its face, but from a content management standpoint, it presented some challenges. Patel says the company designed the product with this ability in mind from the start, but it took some development time to get there.

“When we launched Zones we knew we would [eventually require] multi-zone capability, and we had to make sure the architecture could handle that,” Patel explained. They did this by abstracting the architecture to separate the storage and business logic tiers. Creating this modular approach allowed them to increase the capabilities as they built out Zones.

It doesn’t hurt that this feature is being made available just days before the EU’s GDPR data privacy rules are going into effect. “Zones is not just for GDPR, but it does help customers meet their GDPR obligations,” Patel said.

Overall, Zones is part of Box’s strategy to provide content management services in the cloud and give customers, even regulated industries, the ability to control how that content is used. This expansion is one more step on that journey.