ScyllaDB takes on Amazon with new DynamoDB migration tool

There are a lot of open source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

It’s a bold move, but Scylla, which has a free open source product along with paid versions, has always had a penchant for going after bigger players. It has had a tool to help move Cassandra users to ScyllaDB for some time.

CEO Dor Laor says DynamoDB customers can now also migrate existing code with little modification. “If you’re using DynamoDB today, you will still be using the same drivers and the same client code. In fact, you don’t need to modify your client code one bit. You just need to redirect access to a different IP address running Scylla,” Laor told TechCrunch.
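For developers, that redirect is essentially a one-line change: keep the existing DynamoDB client code and point it at a Scylla cluster instead of AWS. Here is a minimal sketch using Python's boto3 library; the endpoint address, port and table name are placeholders for illustration, and the snippet assumes the table already exists on the Scylla side.

```python
import boto3

# Same boto3 client code as before; only the endpoint changes.
# The address and port below are placeholders for a node in a Scylla
# cluster exposing its DynamoDB-compatible API.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla-node.example.com:8000",
    region_name="us-east-1",            # still required by the SDK
    aws_access_key_id="placeholder",    # dummy credentials for the sketch
    aws_secret_access_key="placeholder",
)

table = dynamodb.Table("users")                      # assumes this table exists
table.put_item(Item={"user_id": "42", "name": "Ada"})
print(table.get_item(Key={"user_id": "42"})["Item"])
```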

He says the reason customers would want to switch to Scylla is that it uses the hardware more efficiently, making for a faster and cheaper experience. That means companies can run the same workloads on fewer machines, and do it faster, which ultimately should translate to lower costs.

The company also announced a $25 million Series C extension led by Eight Roads Ventures. Existing investors Bessemer Venture Partners, Magma Venture Partners, Qualcomm Ventures and TLV Partners also participated. Scylla has raised a total of $60 million, according to the company.

The startup has been around for six years, and its customers include Comcast, GE, IBM and Samsung. Laor says that Comcast went from running Cassandra on 400 machines to running the same workloads with Scylla on just 60.

Laor is playing the long game in the database market, and it’s not about taking on Cassandra, DynamoDB or any other individual product. “Our main goal is to be the default NoSQL database where if someone has big data, real-time workloads, they’ll think about us first, and we will become the default.”

APIs are the next big SaaS wave

While the software revolution started out slowly, it has exploded over the past few years, and the fastest-growing segment to date has been the shift toward software as a service, or SaaS.

SaaS has dramatically lowered the intrinsic total cost of ownership for adopting software, solved scaling challenges and taken away the burden of issues with local hardware. In short, it has allowed a business to focus primarily on just that — its business — while simultaneously reducing the burden of IT operations.

Today, SaaS adoption is increasingly ubiquitous. According to IDG’s 2018 Cloud Computing Survey, 73% of organizations have at least one application or a portion of their computing infrastructure already in the cloud. While this software explosion has created a whole range of downstream impacts, it has also caused software developers to become more and more valuable.

As developers have become more valuable, they, like traditional SaaS buyers before them, have come to better intuit the value of their time and increasingly prefer vendors that can alleviate the hassles of procurement, integration, management and operations. But developers' needs in addressing those hassles are specialized.

They are looking to deeply integrate products into their own applications and to do so, they need access to an Application Programming Interface, or API. Best practices for API onboarding include technical documentation, examples, and sandbox environments to test.

APIs also tend to offer metered billing upfront. For these and other reasons, APIs are a distinct subset of SaaS.

For fast-moving developers building at global scale, APIs are no longer a stopgap to the future; they're a critical part of their strategy. Why would you dedicate precious resources to recreating something in-house that's done better elsewhere when you can instead focus your efforts on creating a differentiated product?

Thanks to this mindset shift, APIs are on track to create another SaaS-sized impact across all industries and at a much faster pace. By exposing often complex services as simplified code, API-first products are far more extensible, easier for customers to integrate into, and have the ability to foster a greater community around potential use cases.

Graphics courtesy of Accel

Billion-dollar businesses building APIs

Whether you realize it or not, chances are that your favorite consumer and enterprise apps—Uber, Airbnb, PayPal, and countless more—have a number of third-party APIs and developer services running in the background. Just like most modern enterprises have invested in SaaS technologies for all the above reasons, many of today’s multi-billion dollar companies have built their businesses on the backs of these scalable developer services that let them abstract everything from SMS and email to payments, location-based data, search and more.

Simultaneously, the entrepreneurs behind these API-first companies like Twilio, Segment, Scale and many others are building sustainable, independent—and big—businesses.

Valued today at over $22 billion, Stripe is the biggest independent API-first company. Stripe took off because of its initial laser focus on the developer experience of setting up and taking payments. It was even initially known as /dev/payments!

Stripe spent extra time building the right, idiomatic SDKs for each language platform and beautiful documentation. But it wasn't just those things: Stripe rebuilt the entire business process around being API-first.
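As a rough illustration of the kind of developer experience being described, here is what a first sandbox charge might look like with Stripe's Python library. The test key is a placeholder, and the snippet is a sketch rather than something taken from Stripe's documentation.

```python
import stripe

# Placeholder sandbox key; Stripe issues test keys immediately on sign-up.
stripe.api_key = "sk_test_placeholder"

# Charge $20.00 using Stripe's built-in test card token.
charge = stripe.Charge.create(
    amount=2000,            # amount in cents
    currency="usd",
    source="tok_visa",      # test token, no real card needed
    description="First sandbox charge",
)
print(charge.status)  # "succeeded" in test mode
```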

Companies using Stripe didn’t need to fill out a PDF and set up a separate merchant account before getting started. Once sign-up was complete, users could immediately test the API with a sandbox and integrate it directly into their application. Even pricing was different.

Stripe chose to simplify pricing dramatically by starting with a single, simple price for all cards and not breaking out cards by type even though the costs for AmEx cards versus Visa can differ. Stripe also did away with a monthly minimum fee that competitors had.

Many competitors used the monthly minimum to offset the high cost of support for new customers who weren't necessarily processing payments yet. Stripe flipped that on its head. Developers now integrate Stripe earlier than they ever integrated payments before, and while that costs Stripe in setup and support, it pays off in brand and loyalty.

Checkr is another excellent example of an API-first company vastly simplifying a massive yet slow-moving industry. Very little had changed over the past few decades in how businesses ran background checks on their employees and contractors: the process still involved manual paperwork and third-party services that spent days verifying an individual.

Checkr’s API gives companies immediate access to a variety of disparate verification sources and allows these companies to plug Checkr into their existing onboarding and HR workflows. It’s used today by more than 10,000 businesses including Uber, Instacart, Zenefits and more.

Like Checkr and Stripe, Plaid provides a similar value prop to applications in need of banking data and connections, abstracting away banking relationships and complexities brought on by a lack of tech in a category dominated by hundred-year-old banks. Plaid has shown an incredible ramp these past three years, from closing a $12 million Series A in 2015 to reaching a valuation of over $2.5 billion this year.

Today the company is fueling an entire generation of financial applications, all on the back of its well-built API.

Graphics courtesy of Accel

Then and now

Accel’s first API investment was in Braintree, a mobile and web payments system for e-commerce companies, in 2011. Braintree eventually sold to, and became an integral part of, PayPal as it spun out from eBay and grew to be worth more than $100 billion. Unsurprisingly, it was shortly thereafter that our team decided it was time to go big on the category. By the end of 2014 we had led the Series As in Segment and Checkr and followed those investments with our first APX conference in 2015.

At the time, Plaid, Segment, Auth0 and Checkr had only raised seed or Series A financings! Today, we are even more excited and bullish on the space. To convey just how much API-first businesses have grown in such a short period of time, we thought it would be useful to share some metrics from the past five years, which we’ve broken out in the two visuals included in this article.

While SaaS may have pioneered the idea that the best way to do business isn’t to actually build everything in-house, today we’re seeing APIs amplify that theme. At Accel, we firmly believe that APIs are the next big SaaS wave, with as much impact as their predecessor, if not more, thanks to developers at today’s fastest-growing startups and their preference for API-first products. We’ve actively continued to invest in the space (in companies like Scale, mentioned above).

And much like how a robust ecosystem developed around SaaS, we believe that one will continue to develop around APIs. Given the amount of progress that has happened in just a few short years, Accel is hosting our second APX conference to once again bring together this remarkable community and continue to facilitate discussion and innovation.

Graphics courtesy of Accel

New open source project wants to expand serverless vision beyond functions

Serverless technology offers developers a way to develop without thinking about the infrastructure resources required to run a program, but up until now it has mostly been limited to function-driven programming. CloudState, a new open source project from Lightbend, wants to change that by moving beyond functions.

Lightbend CTO Jonas Bonér believes this ability to abstract away infrastructure could extend beyond functions and triggers into a broader developer experience. “I think people sometimes [don’t distinguish] between serverless and Function as a Service. I think that’s actually cutting the technology short. What serverless really brings to the table is this completely new developer experience and operations experience by trying to automate as much as possible,” Bonér told TechCrunch.

He says that when he talks to customers, they are hankering for a more complete serverless developer experience that includes all parts of the program. “A lot of people say that I have this excellent use case for the current incarnation of serverless and Function as a Service, but the rest of my application doesn’t really work running there,” he said. That’s exactly what CloudState is trying to address.

Bonér is careful to point out that he’s not looking to replace function-driven programming; he only wants to augment it. CloudState takes advantage of some existing technologies like Knative, the open source project that is trying to bring together serverless and containerization, as well as gRPC, Akka Cluster and GraalVM on Kubernetes.

He acknowledges that CloudState is still a work in progress, but he has the basic building blocks in place, and he’s hoping to use the power of open source to drive the development of this early-stage project. Today, it includes several key pieces — a specification outlining the goals of the project, a protocol to begin implementing it and a testing kit.

The goal here is to bring to fruition this broader vision of what serverless means, where developers can just write code without having to worry about the underlying infrastructure where the program will run. It’s a bold approach, but as Bonér says, it’s still early days, and it will take time and a community to really build this out.

How Pivotal got bailed out by fellow Dell family member, VMware

When Dell acquired EMC in 2016 for $67 billion, it created a complicated consortium of interconnected organizations. Some, like VMware and Pivotal, operate as completely separate companies. They have their own boards of directors, can acquire companies and are publicly traded on the stock market. Yet they work closely within Dell, partnering where it makes sense. So when Pivotal’s stock price plunged recently, VMware saved the day, buying the faltering company for $2.7 billion yesterday.

Pivotal went public last year, and sometimes struggled, but in June the wheels started to come off after a poor quarterly earnings report. The company had what MarketWatch aptly called “a train wreck of a quarter.”

How bad was it? So bad that its stock price was down 42% the day after it reported its earnings. While the quarter itself wasn’t so bad, with revenue up year over year, the guidance was another story. The company cut its 2020 revenue guidance by $40-$50 million and the guidance it gave for the upcoming 2Q 19 was also considerably lower than consensus Wall Street estimates.

The stock price plunged from a high of $21.44 on May 30th to a low of $8.30 on August 14th. The company’s market cap fell over the same period, from $5.828 billion to $2.257 billion. That’s when VMware admitted it was thinking about buying the struggling company.

IBM is moving OpenPower Foundation to The Linux Foundation

IBM makes the Power Series chips, and as part of that has open sourced some of the underlying technologies to encourage wider use of these chips. The open source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open sourcing several other important bits.

Ken King, general manager for OpenPower at IBM, says that at this point in his organization’s evolution, they wanted to move it under the auspices of the Linux Foundation. “We are taking the OpenPower Foundation, and we are putting it as an entity or project underneath The Linux Foundation with the mindset that we are now bringing more of an open governance approach and open governance principles to the foundation,” King told TechCrunch.

But IBM didn’t stop there. It also announced that it was open sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open sourcing the Power Instruction Set Architecture (ISA). These are “the definitions developers use for ensuring hardware and software work together on Power,” the company explained.

King sees open sourcing this technology as an important step for a number of reasons around licensing and governance. “The first thing is that we are taking the ability to be able to implement what we’re licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights,” he explained.

The company is also putting this under an open governance workgroup at the OpenPower Foundation. This matters to open source community members because it provides a layer of transparency that might otherwise be lacking. What that means in practice is that any changes will be subject to a majority vote, so long as the changes meet compatibility requirements, King said.

Jim Zemlin, executive director at the Linux Foundation, says that making all of this part of the Linux Foundation open source community could drive more innovation. “Instead of a very, very long cycle of building an application and working separately with hardware and chip designers, because all of this is open, you’re able to quickly build your application, prototype it with hardware folks, and then work with a service provider or a company like IBM to take it to market. So there’s not tons of layers in between the actual innovation and value captured by industry in that cycle,” Zemlin explained.

In addition, IBM made several other announcements around open sourcing other Power Chip technologies designed to help developers and engineers customize and control their implementations of Power chip technology. “IBM will also contribute multiple other technologies including a softcore implementation of the Power ISA, as well as reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximize memory bandwidth between processors and attached devices, critical to overcoming performance bottlenecks for emerging workloads like AI,” the company said in a statement.

The softcore implementation of the Power ISA, in particular, should give developers more control and even enable them to build their own instruction sets, Hugh Blemings, executive director of the OpenPower Foundation, explained. “They can now actually try crafting their own instruction sets, and try out new ways of the accelerated data processes and so forth at a lower level than previously possible,” he said.

The company is announcing all of this today at The Linux Foundation Open Source Summit and OpenPower Summit in San Diego.

With MapR fire sale, Hadoop’s promise has fallen on hard times

If you go back about a decade, Hadoop was hot and getting hotter. It was a platform for processing big data, just as big data was emerging from the domain of a few web-scale companies to one where every company was suddenly concerned about processing huge amounts of data. The future looked bright: an open source project with a bunch of startups emerging to fulfill that big data promise in the enterprise.

Three companies in particular emerged out of that early scrum — Cloudera, Hortonworks and MapR — and between them raised more than $1.5 billion. The lion’s share of that went to Cloudera in one massive chunk when Intel Capital invested a whopping $740 million in the company. But times have changed.

Via TechCrunch, Crunchbase, Infogram

Falling hard

Just yesterday, HPE bought the assets of MapR, a company that had raised $280 million. The deal was pegged at under $50 million, according to multiple reports. That’s not what you call a healthy return on investment.

Mesosphere changes name to D2IQ, shifts focus to Kubernetes, cloud native

Mesosphere was born as the commercial face of the open source Mesos project. It was surely a clever solution to make virtual machines run much more efficiently, but times change and companies change. Today the company announced it was changing its name to Day2IQ or D2IQ for short, and fixing its sights on Kubernetes and cloud native, which have grown quickly in the years since Mesos appeared on the scene.

D2IQ CEO Mike Fey says that the name reflects the company’s new approach. Instead of focusing entirely on the Mesos project, it wants to concentrate on helping more mature organizations adopt cloud native technologies.

“We felt like the Mesosphere name was somewhat constrictive. It made statements about the company that really allocated us to a given technology, instead of to our core mission, which is supporting successful Day Two operations, making cloud native a viable approach not just for the early adopters, but for everybody,” Fey explained.

Fey is careful to point out that the company will continue to support the Mesos-driven DC/OS solution, but the general focus of the company has shifted, and the new name is meant to illustrate that. “The Mesos product line is still doing well, and there are things that it does that nothing else can deliver on yet. So we’re not abandoning that totally, but we do see that Kubernetes is very powerful, and the community behind it is amazing, and we want to be a value added member of that community,” he said.

He adds that this is not about jumping on the cloud native bandwagon all of a sudden. He points out his company has had a Kubernetes product for more than a year running on top of DC/OS, and it has been a contributing member to the cloud native community.

It’s not just about a name change and a rebranding; the shift also involves several new cloud native products the company has built to serve the more mature organizations that inspired the new name.

For starters, it’s introducing its own flavor of Kubernetes called Konvoy, which it says, provides an “enterprise-grade Kubernetes experience.” The company will also provide a support and training layer, which it believes is a key missing piece, and one that is required by larger organizations looking to move to cloud native.

In addition, it is offering a data integration layer, which is designed to help integrate large amounts of data in a cloud-native fashion. To that end, it is introducing a beta of KUDO, an open source, cloud-native tool for building stateful operators in Kubernetes. The company has already donated this tool to the Cloud Native Computing Foundation, the open source organization that houses Kubernetes and other cloud native projects.

The company faces stiff competition in this space from some heavy hitters like the newly combined IBM and Red Hat, but it believes by adhering to a strong open source ethos, it can move beyond its Mesos roots to become a player in the cloud native space. Time will tell if it made a good bet.

We’re talking Kubernetes at TC Sessions: Enterprise with Google’s Aparna Sinha and VMware’s Craig McLuckie

Over the past five years, Kubernetes has grown from a project inside of Google to an open source powerhouse with an ecosystem of products and services, attracting billions of dollars in venture investment. In fact, we’ve already seen some successful exits, including one from one of our panelists.

On September 5th at TC Sessions: Enterprise, we’re going to be discussing the rise of Kubernetes with two industry veterans. For starters, we have Aparna Sinha, director of product management for Kubernetes and the newly announced Anthos product. Sinha was in charge of several early Kubernetes releases and has worked on the Kubernetes team at Google since 2016. Prior to joining Google, she had 15 years of experience in enterprise software settings.

Craig McLuckie will also be joining the conversation. He’s one of the original developers of Kubernetes at Google. He went on to found his own Kubernetes startup, Heptio, with Joe Beda, another Google Kubernetes alum. They sold the company to VMware last year for $505 million after raising $33.5 million, according to Crunchbase data.

The two bring a vast reservoir of knowledge and will be discussing the history of Kubernetes, why Google decided to open source it and how it came to grow so quickly. Two other Kubernetes luminaries will be joining them. We’ll have more about them in another post soon.

Kubernetes is a container orchestration engine. Instead of building large monolithic applications that sit on virtual machines, developers break an application into containers, each running a small part of it. As the components get smaller, an orchestration layer is needed to deliver the containers when required and make them go away when they are no longer needed. Kubernetes acts as the orchestra leader.
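As a rough illustration of what that orchestration looks like in practice, the sketch below uses the official Kubernetes Python client to ask for three replicas of a small containerized web server; Kubernetes then keeps that many copies running, replacing any that die. The names and image are placeholders, and the snippet assumes a working kubeconfig on the local machine.

```python
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig
apps = client.AppsV1Api()

# Describe the desired state: three copies of one small container.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.17",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)

# Kubernetes takes it from here: scheduling, restarting, scaling.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```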

As Kubernetes and containerization, and the cloud-native ethos they encompass, have grown, they have helped drive the enterprise shift to the cloud in general. If you can write your code once and run it in the cloud or on-prem, you don’t have to manage applications using different tool sets, and that has had broad appeal for enterprises making the shift to the cloud.

TC Sessions: Enterprise (September 5 at San Francisco’s Yerba Buena Center) will take on the big challenges and promise facing enterprise companies today. TechCrunch’s editors will bring to the stage founders and leaders from established and emerging companies to address rising questions, like the promised revolution from machine learning and AI, intelligent marketing automation and the inevitability of the cloud, as well as the outer reaches of technology, like quantum computing and blockchain.

Tickets are now available for purchase on our website at the early-bird rate of $395. Student tickets are just $245; grab them here.

We have a limited number of Startup Demo Packages available for $2,000, which includes four tickets to attend the event.

For each ticket purchased for TC Sessions: Enterprise, you will also be registered for a complimentary Expo Only pass to TechCrunch Disrupt SF on October 2-4.

Logz.io lands $52M to keep growing open source-based logging tools

Logz.io announced a $52 million Series D investment today. The round was led by General Catalyst.

Other investors participating in the round included OpenView Ventures, 83North, Giza Venture Capital, Vintage Investment Partners, Greenspring Associates and Next47. Today’s investment brings the total raised to nearly $100 million, according to Crunchbase data.

Logz.io is a company built on top of the open source tools Elasticsearch, Logstash and Kibana (collectively known by the acronym ELK), as well as Grafana. In a typical open source business approach, it packages those tools up and offers them as a service, enabling large organizations to take advantage of them without having to deal with the raw open source projects.

The company’s software intelligently scans logs looking for anomalies. When it finds one, it surfaces the problem and informs IT or security, depending on the scenario, using a tool like PagerDuty. This area of the market has been dominated in recent years by vendors like Splunk and Sumo Logic, but company founder and CEO Tomer Levy saw a chance to disrupt the space by packaging a set of open source logging tools that were rapidly increasing in popularity. The founders believed they could build on that growing popularity while solving a pain point they had experienced themselves in previous positions, which is always a good starting point for a startup.
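As a simplified sketch of what that kind of log scanning involves under the hood, the snippet below queries Elasticsearch (one of the ELK components) for a spike in error-level log lines. The endpoint, index name, field names and threshold are assumptions for illustration, not Logz.io’s actual detection logic.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder ELK endpoint

# Count error-level log lines from the last five minutes.
resp = es.search(
    index="app-logs-*",
    body={
        "query": {
            "bool": {
                "must": [
                    {"match": {"level": "ERROR"}},
                    {"range": {"@timestamp": {"gte": "now-5m"}}},
                ]
            }
        }
    },
)

errors = resp["hits"]["total"]["value"]
if errors > 100:  # fixed threshold here; a real system would learn a baseline
    print(f"Possible anomaly: {errors} errors in the last five minutes")
```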

Screenshot: Logz.io

“We saw that the majority of the market is actually using open source. So we said, we want to solve this problem, a problem we have faced in the past and didn’t have a solution. What we’re going to do is we’re going to provide you with an easy-to-use cloud service that is offering an open source compatible solution,” Levy explained. In other words, they wanted to build on that open source idea, but offer it in a form that was easier to consume.

Larry Bohn, who is leading the investment for General Catalyst, says that his firm liked the idea of a company building on top of open source because it provides a built-in community of developers to drive the startup’s growth — and it appears to be working. “The numbers here were staggering in terms of how quickly people were adopting this and how quickly it was growing. It was very clear to us that the company was enjoying great success without much of a commercial orientation,” Bohn explained.

In fact, Logz.io already has 700 customers, including large names like Schneider Electric, The Economist and British Airways. The company has 175 employees today, but Levy says they expect to grow that to 250 by the end of this year as they use this money to accelerate their overall growth.

The challenges of truly embracing cloud native

There is a tendency at any conference to get lost in the message. Spending several days immersed in any subject tends to do that. The purpose of such gatherings is, after all, to sell the company or technologies being featured.

Against the beautiful backdrop of the city of Barcelona last week, we got the full cloud native message at KubeCon and CloudNativeCon. The Cloud Native Computing Foundation (CNCF), which houses Kubernetes and related cloud native projects, had certainly honed the message along with the community who came to celebrate its five-year anniversary. The large crowds that wandered the long hallways of the Fira Gran Via conference center proved it was getting through, at least to a specific group.

Cloud native computing involves a combination of software containerization along with Kubernetes and a growing set of adjacent technologies to manage and understand those containers. It also involves the idea of breaking down applications into discrete parts known as microservices, which in turn leads to a continuous delivery model, where developers can create and deliver software more quickly and efficiently. At the center of all this is the notion of writing code once and being able to deliver it on any public cloud, or even on-prem. These approaches were front and center last week.

Five years in, many developers have embraced these concepts, but cloud native projects have reached a size and scale where they need to move beyond the early adopters and true believers and make their way deep into the enterprise. It turns out that it might be a bit harder for larger companies with hardened systems to make wholesale changes in the way they develop applications, just as it is difficult for large organizations to take on any type of substantive change.

Putting up stop signs