Wasabi announces $30M in debt financing as cloud storage business continues to grow

We may be in the thick of a pandemic with all of the economic fallout that comes from that, but certain aspects of technology don’t change no matter the external factors. Storage is one of them. In fact, we are generating more digital stuff than ever, and Wasabi, a Boston-based startup that has figured out a way to drive down the cost of cloud storage, is benefiting from that.

Today it announced a $30 million debt financing round led by Forestay Capital, the technology innovation arm of Waypoint Capital, with help from previous investors. As with the previous round, Wasabi is going with home office investors rather than traditional venture capital firms. Today’s round brings the total raised to $110 million, according to the company.

Founder and CEO David Friend says the company needs the funds to keep up with its rapid growth. “We’ve got about 15,000 customers today, hundreds of petabytes of storage, 2,500 channel partners, 250 technology partners — so we’ve been busy,” he said.

He says that revenue continues to grow in spite of the impact of COVID-19 on other parts of the economy. “Revenue grew 5x last year. It’ll probably grow 3.5x this year. We haven’t seen any real slowdown from the coronavirus. Quarter over quarter growth will be in excess of 40% — this quarter over Q1 — so it’s just continuing on a torrid pace,” he said.

He said the money will be used mostly to keep building out the company’s infrastructure. The more data Wasabi stores, the more data centers it needs, and that takes money. He is going the debt route because his product is backed by a tangible asset: the infrastructure used to store all the data in the Wasabi system. And debt financing is considerably cheaper to pay back than equity.
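Friend’s reasoning is easy to sanity-check with rough numbers. Here is a back-of-the-envelope sketch in Python; the interest rate, valuation, and exit multiple below are illustrative assumptions, not Wasabi’s actual terms:

```python
# Back-of-the-envelope comparison of raising $30M as debt vs. equity.
# Every figure here is an assumption for illustration, not Wasabi's terms.

raised = 30_000_000
years = 3

# Debt: assume 8% annual interest, interest-only, principal repaid at maturity.
interest_rate = 0.08
debt_cost = raised * interest_rate * years  # total interest paid

# Equity: assume a $300M post-money valuation and a later exit at 5x that value.
post_money = 300_000_000
exit_multiple = 5
stake_sold = raised / post_money                       # 10% of the company
equity_cost = stake_sold * post_money * exit_multiple  # what that stake is worth at exit

print(f"Debt cost over {years} years: ${debt_cost:,.0f}")   # $7,200,000
print(f"Equity stake value at exit:  ${equity_cost:,.0f}")  # $150,000,000
```

Under these assumptions, the equity would ultimately cost the founders more than twenty times what the debt does — which is the appeal when the borrowing is secured by physical data center equipment.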

“Our biggest need is to build more infrastructure, because we are constantly buying equipment. We have to pay for it even before it fills up with customer data, so we’re raising another debt round now,” Friend said. He added, “Part of what we’re doing is just strengthening our balance sheet to give us access to more inexpensive debt to finance the building of the infrastructure.”

The challenge for a company like Wasabi, which is looking to capture a large chunk of the growing cloud storage market, is the infrastructure piece. It needs to keep building more to meet increasing demand, while keeping costs down, which remains its primary value proposition with customers.

The money will help the company expand into new markets, as many countries have data sovereignty laws that require data to be stored in-country. That requires more money, and that’s the thinking behind this round.

The company launched in 2015. It previously raised $68 million in 2018.

Rallyhood exposed a decade of users’ private data

Rallyhood says it’s “private and secure.” But for some time, it wasn’t.

The social network designed to help groups communicate and coordinate left one of its cloud storage buckets containing user data open and exposed. The bucket, hosted on Amazon Web Services (AWS), was not protected with a password, allowing anyone who knew the easily-guessable web address access to a decade’s worth of user files.
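For context, an open S3 bucket is trivially readable: anyone who knows (or guesses) its name can list and download its contents with no credentials at all. Here is a minimal sketch of the kind of anonymous check a researcher might run, using boto3 with unsigned requests (the bucket name is a placeholder, not Rallyhood’s actual bucket):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Make unauthenticated requests, exactly as an anonymous visitor would.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# "example-target-bucket" is a hypothetical name for illustration.
resp = s3.list_objects_v2(Bucket="example-target-bucket", MaxKeys=5)
for obj in resp.get("Contents", []):
    # A publicly listable bucket will happily enumerate its files here;
    # a locked-down one raises an AccessDenied error instead.
    print(obj["Key"], obj["Size"])
```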

Rallyhood boasts users from Girl Scout and Boy Scout troops, as well as Komen, Habitat for Humanity, and YMCA chapters. The company also hosts thousands of smaller groups, like local bands, sports teams, art clubs, and organizing committees. Many flocked to the site after Rallyhood said it would help migrate users from Yahoo Groups, after Verizon (which also owns TechCrunch) said it would shut down the discussion forum site last year.

The bucket contained group data dating as far back as 2011 and up to and including last month. In total, the bucket held 4.1 terabytes of uploaded data, representing millions of users’ files.

Some of the files we reviewed contained sensitive data, like shared password lists, contracts, permission slips, and agreements. The documents also included non-disclosure agreements and other files that were not intended to be public.

Where we could identify contact information of users whose information was exposed, TechCrunch reached out to verify the authenticity of the data.

A security researcher who goes by the handle Timeless found the exposed bucket and informed TechCrunch, so that the bucket and its files could be secured.

When reached, Rallyhood chief technology officer Chris Alderson initially claimed that the bucket was for “testing” and that all user data was stored “in a highly secured bucket,” but later admitted that during a migration project, “there was a brief period when permissions were mistakenly left open.”
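For readers wondering what the fix for “permissions mistakenly left open” looks like, AWS exposes a bucket-level public access block. Here is a hypothetical sketch using boto3 (the bucket name is a placeholder; Rallyhood’s actual remediation steps aren’t public):

```python
import boto3

s3 = boto3.client("s3")  # assumes credentials for the bucket owner's account

# Deny every form of public access, overriding any permissive ACLs or policies.
s3.put_public_access_block(
    Bucket="example-app-bucket",  # placeholder, not the real bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```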

It’s not known if Rallyhood plans to warn its users and customers of the security lapse. At the time of writing, Rallyhood has made no statement about the incident on its website or any of its social media profiles.

How Spotify ran the largest Google Dataflow job ever for Wrapped 2019

In early December, Spotify launched its annual personalized Wrapped playlist with its users’ most-streamed sounds of 2019. That has become a bit of a tradition and isn’t necessarily anything new, but for 2019, it also gave users a look back at how they used Spotify over the last decade. Because this was quite a large job, Spotify gave us a bit of a look under the covers of how it generated these lists for its ever-growing number of free and paid subscribers.

It’s no secret that Spotify is a big Google Cloud Platform user. Back in 2016, the music streaming service publicly said that it was going to move to Google Cloud, after all, and in 2018, it disclosed that it would spend at least $450 million on its Google Cloud infrastructure in the following three years.

It was also back in 2018, for that year’s Wrapped, that Spotify ran the largest Google Cloud Dataflow job ever run on the platform, a service the company started experimenting with a few years earlier. “Back in 2015, we built and open-sourced a big data processing Scala API for Apache Beam and Google Cloud Dataflow called Scio,” Spotify’s VP of Engineering Tyson Singer told me. “We chose Dataflow over Dataproc because it scales with less operational overhead and Dataflow fit with our expected needs for streaming processing. Now we have a great open-source toolset designed and optimized for Dataflow, which in addition to being used by most internal teams, is also used outside of Spotify.”
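Scio itself is a Scala API, but the Beam programming model it wraps is the same in every language. To give a flavor of the per-user aggregation a Wrapped-style job performs, here is a toy pipeline using Beam’s Python SDK; the events and transforms are illustrative stand-ins, not Spotify’s actual code:

```python
import apache_beam as beam

# Toy (user, track) play events standing in for a year of streaming history.
events = [
    ("alice", "track_a"), ("alice", "track_a"),
    ("alice", "track_b"), ("bob", "track_c"),
]

# Runs on the local DirectRunner by default; the same pipeline can be
# submitted to Google Cloud Dataflow by switching the runner option.
with beam.Pipeline() as p:
    (
        p
        | "ReadEvents" >> beam.Create(events)
        | "PairWithOne" >> beam.Map(lambda e: ((e[0], e[1]), 1))
        | "CountPlays" >> beam.CombinePerKey(sum)
        | "KeyByUser" >> beam.Map(lambda kv: (kv[0][0], (kv[0][1], kv[1])))
        | "TopTrackPerUser" >> beam.combiners.Top.PerKey(1, key=lambda tc: tc[1])
        | "Print" >> beam.Map(print)  # e.g. ('alice', [('track_a', 2)])
    )
```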

For Wrapped 2019, which includes the annual and decadal lists, Spotify ran a job that was five times larger than in 2018 — but it did so at three-quarters of the cost. Singer attributes this to his team’s familiarity with the platform. “With this type of global scale, complexity is a natural consequence. By working closely with Google Cloud’s engineering teams and specialists and drawing learnings from previous years, we were able to run one of the most sophisticated Dataflow jobs ever written.”

Still, even with this expertise, the team couldn’t just iterate on the full data set as it figured out how to best analyze the data and use it to tell the most interesting stories to its users. “Our jobs to process this would be large and complex; we needed to decouple the complexity and processing in order to not overwhelm Google Cloud Dataflow,” Singer said. “This meant that we had to get more creative when it came to going from idea, to data analysis, to producing unique stories per user, and we would have to scale this in time and at or below cost. If we weren’t careful, we risked being wasteful with resources and slowing down downstream teams.”

To handle this workload, Spotify not only split its internal teams into three groups (data processing, client-facing and design, and backend systems), but also split the data processing jobs into smaller pieces. That marked a very different approach for the team. “Last year Spotify had one huge job that used a specific feature within Dataflow called ‘Shuffle.’ The idea here was that having a lot of data, we needed to sort through it, in order to understand who did what. While this is quite powerful, it can be costly if you have large amounts of data.”

This year, the company’s engineers minimized the use of Shuffle by using Google Cloud’s Bigtable as an intermediate storage layer. “Bigtable was used as a remediation tool between Dataflow jobs in order for them to process and store more data in a parallel way, rather than the need to always regroup the data,” said Singer. “By breaking down our Dataflow jobs into smaller components — and reusing core functionality — we were able to speed up our jobs and make them more resilient.”
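To make the intermediate-storage idea concrete, here is a hypothetical sketch of one job persisting a per-user aggregate to Bigtable so a downstream job can read it by key instead of re-shuffling raw events. It uses the google-cloud-bigtable client; the project, instance, table, and column-family names are invented for illustration and assumed to already exist:

```python
from google.cloud import bigtable

# All identifiers below are placeholders, not Spotify's actual resources.
client = bigtable.Client(project="example-project")
table = client.instance("wrapped-instance").table("user_aggregates")

# An upstream job writes one user's aggregate under a row key...
row = table.direct_row(b"user#alice")
row.set_cell("stats", "top_track", b"track_a")
row.set_cell("stats", "play_count", b"2")
row.commit()

# ...and a downstream job fetches it directly by key -- no shuffle required.
fetched = table.read_row(b"user#alice")
print(fetched.cells["stats"][b"top_track"][0].value)  # b'track_a'
```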

Singer attributes at least a part of the cost savings to this technique of using Bigtable, but he also noted that the team decomposed the problem into data collection, aggregation and data transformation jobs, which it then split into multiple separate jobs. “This way, we were not only able to process more data in parallel, but be more selective about which jobs to rerun, keeping our costs down.”

Many of the techniques the engineers on Singer’s teams developed are currently in use across Spotify. “The great thing about how Wrapped works is that we are able to build out more tools to understand a user, while building a great product for them,” he said. “Our specialized techniques and expertise of Scio, Dataflow and big data processing, in general, is widely used to power Spotify’s portfolio of products.”

What Nutanix got right (and wrong) in its IPO roadshow

Back in 2016, Nutanix decided to take the big step of going public. Part of that process was creating a pitch deck and presenting it during its roadshow, a coming-out party when a company goes on tour prior to its IPO and pitches itself to investors of all stripes.

It’s a huge moment in the life of any company, and one we came to understand better after talking to CEO Dheeraj Pandey and CFO Duston Williams. They spoke about how every detail helped define their company and demonstrate its long-term investment value to investors who might not have been entirely familiar with the startup or its technology.

Pandey and Williams reported going through more than 100 versions of the deck before they finished the one they took on the road. Pandey said they had a data room checking every fact, every number — which they then checked yet again.

In a separate Extra Crunch post, we looked at the process of building that deck. Today, we’re looking more closely at the content of the deck itself, especially the numbers Nutanix presented to the world. We want to see what investors did more than three years ago and what’s happened since — did the company live up to its promises?

Plan of attack

An adult sexting site exposed thousands of models’ passports and driver’s licenses

A popular sexting website has exposed thousands of photo IDs belonging to models and sex workers who earn commissions from the site.

SextPanther, an Arizona-based adult site, stored over 11,000 identity documents on an exposed Amazon Web Services (AWS) storage bucket, including passports, driver’s licenses, and Social Security numbers, without a password. The company says on its website that it uses the documents to verify the ages of the models with whom users communicate.

Most of the exposed identity documents contain personal information, such as names, home addresses, dates of birth, biometrics, and photos.

Although most of the data came from models in the U.S., some of the documents were supplied by workers in Canada, India, and the United Kingdom.

The site allows models and sex workers to earn money by exchanging text messages, photos, and videos with paying users, including explicit and nude content. The exposed storage bucket also contained over a hundred thousand photos and videos sent and received by the workers.

It was not immediately clear who owned the storage bucket. TechCrunch asked U.K.-based penetration testing company Fidus Information Security, which has experience in discovering and identifying exposed data, to help.

Researchers at Fidus quickly found evidence suggesting the exposed data could belong to SextPanther.

An hour after we alerted the site’s owner, Alexander Guizzetti, to the exposed data, the storage bucket was pulled offline.

“We have passed this on to our security and legal teams to investigate further. We take accusations like this very seriously,” Guizzetti, who did not explicitly confirm that the bucket belonged to his company, said in an email.

Using information from identity documents matched against public records, we contacted several models whose information was exposed by the security lapse.

“I’m sure I sent it to them,” said one model, referring to her driver’s license, which was exposed. (We agreed to withhold her name given the sensitivity of the data.) We passed along a photo of her license as it was found in the exposed bucket. She confirmed it was her license, but said the information on it is no longer current.

“I truly feel awful for others whom have signed up with their legit information,” she said.

The security lapse comes a week after researchers found a similar cache of highly sensitive personal information of sex workers on the adult webcam streaming site PussyCash.

More than 850,000 documents were insecurely stored in another unprotected storage bucket.


Brand power vs. product power

Most tech companies — particularly B2B companies — either don’t understand the power of a brand, or do a really poor job of creating one.

An informal survey of a dozen of my young CEO friends showed that, given the choice, 10 out of 12 — 83% — would rather spend an extra dollar on product development than brand building. It is dangerous (or at least foolish) to assume that the ROI on product development is greater than the ROI on brand building.

As a serial entrepreneur and CEO, I have had to make this choice many times. In 2006, I co-founded PC backup company Carbonite. I left the company five years ago after taking it public and I no longer have any financial interest in it, which is why I can write about it now — it was just sold for $1.4 billion to OpenText. There were many other backup products on the market at that time and many more appeared over the first five years of the company’s life. I would argue that Carbonite was slicker than most of the others, but essentially every backup product accomplishes the same result.

Unlike Carbonite’s competitors, we focused on our brand. That meant raising more money than we would have if we were just investing in R&D. But, after five years of investing in our brand, we had eleven times the brand recognition of any other consumer backup company and we dominated the market.

Here’s why: a study by Kettlefire Creative showed that 59% of people prefer to buy brands that they have heard of. Since none of our competitors had widely recognized brands, we got most of that 59%. Of the remaining 41%, we fought it out on other criteria and won most of that as well. Put yourself in the shoes of a potential customer looking to back up their PC. What do you worry about? Well, before we even launched the company, we asked PC owners to choose the five most important attributes of their ideal backup company from a list of ten possible attributes, and we found the following:

1. Trustworthy: you won’t look at my files or allow anyone to see them (1127 votes)

2. Peace of mind: when I go to retrieve my backup, it will always be there (811 votes)

3. Reliable: it backs up everything and doesn’t stop (696 votes)

4. Helpful: if I lose my computer, I want to talk to a human who can help me (446 votes)

5. Easy: it should be simple and require little attention (444 votes)

The attributes that didn’t make the top five:

6. Fast: backups happen quickly

Why is Dropbox reinventing itself?

According to Dropbox CEO Drew Houston, 80% of the product’s users rely on it, at least partially, for work.

It makes sense, then, that the company is refocusing to try and cement its spot in the workplace; to shed its image as “just” a file storage company (in a time when just about every big company has its own cloud storage offering) and evolve into something more immutably core to daily operations.

Earlier this week, Dropbox announced that the “new Dropbox” would be rolling out to all users. It takes the simple, shared folders that Dropbox is known for and turns them into what the company calls “Spaces” — little mini collaboration hubs for your team, complete with comment streams, AI for highlighting files you might need mid-meeting, and integrations into things like Slack, Trello and G Suite. With an overhauled interface that brings much of Dropbox’s functionality out of the OS and into its own dedicated app, it’s by far the biggest user-facing change the product has seen since launching 12 years ago.

Shortly after the announcement, I sat down with Dropbox VP of Product Adam Nash and CTO Quentin Clark. We chatted about why the company is changing things up, why they’re building this on top of the existing Dropbox product, and the things they know they just can’t change.

You can find these interviews below, edited for brevity and clarity.

Greg Kumparak: Can you explain the new focus a bit?

Adam Nash: Sure! I think you know this already, but I run products and growth, so I’m gonna have a bit of a product bias to this whole thing. But Dropbox… one of its differentiating characteristics is really that when we built this utility, this “magic folder”, it kind of went everywhere.

Enterprise software is hot — who would have thought?

Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.

It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing application architectures and tools for at-scale development and management for enterprises to make the same transformation.

“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”

The shift to scale-out computing and the rise of the container ecosystem, driven largely by startups, are disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.

In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.

We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.

Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.

“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”

Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.

The secrets to speed, agility and customer focus

Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the process that allows for the ability to put software into production at any time, with the workflows and the pipeline to support it.

Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.

“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.

Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.

“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”

DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).

“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”

Leveraging open-source components is also core in achieving speed for engineering. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.

“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”

You need great engineering talent, and you need to give those engineers both the ability to build secure and reliable systems at scale and the trust that comes with direct access to hardware as a differentiator.

Is the enterprise really ready?

The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.

“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”

How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.

One notable trend to watch: the role of cloud services through projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally built at Amazon Web Services. Firecracker serves as a way to get the speed and density that come with containers, along with the hardware isolation and security capabilities that virtualization offers. Startups such as Weaveworks have developed a platform on Firecracker. OpenStack’s Kata Containers also use Firecracker.
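To make that concrete, Firecracker is controlled through a small REST API served over a Unix socket. The sketch below drives that documented API from Python to boot a microVM; the socket path, kernel, and rootfs file names are placeholders, and error handling is omitted:

```python
import json
import socket

SOCK = "/tmp/firecracker.socket"  # assumed path passed to `firecracker --api-sock`

def api_call(method: str, path: str, body: dict) -> str:
    """Send one HTTP request to Firecracker's API over its Unix socket."""
    payload = json.dumps(body)
    request = (
        f"{method} {path} HTTP/1.1\r\n"
        "Host: localhost\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "Connection: close\r\n\r\n"
        f"{payload}"
    )
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(request.encode())
        return s.recv(4096).decode()  # good enough for a sketch

# Point the microVM at a kernel and a root filesystem, then start it.
api_call("PUT", "/boot-source", {"kernel_image_path": "vmlinux",
                                 "boot_args": "console=ttyS0 reboot=k panic=1"})
api_call("PUT", "/drives/rootfs", {"drive_id": "rootfs",
                                   "path_on_host": "rootfs.ext4",
                                   "is_root_device": True,
                                   "is_read_only": False})
api_call("PUT", "/actions", {"action_type": "InstanceStart"})
```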

“Firecracker makes it easier for the enterprise to have secure code,” Ford said. It reduces the attack surface. “With its minimal footprint, the user has control. It means less features that are misconfigured, which is a major security vulnerability.”

Enterprise startups are hot. How they succeed will determine how well they can stay distinctive in the face of ever-expanding cloud services and the at-scale startups that inevitably launch their own services. The answer may lie in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers alongside the hardware isolation that comes with virtualization.

Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”