After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As Salesforce enters its 20th year, there’s an interesting opportunity to reflect on the change Marc Benioff created for enterprise software when he launched Salesforce.com and, with it, the software-as-a-service (SaaS) model.

This model has been validated by the annual revenue of SaaS companies, which by most estimates is fast approaching $100 billion, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is evident even in the most recent high-profile “SaaS” acquisition, Microsoft’s purchase of GitHub, with over 50 percent of GitHub’s revenue coming from sales of its on-prem offering, GitHub Enterprise.

Data privacy and security are also becoming major issues, with Benioff himself pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and protracted.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in the expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a terabyte of disk storage cost about $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 to run Siebel’s CRM product for perhaps 200 end-users (a back-of-the-envelope comparison follows this list).
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application it purchased.
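
Taken together, the first and third reasons amount to an economics argument, and a back-of-the-envelope comparison makes it concrete. In the sketch below, only the $385,000 hardware figure and the 20 percent support fee come from Benioff’s account; the license and subscription prices are hypothetical assumptions chosen purely for illustration:

```python
# Rough per-user economics of traditional licensing vs. SaaS subscription.
# Only the hardware cost and the 20% support fee come from Benioff's figures;
# the license and subscription prices are hypothetical placeholders.

hardware_cost = 385_000        # Benioff's typical enterprise hardware purchase
license_cost = 1_000_000       # hypothetical upfront license ("millions" in the text)
support_rate = 0.20            # 20% of the license, paid every year
users = 200                    # end-users served by that deployment
years = 3

traditional_total = hardware_cost + license_cost + license_cost * support_rate * years
per_user_traditional = traditional_total / users

subscription_per_month = 65    # hypothetical SaaS per-seat price
per_user_saas = subscription_per_month * 12 * years

print(f"Traditional: ${per_user_traditional:,.0f} per user over {years} years")
print(f"SaaS:        ${per_user_saas:,.0f} per user over {years} years")
```

Even under these conservative assumptions, the upfront model front-loads thousands of dollars per seat before a single user logs in, which is exactly the procurement friction the subscription model removed.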

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ‘00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been far more SaaS applications than would ever have been possible if everybody had to follow the model Salesforce pioneered several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook packaged software for SaaS in the name of economics, simplicity and much faster user growth.


It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite successfully made the switch years ago from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or through thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has carried the habits of our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and the software-development practices honed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The costs of compute and storage have been driven down so dramatically that there are limited cost savings left in shared resources. Today, a gigabyte of RAM costs about $5 and a terabyte of disk storage about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, when Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes (a minimal sketch follows below).
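
To make that last point concrete, here is a minimal sketch of what “installed pretty much anywhere” can look like in practice, using the official Kubernetes Python client. This is an illustrative sketch rather than any vendor’s actual deployment: the application name, container image and replica count are all hypothetical placeholders.

```python
# A minimal sketch of "installed anywhere, live in minutes": creating a
# Kubernetes Deployment for a containerized app via the official Python client.
# The app name, image, and replica count are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # targets whichever cluster kubectl is pointed at

labels = {"app": "enterprise-crm"}  # hypothetical application name

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="enterprise-crm"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="crm",
                    image="example.com/enterprise-crm:1.0",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# The same call works whether the cluster is on-prem, in a VPC, or fully managed.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same few lines run unchanged whether the target cluster lives in a public cloud, a virtual private cloud or an on-premises data center, which is precisely the portability discussed below.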


What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and self-management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data (a minimal example follows this list). It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings, while SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that will provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
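
To illustrate the first point above, here is a minimal sketch of carving a “private” slice out of a public cloud using AWS’s boto3 SDK. The region, CIDR blocks and tag names are illustrative assumptions, not a prescription:

```python
# A minimal sketch of a "private cloud" slice on AWS: the provider manages
# the physical machines, while the enterprise controls the network layout.
# The region, CIDR ranges and names below are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a logically isolated network (the VPC itself).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "private-app-vpc"}])

# Add a subnet where self-hosted application workloads can run.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print(f"Created {vpc_id} with subnet {subnet['Subnet']['SubnetId']}")
```

From here, an enterprise keeps root on every instance it launches into that subnet while leaving the racking, powering and maintenance of the underlying hardware to the provider.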


The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic about where their applications are deployed and about who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

Investing in frontier technology is (and isn’t) cleantech all over again

I entered the world of venture investing a dozen years ago.  Little did I know that I was embarking on a journey to master the art of balancing contradictions: building up experience and pattern recognition to identify outliers, emphasizing what’s possible over what’s actual, generating comfort and consensus around a maverick founder with a non-consensus view, seeking the comfort of proof points in startups that are still very early, and most importantly, knowing that no single lesson learned can ever be applied directly in the future as every future scenario will certainly be different.

I was fortunate to start my venture career at a fund specializing in funding “Frontier” technology companies. Real estate was white-hot, banks were practically giving away money, and VCs were hungry to fund hot startups.

I quickly found myself in the same room as mainstream software investors looking for what’s coming after search, social, ad-tech, and enterprise software. Cleantech was very compelling: an opportunity to make money while saving our planet.  Unfortunately for most, neither happened: they lost their money and did little to save the planet.

Fast forward a decade: after investors scored their wins in online lending, cloud storage, and on-demand, I find myself, again, in the same room with consumer and cloud investors venturing into “Frontier Tech”. They are dazzled by the founders’ presentations, and proud to have a role in funding the turning of the seemingly impossible into the possible through science. However, what lessons did they take away from the Cleantech cycle? What should Frontier Tech founders and investors be thinking about to avoid the same fate?

Coming from a predominantly academic background, I was excited to be part of the emerging trend of funding founders leveraging technology to make the way we generate, move, and consume our natural resources more efficient and sustainable. I was thrilled to be digging into technologies underpinning new batteries, photovoltaics, wind turbines, superconductors, and power electronics.

To prove out their business models, these companies needed to build out factories, supply chains, and distribution channels. It wasn’t long until the core technology development became a small piece of an otherwise complex, expensive operation. The hot energy startup factory started to look and feel mysteriously like a magnetic hard drive factory down the street. Wait a minute, that’s because much of the equipment and staff did come from factories making components for PCs. But this time they were making products for generating, storing, and moving energy more renewably. So what went wrong?

Whether it was solar, wind, or batteries, the metrics were pretty similar: dollars per megawatt, mass per megawatt, or, multiplying by time, dollars and mass per unit of energy, whether for the factories or the systems. Energy is pretty abundant, so the race was on to produce and handle a commodity. Getting started as a real, competitive business meant going BIG, as many of the metrics above depended on size and scale. Hundreds of millions of dollars of venture money only went so far.

The onus was on banks, private equity, engineering firms, and other entities that do not take technology risk to make a leap of faith and take a product or factory from 1/10th scale to full scale. The rest is history: most cleantech startups hit a funding valley of death. They needed to raise big money while sitting at high valuations, without the kernel of a real business to attract the investors who write big checks to scale up businesses.


Frontier Tech, like Cleantech, can be capital-intense. Whether it’s satellite communications, driverless cars, AI chips, or quantum computing, relatively large amounts of capital are needed, as in Cleantech, to take these startups to the point where they can demonstrate the kernel of a competitive business. In other words, they typically need at least tens of millions of dollars to show they can sell something and profitably scale that business into a big market. Some money is dedicated to technology development but, as in cleantech, a disproportionate amount will go into building up an operation to support the business. Here are a couple of examples:

  • Satellite communications: It takes a few million dollars to demonstrate a new radio and spacecraft. It takes tens of millions of dollars to produce the satellites, put them into orbit, build up ground-station infrastructure, and develop the software, systems, and operations needed to serve fickle enterprise customers, all while facing competition from incumbent or in-house efforts. At what point will the economics of the business attract a conventional growth investor to fund expansion? If Cleantech taught us anything, it’s that the big money will prefer to watch from the sidelines for longer than you’d think.
  • Quantum compute: Moore’s law is improving new computers at a breakneck pace, but the way they get implemented is pretty incremental. Basic compute architectures date back to the dawn of computing, and new devices can take decades to find their way into servers. For example, NAND flash technology dates back to the ‘80s, found its way into devices in the ‘90s, and has been slowly penetrating data centers over the past decade. The same goes for GPUs, even with all the hype around AI. Quantum compute companies could offer a service directly to users, e.g., homomorphic computing, advanced encryption/decryption, or molecular simulations. However, that would be one of the rare occasions where a novel computing-machine company has offered computing as a service as opposed to just selling machines. If I had to guess, building the quantum computers will be relatively quick; building the business will be expensive.
  • Operating systems for driverless cars: Tremendous progress has been made since Google first presented its early work in 2011. Dozens of companies are building software that does some combination of perception, prediction, planning, mapping, and simulation. Every operator of autonomous cars, whether vertically integrated like Zoox or working in partnerships like GM/Cruise, has its own proprietary technology stack. Unlike building an iPhone app, where the tools are abundant and the platform is well understood, integrating a complete software module into an autonomous driving system may take more effort than putting together the original code in the first place.

How are Frontier-Tech companies advantaged relative to their Cleantech counterparts? For starters, most aren’t producing a commodity: it’s easier to build a Frontier-tech company that doesn’t need to raise big dollars before demonstrating the kernel of an interesting business. On rare occasions, if the Frontier tech startup is a pioneer in its field, then it can be acquired for top dollar for the quality of its results and its team.

Recent examples are Salesforce’s acquisition of Metamind, GM’s acquisition of Cruise, and Intel’s acquisition of Nervana (a Lux investment). However, as more competing companies get to work on a new technology, the sense of urgency to acquire rapidly diminishes as the scarce, emerging technology quickly becomes widely available: there are now scores of AI, autonomous car, and AI chip companies out there. Furthermore, as technology becomes more complex, its cost of integration into a product (think about the driverless car example above) also skyrockets.  Knowing this likely liability, acquirers will tend to pay less.

Creative founding teams will find ways to incrementally build interesting businesses as they are building up their technologies.  

I encourage founders and investors to emphasize the businesses they are building through their inventions. I encourage founders to rethink plans that require tens of millions of dollars before being able to sell products, while warning them not to chase revenue for the sake of revenue.

I suggest they look closely at their plans and find creative ways to start penetrating, or building, exciting markets (and hence interesting businesses) with modest amounts of capital. I advise them to work with investors who, regardless of whether they saw how Cleantech unfolded, are convinced that their capital can take the company to the point where it can engage customers with an interesting product, and who have a sense for how it can scale into an attractive business.

One month after denying it will exit China, AWS opens its second region there

 Amazon Web Services has opened its second region in China with a local partner, Ningxia Western Cloud Data Technology. The launch comes just one month after Amazon denied reports that AWS is leaving China, but said the company sold “certain physical infrastructure assets” to Internet services company Beijing Sinnet, which operates its first region in the country, in order to… Read More

As Cook and Pichai leave China, Valley confronts rising internet tyranny in world’s second largest market

 It’s been a bad few months for internet freedom in China (and really a bad few decades, but who is counting?). The government brought into force a broad-ranging “cybersecurity law” earlier this year that empowers Beijing to take unilateral control over critical internet infrastructure, while also mandating that foreign companies keep all citizen data local inside China… Read More

AWS announces per-second billing for EC2 instances

When Amazon launched the AWS EC2 cloud computing service back in 2006, per-hour billing was a big deal, but that scheme also meant that you’d pay for a full hour even if you only used an instance for a few minutes. Over the last few years, AWS’s competitors moved to more flexible billing models (mostly per-minute billing) and now, starting October 2, AWS is one-upping many of them… Read More

Microsoft Azure gets a new VM family for bursty workloads

 Microsoft today announced the preview launch of a new family of virtual machines for its Azure cloud computing platform that’s specifically geared toward bursty workloads. Microsoft argues that these so-called B-series machines, which are currently the lowest cost Azure machines with flexible CPU usage, should work well for workloads like web servers, small databases, and dev/test… Read More

AWS launches new programs to support its partners

Amazon’s AWS cloud computing division is hosting its annual re:Invent developer conference in Las Vegas this week. Ahead of the main part of the event, the company today hosted a keynote for its ecosystem partners who sell tools and services for AWS. During the keynote, the company announced a major extension of its partner programs, as well as a few new features for vendors who want to… Read More

Google’s Cloud Platform will get GPU machines in early 2017

Google’s Cloud Machine Learning service launched earlier this year and, already, the company is calling it one of its “fastest growing product areas.” Today, the company is announcing a number of new features for Cloud Machine Learning users and developers who want to run their own machine learning workloads in Google’s cloud. Unlike its competitors, like AWS and… Read More

Google launches its Cloud Platform region in Tokyo

Google today announced the launch of the Tokyo region of its cloud computing platform. With this, the Google Cloud Platform now features two regions (Tokyo and Taiwan) and a total of six availability zones in Asia. In the Asia-Pacific region, the plan is to also open new regions in Mumbai, Singapore and Sydney over the course of the next year, in addition to new regions in the U.S.,… Read More

Tools that help startups scale effectively

If you wanted to start your own tech business 10 years ago, you needed deep pockets and extensive knowledge of building the various parts of a company yourself. Today, you should need neither a hefty amount of capital nor a degree in engineering. If you have a great idea, you should be able to focus on that idea and not have to worry about building non-core parts of your business from scratch. Read More