Password bypass flaw in Western Digital My Cloud drives puts data at risk

A security researcher has published details of a vulnerability in a popular cloud storage drive after the company failed to issue security patches for over a year.

Remco Vermeulen found a privilege escalation bug in Western Digital’s My Cloud devices, which he said allows an attacker to bypass the admin password on the drive, gaining “complete control” over the user’s data.

The exploit works because the drive’s web-based dashboard doesn’t properly check a user’s credentials before granting access to tools that should require higher levels of access.

The bug was “easy” to exploit, Vermeulen told TechCrunch in an email, and it can be exploited remotely if a My Cloud device allows remote access over the internet — which thousands of devices do. He posted a proof-of-concept video on Twitter.

Details of the bug were also independently found by another security team, which released its own exploit code.

Vermeulen reported the bug in April 2017, more than a year ago, but said the company stopped responding. Normally, security researchers give a company 90 days to respond, in line with industry-accepted responsible disclosure guidelines.

After finding that WD had since updated the My Cloud firmware without fixing the vulnerability, he decided to publish his findings.

A year later, WD still hasn’t released a patch.

The company confirmed that it knows of the vulnerability but did not say why it took more than a year to issue a fix. “We are in the process of finalizing a scheduled firmware update that will resolve the reported issue,” a spokesperson said, which will arrive “within a few weeks.”

WD said that several of its My Cloud products are vulnerable — including the EX2, EX4, and Mirror, but not My Cloud Home.

In the meantime, Vermeulen said that there’s no fix and that users have to “just disconnect” the drive altogether if they want to keep their data safe.

OpenStack’s latest release focuses on bare metal clouds and easier upgrades

The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.

What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.

The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, many are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal clouds, and OpenStack is reacting to this with Ironic, a project that offers all of the management and automation features necessary to run these kinds of deployments.

“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me.

Ironic itself isn’t new, but in today’s update, Ironic gets user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for using container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a certified Kubernetes installer, meaning users can be confident that OpenStack and Kubernetes work together just as they would expect.

Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.

“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”

With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).

The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that upgrades are getting easier (and the public cloud side of OpenStack, which often gets overlooked, continues to grow).

AWS cuts in half the price of most of its Lightsail virtual private servers

AWS Lightsail, which launched in 2016, is Amazon’s answer to the rise of Digital Ocean, OVH and other affordable virtual private server (VPS) players. Lightsail started as a pretty basic service, but over the course of the last two years, AWS added features like block storage, Windows support and additional regions.

Today, the company announced it is launching two new instance sizes and cutting in half the price of most Linux-based Lightsail instances. Windows instances are also getting cheaper, though the price cut there is closer to 30 percent for most instances.

The only Linux instance that isn’t getting a full 50 percent cut is the $5/month 512 MB instance, which will now cost $3.50. That’s not too bad, either. Depending on your needs, 512 MB can be enough to run a few projects, so if you don’t need a full 1 GB, you can save a few dollars by going with Lightsail over Digital Ocean’s smallest $5/month 1 GB instance. Indeed, it’s probably no surprise that Lightsail’s 1 GB instance now also costs $5/month.
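
The arithmetic behind that caveat is simple; here is a minimal sketch in Python using only the prices cited above:

```python
# Price cut on Lightsail's smallest Linux plan, using the figures cited above.
old_price = 5.00   # USD/month for the 512 MB Linux instance before the change
new_price = 3.50   # USD/month for the same instance after the change

cut = (old_price - new_price) / old_price
print(f"Effective price cut: {cut:.0%}")                      # -> Effective price cut: 30%
print(f"A full 50% cut would be ${old_price / 2:.2f}/month")  # -> A full 50% cut would be $2.50/month
```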

All instance types come with attached SSD storage, SSH access, a static IP address and all of the other features you’d expect from a VPS hosting service.

As usual, Windows instances cost a bit more (those Windows licenses aren’t free, after all) and now start at $8 per month for a 512 MB instance. The more usable 1 GB instance will set you back $12 per month.

As for the new instance sizes, the new 16 GB instance will feature 4 vCPUs, 320 GB of storage and a generous 6 TB of data transfer. The 32 GB instance doubles the vCPU and storage numbers and offers 7 TB of data transfer.

 

Google’s G Suite apps and Calendar are getting Gmail’s side panels

One of the best features of the new Gmail is its quick-access side panel with easy access to Google Calendar, Tasks, Keep and your Gmail extensions. Now, Google is bringing this same functionality to Google Calendar, Docs, Sheets, Slides and Drawings, too.

In Google Calendar, you’ll be able to quickly access Keep and Tasks, while in the rest of the G Suite apps, you’ll get easy access to Calendar, Keep and Tasks.

In Gmail, the side panel also brings up access to various G Suite extensions that you may have installed from the marketplace. It doesn’t look like that’s possible in Docs and Calendar right now, though it’s probably only a matter of time before there will be compatible extensions for those products, too. By then, we’ll likely see a “works with Google Calendar” section and support for other G Suite apps in the marketplace, too.

I’m already seeing this in my personal Google Calendar, but not in Google Docs, so this looks to be a slow rollout. The official word is that paying G Suite subscribers on the rapid release schedule should get access now, with those on the slower release schedule getting access in two weeks.

Big tech companies are looking at Hollywood as the next stage in their play for the cloud

This week, both Microsoft and Google made moves to woo Hollywood to their cloud computing platforms in the latest act of the unfolding drama over who will win the multi-billion dollar business of the entertainment industry as it moves to the cloud.

Google raised the curtain with a splashy announcement that it is setting up its fifth U.S. cloud region in Los Angeles. Keeping the focus squarely on artists and designers, the company talked up tools like Zync Render, which it acquired back in 2014, and Anvato, a video streaming and monetization platform it acquired in 2016.

While Google just launched its LA hub, Microsoft has operated a cloud region in Southern California for a while, and started wooing Hollywood last year at the National Association of Broadcasters conference, according to Tad Brockway, a general manager for Azure’s storage and media business.

Now Microsoft has responded with a play of its own, partnering with Nimble Collective, the provider of a suite of hosted graphic design and animation software tools.

Founded by a former Pixar and DreamWorks animator, Rex Grignon, Nimble launched in 2014 and has raised just under $10 million from investors including the UCLA VC Fund and New Enterprise Associates, according to Crunchbase.

“Microsoft is committed to helping content creators achieve more using the cloud with a partner-focused approach to this industry’s transformation,” said Tad Brockway, General Manager, Azure Storage, Media and Edge at Microsoft, in a statement. “We’re excited to work with innovators like Nimble Collective to help them transform how animated content is produced, managed and delivered.”

There’s a lot at stake for Microsoft, Google and Amazon as entertainment companies look to migrate to managed computing services. Tech firms like IBM have been pitching the advantages of cloud computing for Hollywood since 2010, but it’s only recently that companies have begun courting the entertainment industry in earnest.

While leaders like Netflix migrated to cloud services in 2012 and 21st Century Fox worked with HP to get its infrastructure on cloud computing, other companies have lagged. Now companies like Microsoft, Google, and Amazon are competing for their business as more companies wake up to the pressures and demands for more flexible technology architectures.

As broadcasters face more demanding consumers, fragmented audiences, and greater time pressures to produce and distribute more content more quickly, cloud architectures for technology infrastructure can provide a solution, tech vendors argue.

Stepping into the breach, cloud computing and technology service providers like Google, Amazon, and Microsoft are trying to buy up startups servicing the entertainment market specifically, or lock in vendors like Nimble through exclusive partnerships that they can leverage to win new customers. For instance, Microsoft bought Avere Systems in January, and Google picked up Anvato in 2016 to woo entertainment companies.

The result should be lower-cost tools for a broader swath of the market and more cross-pollination across different geographies, according to Grignon, Nimble’s chief executive.

“That worldwide reach is very important,” Grignon said. “In media and entertainment there are lots of isolated studios around the world. We afford this pathway between the studio in LA and the studio in Bangalore. We open these doorways.”

There are other, more obvious advantages as well. Streaming, exemplified by the well-understood relationship between Amazon and Netflix, is one, but moving to cloud architectures also promises lower costs, several other distribution advantages and simpler processes across pre- and post-production, insiders said.

 

After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect on the change Marc Benioff created when he launched Salesforce.com and, with it, the software-as-a-service (SaaS) model for enterprise software.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition, Microsoft’s purchase of GitHub, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated. A rough cost comparison of the two models follows this list.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
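
To make the economics in the first point concrete, here is a minimal sketch in Python; the license price, seat count and per-seat subscription fee are purely illustrative assumptions, while the 20 percent annual support fee is the figure cited above:

```python
# Rough, illustrative comparison of the two pricing models described above.
# Only the 20 percent annual support fee comes from the text; every other
# number here is a hypothetical placeholder.

def perpetual_license_cost(license_fee, support_rate, years):
    """Large upfront license plus a yearly support fee of support_rate * license."""
    return license_fee + license_fee * support_rate * years

def subscription_cost(seats, fee_per_seat_per_month, years):
    """No upfront payment; cost scales with seats and time ("land and expand")."""
    return seats * fee_per_seat_per_month * 12 * years

years = 5
traditional = perpetual_license_cost(license_fee=1_000_000, support_rate=0.20, years=years)
saas = subscription_cost(seats=200, fee_per_seat_per_month=65, years=years)

print(f"Perpetual license over {years} years: ${traditional:,.0f}")  # $2,000,000
print(f"Subscription over {years} years:      ${saas:,.0f}")         # $780,000
```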

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late 2000s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.


It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite made the switch from upfront licensing to thriving subscription businesses years ago. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The costs of compute and storage have been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes (see the sketch after this list).
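
As a rough illustration of that last point, here is a minimal sketch using the official Kubernetes Python client (the `kubernetes` package) to stand up a containerized application on whatever cluster the local kubeconfig points to, whether that cluster lives in a public cloud, a VPC or an on-premises data center. The deployment name and container image below are hypothetical placeholders, not a real product:

```python
# Minimal sketch: deploy a containerized app onto any Kubernetes cluster
# reachable from the local kubeconfig (public cloud, VPC or on-prem).
# The deployment name and image are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()   # load credentials for whichever cluster kubectl targets
apps = client.AppsV1Api()

labels = {"app": "example-enterprise-app"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="example-enterprise-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # three identical pods behind one label selector
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="app",
                        image="registry.example.com/enterprise-app:1.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; the same script works against any conformant cluster.")
```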


What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.


The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

Investing in frontier technology is (and isn’t) cleantech all over again

I entered the world of venture investing a dozen years ago.  Little did I know that I was embarking on a journey to master the art of balancing contradictions: building up experience and pattern recognition to identify outliers, emphasizing what’s possible over what’s actual, generating comfort and consensus around a maverick founder with a non-consensus view, seeking the comfort of proof points in startups that are still very early, and most importantly, knowing that no single lesson learned can ever be applied directly in the future as every future scenario will certainly be different.

I was fortunate to start my venture career at a fund specializing in funding “Frontier” technology companies. Real estate was white-hot, banks were practically giving away money, and VCs were hungry to fund hot startups.

I quickly found myself in the same room as mainstream software investors looking for what’s coming after search, social, ad-tech, and enterprise software. Cleantech was very compelling: an opportunity to make money while saving our planet.  Unfortunately for most, neither happened: they lost their money and did little to save the planet.

Fast forward a decade: after investors scored their wins in online lending, cloud storage, and on-demand, I find myself, again, in the same room with consumer and cloud investors venturing into “Frontier Tech.” They are dazzled by the founders’ presentations, and proud to have a role in funding the transformation of the seemingly impossible into the possible through science. However, what lessons did they take away from the Cleantech cycle? What should Frontier Tech founders and investors be thinking about to avoid the same fate?

Coming from a predominantly academic background, I was excited to be part of the emerging trend of funding founders leveraging technology to make how we generate, move, and consume our natural resources more efficient and sustainable. I was thrilled to be digging into technologies underpinning new batteries, photovoltaics, wind turbines, superconductors, and power electronics.  

To prove out their business models, these companies needed to build out factories, supply chains, and distribution channels. It wasn’t long until the core technology development became a small piece of an otherwise complex, expensive operation. The hot energy startup factory started to look and feel mysteriously like a magnetic hard drive factory down the street. Wait a minute: that’s because much of the equipment and staff did come from factories making components for PCs, but this time they were making products for generating, storing, and moving energy more renewably. So what went wrong?

Whether it was solar, wind, or batteries, the metrics were pretty similar: dollars per megawatt, mass per megawatt, or, multiplying by time, dollars and mass per unit of energy, whether for the factories or the systems. Energy is pretty abundant, so the race was on to produce and handle a commodity. Getting started as a real competitive business meant going big, as many of the metrics above depended on size and scale. Hundreds of millions of dollars of venture money only went so far.

The onus was on banks, private equity, engineering firms, and other entities that do not take technology risk to make a leap of faith and take a product or factory from one-tenth scale to full scale. The rest is history: most cleantech startups hit a funding valley of death. They needed to raise big money at high valuations, without the kernel of a real business to attract the investors who write those big scale-up checks.


Frontier Tech, like Cleantech, can be capital-intense. Whether it’s satellite communications, driverless cars, AI chips, or quantum computing, there are, as with Cleantech, relatively large amounts of capital needed to take these startups to the point where they can demonstrate the kernel of a competitive business. In other words, they typically need at least tens of millions of dollars to show they can sell something and profitably scale that business into a big market. Some money is dedicated to technology development, but, as with Cleantech, a disproportionate amount will go into building up an operation to support the business. Here are a couple of examples:

  • Satellite communications: It takes a few million dollars to demonstrate a new radio and spacecraft. It takes tens of millions of dollars to produce the satellites, put them into orbit, and build up the ground station infrastructure, software, systems, and operations needed to serve fickle enterprise customers, all while facing competition from incumbent or in-house efforts. At what point will the economics of the business attract a conventional growth investor to fund expansion? If Cleantech taught us anything, it’s that the big money would prefer to watch from the sidelines for longer than you’d think.
  • Quantum compute: Moore’s law is improving new computers at a breakneck pace, but the way they get implemented is pretty incremental. Basic compute architectures date back to the dawn of computing, and new devices can take decades to find their way into servers. For example, NAND flash technology dates back to the ’80s, found its way into devices in the ’90s, and has been slowly penetrating data centers in the past decade. The same goes for GPUs, even with all the hype around AI. Quantum compute companies could offer a service directly to users, i.e., homomorphic computing, advanced encryption/decryption, or molecular simulations. However, that would be one of the rare occasions where a novel computing machine company has offered computing as a service as opposed to just selling machines. If I had to guess, building the quantum computers will be relatively quick; building the business will be expensive.
  • Operating systems for driverless cars: Tremendous progress has been made since Google first presented its early work in 2011. Dozens of companies are building software that does some combination of perception, prediction, planning, mapping, and simulation. Every operator of autonomous cars, whether vertically integrated like Zoox or working in partnerships like GM/Cruise, has its own proprietary technology stack. Unlike building an iPhone app, where the tools are abundant and the platform is well understood, integrating a complete software module into an autonomous driving system may take more effort than putting together the original code in the first place.

How are Frontier-Tech companies advantaged relative to their Cleantech counterparts? For starters, most aren’t producing a commodity: it’s easier to build a Frontier-tech company that doesn’t need to raise big dollars before demonstrating the kernel of an interesting business. On rare occasions, if the Frontier tech startup is a pioneer in its field, then it can be acquired for top dollar for the quality of its results and its team.

Recent examples are Salesforce’s acquisition of Metamind, GM’s acquisition of Cruise, and Intel’s acquisition of Nervana (a Lux investment). However, as more competing companies get to work on a new technology, the sense of urgency to acquire rapidly diminishes as the scarce, emerging technology quickly becomes widely available: there are now scores of AI, autonomous car, and AI chip companies out there. Furthermore, as technology becomes more complex, its cost of integration into a product (think about the driverless car example above) also skyrockets.  Knowing this likely liability, acquirers will tend to pay less.

Creative founding teams will find ways to incrementally build interesting businesses as they are building up their technologies.  

I encourage founders and investors to emphasize the businesses they are building through their inventions. I encourage founders to rethink plans that require tens of millions of dollars before being able to sell products, while warning them not to chase revenue for the sake of revenue.

I suggest they look closely at their plans and find creative ways to start penetrating or building exciting markets, and hence interesting businesses, with modest amounts of capital. I advise them to work with investors who, regardless of whether they saw how Cleantech unfolded, are convinced that their dollars can take the company to the point where it can engage customers with an interesting product and a sense of how it can scale into an attractive business.

One month after denying it will exit China, AWS opens its second region there

 Amazon Web Services has opened its second region in China with a local partner, Ningxia Western Cloud Data Technology. The launch comes just one month after Amazon denied reports that AWS is leaving China, but said the company sold “certain physical infrastructure assets” to Internet services company Beijing Sinnet, which operates its first region in the country, in order to… Read More

As Cook and Pichai leave China, Valley confronts rising internet tyranny in world’s second largest market

 It’s been a bad few months for internet freedom in China (and really a bad few decades, but who is counting?). The government brought into force a broad-ranging “cybersecurity law” earlier this year that empowers Beijing to take unilateral control over critical internet infrastructure, while also mandating that foreign companies keep all citizen data local inside China… Read More

AWS announces per-second billing for EC2 instances

When Amazon launched the AWS EC2 cloud computing service back in 2006, per-hour billing was a big deal, but that scheme also meant that you’d pay for a full hour even if you only used an instance for a few minutes. Over the last few years, AWS’s competitors moved to more flexible billing models (mostly per-minute billing) and now, starting October 2, AWS is one-upping many of them… Read More