Five great reasons to attend TechCrunch’s Enterprise show Sept. 5 in SF

The vast enterprise tech category is Silicon Valley’s richest, and today it’s poised to change faster than ever before. That’s probably the biggest reason to come to TechCrunch’s first-ever show focused entirely on enterprise. But here are five more reasons to commit to joining TechCrunch’s editors on September 5 at San Francisco’s Yerba Buena Center for an outstanding day (agenda here) addressing the tech tsunami sweeping through enterprise. 

#1 Artificial Intelligence.
AI is at once the most consequential and the most hyped technology in enterprise, and no one doubts that it will change business software and increase productivity like few, if any, technologies before it. To peek into that future, TechCrunch will interview Andrew Ng, arguably the world’s most experienced AI practitioner at huge companies (Baidu, Google) as well as at startups. AI will be a theme across every session, but we’ll address it again head-on in a panel with investor Jocelyn Goldfein (Zetta), founder Bindu Reddy (Reality Engines) and executive John Ball (Salesforce / Einstein).

#2 Data, the Cloud and Kubernetes.
If AI is the dawn of tomorrow, cloud transformation is the high noon of today. 90% of the world’s data was created in the past two years, and no enterprise can keep its data hoard on-prem forever. Azure CTO Mark Russinovich will discuss Microsoft’s vision for the cloud. Leaders of the open-source Kubernetes revolution, including Joe Beda (VMware) and Aparna Sinha (Google), will dig into what Kubernetes means to companies making the move to the cloud. And last, there is the question of how to find the signal in all that data, which will bring three visionary founders to the stage: Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx).

#3 Everything else on the main stage!
Let’s start with a fireside chat with SAP CEO Bill McDermott and Qualtrics Chief Experience Officer Julie Larson-Green. We also have top investors talking about where they are making their bets, and security experts talking data and privacy. And then there is quantum, the technology revolution waiting on the other side of AI: Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; Jim Clarke, the director of quantum hardware at Intel Labs; and Krysta Svore, who leads Microsoft’s quantum effort.

All told, there are 21 programming sessions.

#4 Network and get your questions answered.
There will be two Q&A breakout sessions with top enterprise investors, where founders (and anyone else) can query investors directly. Plus, TechCrunch’s unbeatable CrunchMatch app makes it easy to set up meetings with the other attendees, an incredible array of folks, as well as the 20 early-stage startups exhibiting on the expo floor.

#5 SAP
Enterprise giant SAP is our sponsor for the show, and they are not only bringing a squad of top executives, they are producing four parallel track sessions featuring SAP Chief Innovation Officer Max Wessel, SAP Chief Designer and Futurist Martin Wezowski and SAP.iO Managing Director Ram Jambunathan, in sessions covering how to scale up an enterprise startup, how startups win large enterprise customers and what the enterprise future looks like.

Check out the complete agenda. Don’t miss this show! This line-up is a view into the future like none other. 

Grab your $349 tickets today, and don’t wait until the day of the show to book, because prices go up at the door!

We still have 2 Startup Demo Tables left. Each table comes with 4 tickets and a prime location to demo your startup on the expo floor. Book your demo table now before they’re all gone!

Microsoft Azure CTO Mark Russinovich will join us for TC Sessions: Enterprise on September 5

Being the CTO for one of the three major hypercloud providers may seem like enough of a job for most people, but Mark Russinovich, the CTO of Microsoft Azure, has a few other talents in his back pocket. Russinovich, who will join us for a fireside chat at our TechCrunch Sessions: Enterprise event in San Francisco on September 5 (p.s. early-bird sale ends Friday), is also an accomplished novelist who has published four novels, all of which center around tech and cybersecurity.

At our event, though, we won’t focus on his literary accomplishments (except for maybe his books about Windows Server) as much as on the trends he’s seeing in enterprise cloud adoption. Microsoft, maybe more so than its competitors, has made enterprise customers and their needs the focus of its cloud initiatives from the outset. Today, as the majority of enterprises look to move at least some of their legacy workloads into the cloud, they are often stumped by the sheer complexity of that undertaking.

In our fireside chat, we’ll talk about what Microsoft is doing to reduce this complexity and how enterprises can maximize their current investments in the cloud, both for running new cloud-native applications and for bringing legacy applications into the future. We’ll also talk about new technologies that can make the move to the cloud more attractive to enterprises, including the current buzz around edge computing, IoT, AI and more.

Before joining Microsoft, Russinovich, who has a Ph.D. in computer engineering from Carnegie Mellon, was the co-founder and chief architect of Winternals Software, which Microsoft acquired in 2006. During his time at Winternals, Russinovich discovered the infamous Sony rootkit. Over his 13 years at Microsoft, he moved from Technical Fellow up to the CTO position for Azure, which continues to grow at a rapid clip as it looks to challenge AWS’s leadership in total cloud revenue.

Tomorrow, Friday, August 16 is your last day to save $100 on tickets before prices go up. Book your early-bird tickets now and keep that Benjamin in your pocket.

If you’re an early-stage startup, we only have 3 demo table packages left! Each demo package comes with 4 tickets and a great location for your company to get in front of attendees. Book your demo package today before we sell out!

VMware says it’s looking to acquire Pivotal

VMware today confirmed that it is in talks to acquire software development platform Pivotal Software, the service best known for commercializing the open-source Cloud Foundry platform. The proposed transaction would see VMware acquire all outstanding Pivotal Class A stock for $15 per share, a significant markup over Pivotal’s current share price (which unsurprisingly shot up right after the announcement).

Pivotal’s shares have struggled since the company’s IPO in April 2018. The company was originally spun out of EMC Corporation (now DellEMC) and VMware in 2012 to focus on Cloud Foundry, an open-source software development platform that is currently in use by the majority of Fortune 500 companies. A lot of these enterprises are working with Pivotal to support their Cloud Foundry efforts. Dell itself continues to own the majority of VMware and Pivotal, and VMware already owns an interest in Pivotal and sells Pivotal’s services to its customers. It’s a bit of an ouroboros of a transaction.

Pivotal Cloud Foundry was always the company’s main product, but it also offered additional consulting services on top of that. Despite improving its execution since going public, Pivotal still lost $31.7 million in its last financial quarter as its stock price traded at just over half of the IPO price. Indeed, the $15 per share VMware is offering is identical to Pivotal’s IPO price.

An acquisition by VMware would bring Pivotal’s journey full circle, though this is surely not the journey the Pivotal team expected. VMware is a Cloud Foundry Foundation platinum member, together with Pivotal, DellEMC, IBM, SAP and Suse, so I wouldn’t expect any major changes in VMware’s support of the overall open-source ecosystem behind Pivotal’s core platform.

It remains to be seen whether the acquisition will indeed happen, though. In a press release, VMware acknowledged the discussion between the two companies but noted that “there can be no assurance that any such agreement regarding the potential transaction will occur, and VMware does not intend to communicate further on this matter unless and until a definitive agreement is reached.” That’s the kind of sentence lawyers like to write. I would be quite surprised if this deal didn’t happen, though.

Buying Pivotal would also make sense in the grand scheme of VMware’s recent acquisitions. Earlier this year, the company acquired Bitnami, and last year it acquired Heptio, the startup founded by two of the three co-founders of the Kubernetes project. Kubernetes now forms the basis of many new enterprise cloud deployments and, most recently, of Pivotal Cloud Foundry itself.

Hundreds of exposed Amazon cloud backups found leaking sensitive data

How safe are your secrets? If you use Amazon’s Elastic Block Storage (EBS) snapshots, you might want to check your settings.

New research just presented at the Def Con security conference reveals how companies, startups and governments are inadvertently leaking their own files from the cloud.

You may have heard of exposed S3 buckets — those Amazon-hosted storage servers packed with customer data but often misconfigured and inadvertently set to “public” for anyone to access. But you may not have heard of exposed EBS snapshots, which pose as much, if not greater, risk.

These EBS snapshots are the “keys to the kingdom,” said Ben Morris, a senior security analyst at cybersecurity firm Bishop Fox, in a call with TechCrunch ahead of his Def Con talk. EBS snapshots store all the data for cloud applications. “They have the secret keys to your applications and they have database access to your customers’ information,” he said.

“When you get rid of the hard disk for your computer, you know, you usually shred it or wipe it completely,” he said. “But these public EBS volumes are just left for anyone to take and start poking at.”

He said that all too often cloud admins don’t choose the correct configuration settings, leaving EBS snapshots inadvertently public and unencrypted. “That means anyone on the internet can download your hard disk and boot it up, attach it to a machine they control, and then start rifling through the disk to look for any kind of secrets,” he said.

One of Morris’ Def Con slides explaining how EBS snapshots can be exposed. (Image: Ben Morris/Bishop Fox; supplied)

Morris built a tool that uses Amazon’s own internal search feature to query and scrape publicly exposed EBS snapshots, then attaches each one, makes a copy and lists the contents of the volume on his system.

“If you expose the disk for even just a couple of minutes, our system will pick it up and make a copy of it,” he said.

Another slide noting the types of compromised data found using his research, often known as the “Wall of Sheep” (Image: Ben Morris/Bishop Fox; supplied)

It took him two months, and just a few hundred dollars spent on Amazon cloud resources, to build up a database of exposed data. Once he validates each snapshot, he deletes the data.

Morris found dozens of snapshots exposed publicly in one region alone, he said, including application keys, critical user and administrative credentials, source code and more. Those snapshots included data from several major companies, including healthcare providers and tech companies.

He also found VPN configurations, which he said could allow him to tunnel into a corporate network. Morris said he did not use any credentials or sensitive data, as it would be unlawful.

Among the most damaging finds, Morris said, was a snapshot from one government contractor, which he did not name, that provides data storage services to federal agencies. “On their website, they brag about holding this data,” he said, referring to everything from collected intelligence on messages sent to and from the so-called Islamic State terror group to data on border crossings.

“Those are the kind of things I would definitely not want to be exposed to the public internet,” he said.

He estimates the figure could be as many as 1,250 exposures across all Amazon cloud regions.

Morris plans to release his proof-of-concept code in the coming weeks.

“I’m giving companies a couple of weeks to go through their own disks and make sure that they don’t have any accidental exposures,” he said.
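
If you want to run that check on your own account before his code lands, a minimal sketch along these lines will do it — this is our illustration, not Morris’ tool, and the account ID is a placeholder. It uses boto3 to list any EBS snapshots your account has left publicly restorable:

```python
# Minimal audit sketch (not Morris' tool): list EBS snapshots owned by
# your AWS account that anyone on the internet can restore.
# Assumes boto3 is installed and AWS credentials are configured;
# the account ID below is a placeholder.
import boto3

ACCOUNT_ID = "123456789012"  # placeholder: your AWS account ID

ec2 = boto3.client("ec2", region_name="us-east-1")

# RestorableByUserIds=["all"] limits results to snapshots that any AWS
# account can copy or attach, i.e. public snapshots.
paginator = ec2.get_paginator("describe_snapshots")
pages = paginator.paginate(OwnerIds=[ACCOUNT_ID], RestorableByUserIds=["all"])

public = [snap for page in pages for snap in page["Snapshots"]]
for snap in public:
    print(f"PUBLIC: {snap['SnapshotId']} {snap.get('Description', '')}")
    # To make a snapshot private again, drop the "all" create-volume permission:
    # ec2.modify_snapshot_attribute(
    #     SnapshotId=snap["SnapshotId"],
    #     Attribute="createVolumePermission",
    #     OperationType="remove",
    #     GroupNames=["all"],
    # )

if not public:
    print("No publicly restorable snapshots found in this region.")
```

Note that snapshots are regional, so a thorough audit repeats this check in every region your account uses.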

After the Capital One breach, do you know who’s in your cloud?

The recently reported Capital One data breach has once again turned the technology world’s attention to cloud security. Speculation is about all the industry can offer on exactly what happened and how the events came to pass; the indictment is vague and the companies are in PR crisis mode.

Let’s not waste this time on conjecture. It’s important to focus on the uncomfortable yet completely valid cloud security concerns while everyone is listening.

The elephant in the room in cloud platform security is the inherently problematic issue of customers not knowing which cloud provider employees are entrusted with administrative-level access to the clouds themselves. Cloud Customer X does not know the names of employees at Cloud Provider Y who, upon succumbing to moral failing, could theoretically abuse privileged knowledge, credentials, or internal cloud provider tools in order to inappropriately access, copy, or otherwise interact with Cloud Customer X’s provisioned systems or stored data.

To be clear, there’s no suggestion that the Capital One breach is the result of insider access or privileged knowledge abuse. While the alleged perpetrator’s prior work history includes employment at Amazon Web Services — the cloud provider from which the data was downloaded — the amount of cloud service know-how necessary to pull off the alleged wrongful acts can certainly be gained by anyone with an internet connection and enough curiosity.

Instead, we need to talk about cloud platform security in a broader sense. We need to make sure when executives sign on the dotted line and agree to put mountains of their own customer data under someone else’s control that they understand the stark trade-off realities, rather than the myths, of cloud platform security.

Simply put, moving operations into the cloudspace means you are putting yourself at the mercy of the cloud host. Ultimately, the cloud provider can take their ball and go home, leaving your business stranded. Doing so might be in violation of some words that an attorney typed up and both sides agreed to. But those words cannot physically stop a cloud provider’s rogue subcontractor from abusing trusted access — of which the cloud customer would most likely never know.

There are no easy fixes for such a scenario. But it would be foolish to wait for egregious examples of cloud platform insider abuse to be known publicly prior to sparking the very important conversation, even if the topic is uncomfortable for cloud providers to acknowledge and unsettling for cloud users to realize.

Microsoft Azure now lets you have a server all to yourself

Microsoft today announced the preview launch of Azure Dedicated Host, a new cloud service that will allow you to run your virtual machines on single-tenant physical servers. That means you’re not sharing any resources on that server with anybody else, and you’ll get full control over everything that’s running on that machine.

Azure already offered isolated virtual machine sizes for two very large VM types. Those are still available, but their use cases are limited compared with these new hosts, which offer far more flexibility.

With this move, Microsoft is following in the footsteps of AWS, which also offers Dedicated Hosts with very similar capabilities. Google Cloud, too, offers what it calls ‘sole-tenant nodes.’

Azure Dedicated Host will support Windows, Linux and SQL Server virtual machines and pricing is per host, independent of the number of virtual machines you end up running on them. You can currently opt for machines with up to 144 physical cores and prices start at $4.039 per hour.

Microsoft is offering two different processors to power these machines. Type 1 is based on the 2.3 GHz Intel Xeon E5-2673 v4, with clock speeds of up to 3.5 GHz, while Type 2 features the Intel Xeon Platinum 8168, with single-core clock speeds of up to 3.7 GHz. Available memory ranges from 32 GiB to 448 GiB. You can find more details here.

As Microsoft notes, these new dedicated hosts can help companies reach their compliance requirements for physical security, data integrity and monitoring. The dedicated hosts still share the same underlying infrastructure as any other host in the Azure data centers, but users have full control over any maintenance window that could impact their servers.

These dedicated hosts can also be grouped into larger host groups in a given Azure region, allowing you to build clusters of your own physical servers inside the Azure data center. Since you’re actually renting a physical machine, any hardware issue on that machine will impact the virtual machines you are running on them, so chances are you’ll want to have multiple dedicated hosts for your failover strategy anyway.
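
For a rough sense of how provisioning looks, here is a minimal sketch using the azure-mgmt-compute Python SDK. The subscription ID, resource group, names, region and SKU are placeholders, and exact method names can vary between SDK versions:

```python
# Minimal sketch: create a dedicated host group with two fault domains and
# place one physical host in it. All names/IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "my-rg"               # placeholder resource group

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A host group pins dedicated hosts to a region (and optionally a zone);
# two fault domains let you spread hosts for the failover strategy above.
client.dedicated_host_groups.create_or_update(
    RESOURCE_GROUP,
    "my-host-group",
    {"location": "eastus", "platform_fault_domain_count": 2},
)

# Provision one physical host in the group. The SKU picks the hardware;
# "DSv3-Type1" corresponds to the Type 1 (Xeon E5-2673 v4) machines.
poller = client.dedicated_hosts.begin_create_or_update(
    RESOURCE_GROUP,
    "my-host-group",
    "my-host-1",
    {
        "location": "eastus",
        "sku": {"name": "DSv3-Type1"},
        "platform_fault_domain": 0,
    },
)
print("Provisioned:", poller.result().name)
```

From there, a VM lands on that specific machine by passing a host reference at creation time, so everything you create against my-host-1 shares only that physical server.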

Amazon says U.S. government demands for customer data went up

Amazon said the U.S. government demanded more data from the company during the first half of 2019 than in the previous six-month period.

The latest figures, which landed in the company’s transparency report, published quietly on its website late Wednesday, show that the number of subpoenas it received went up by 14% and that search warrants went up by close to 35%.

That includes data collected from its Amazon Echo voice assistant service, its Kindle and Fire tablets, and its home security devices.

Amazon turned over some or all data in about four out of five cases, the figures show.

But the number of other legal demands Amazon received was down slightly.

The company’s cloud business, Amazon Web Services — which makes up the bulk of Amazon’s annual operating income — also reported separately a 77% increase in the number of subpoenas it received for cloud-stored customer data, but a decline in received search warrants.

Per reporting rules set out by the Justice Department, Amazon said it received between 0 and 249 national security requests, for both consumer and cloud services.

Amazon was one of the last major tech companies to issue a transparency report, despite mounting pressure from privacy advocates. The company eventually buckled, releasing its first set of figures several days after whistleblower Edward Snowden leaked highly classified documents which revealed mass surveillance by the U.S. National Security Agency and its global intelligence counterparts.

The company said at the time and continued to maintain until recently that it “never participated” in the NSA’s so-called PRISM program, which allowed the government to obtain data from Apple, Google, Microsoft, and several other tech companies.

But TechCrunch noticed that Amazon removed that wording from its transparency report pages several weeks ago.

When reached, an Amazon spokesperson said that the change was “simply because it was a somewhat dated reference.”

Amazon acquires flash-based cloud storage startup E8 Storage

Amazon has acquired Israeli storage tech startup E8 Storage, as first reported by Reuters, CNBC and Globes and confirmed by TechCrunch. The acquisition will bring the team and technology from E8 into Amazon’s existing Amazon Web Services center in Tel Aviv, per reports.

E8 Storage’s particular focus was on building storage hardware that employs flash-based memory to deliver faster performance than competing offerings, according to its own claims. How exactly AWS intends to use the company’s talent or assets isn’t yet known, but the technology clearly lines up with its primary business.

AWS acquisitions this year include TSO Logic, a Vancouver-based startup that optimizes data center workload operating efficiency, and Israel-based CloudEndure, which provides data recovery services in the event of a disaster.

DigitalOcean gets a new CEO and CFO

DigitalOcean, the cloud infrastructure service that made a name for itself by focusing on low-cost hosting options in its early days, today announced that it has appointed former SendGrid COO and CFO Yancey Spruill as its new CEO and former EnerNOC CFO Bill Sorenson as its new CFO. Spruill will replace Mark Templeton, who only joined the company a little more than a year ago and who had announced in May his decision to step down for personal reasons.

“DigitalOcean is a brand I’ve followed and admired for a while — the leadership team has done a tremendous job building out the products, services and, most importantly, a community that puts developer needs first,” said Spruill in today’s announcement. “We have a multi-billion dollar revenue opportunity in front of us and I’m looking forward to working closely with our strong leadership team to build upon the current strategy to drive DigitalOcean to the company’s full potential.”

Spruill does have a lot of experience, given that he was in CxO positions at SendGrid through both its IPO in 2017 and its sale to Twilio in 2019. He also previously held the CFO role at DigitalGlobe, which he also guided to an IPO.

In his announcement, Spruill notes that he expects DigitalOcean to focus on its core business, which currently has about 500,000 users (though it’s unclear how many of those are active, paying users). “My aspiration is for us to continue to provide everything you love about DO now, but to also enhance our offerings in a way that is meaningful, strategic and most helpful for you over time,” he writes.

Spruill’s history as CFO includes its fair share of IPOs and sales, but so does Sorenson’s. As CFO at EnerNOC, he guided that company to a sale to investor Enel Group. Before that, he led business intelligence firm Qlik to an IPO.

It’s not unusual for incoming CEOs and CFOs to have this kind of experience, but it does make you wonder what DigitalOcean’s future holds in store. The company isn’t as hyped as it once was and while it still offers one of the best user experiences for developers, it remains a relatively small player in the overall cloud game. That’s a growing market, but the large companies — the ones that bring in the majority of revenue — are looking to Amazon, Microsoft and Google for their cloud infrastructure. Even a small piece of the overall cloud pie can be quite lucrative, but I think DigitalOcean’s ambitions go beyond that.

Google teams up with VMware to bring more enterprises to its cloud

Google today announced a new partnership with VMware that will make it easier for enterprises to run their VMware workloads on Google Cloud. Specifically, Google Cloud will now support VMware Cloud Foundation, the company’s system for deploying and running hybrid clouds. The solution was developed by CloudSimple, not VMware or Google, and Google will offer first-line support, working together with CloudSimple.

While Google would surely love for all enterprises to move to containers and utilize its Anthos hybrid cloud service, most large companies currently use VMware. They may want to move those workloads to a public cloud, but they aren’t ready to give up a tool that has long worked for them. With this new capability, Google isn’t offering anything that is especially new or innovative, but that’s not what this is about. Instead, Google is simply giving enterprises fewer reasons to opt for a competitor without even taking its offerings into account.

“Customers have asked us to provide broad support for VMware, and now with Google Cloud VMware Solution by CloudSimple, our customers will be able to run VMware vSphere-based workloads in GCP,” the company notes in the announcement, which we got an early copy of but which for reasons unknown to us will only go live on the company’s blog tomorrow. “This brings customers a wide breadth of choices for how to run their VMware workloads in a hybrid deployment, from modern containerized applications with Anthos to VM-based applications with VMware in GCP.”

The new solution will offer support for the full VMware stack, including the likes of vCenter, vSAN and NSX-T.

“Our partnership with Google Cloud has always been about addressing customers’ needs, and we’re excited to extend the partnership to enable our mutual customers to run VMware workloads on VMware Cloud Foundation in Google Cloud Platform,” said Sanjay Poonen, chief operating officer, customer operations at VMware. “With VMware on Google Cloud Platform, customers will be able to leverage all of the familiarity and investment protection of VMware tools and training as they execute on their cloud strategies, and rapidly bring new services to market and operate them seamlessly and more securely across a hybrid cloud environment.”

While Google’s announcement highlights that the company has a long history of working with VMware, it’s interesting to note that at least the technical aspects of this partnership are more about CloudSimple than VMware. It’s also worth noting that VMware has long had a close relationship with Google’s cloud competitor AWS, and that Microsoft Azure, too, offers tools for running VMware-based workloads on its cloud.