Google One now offers free phone backups up to 15GB on Android and iOS

Google One, Google’s subscription program for buying additional storage and live support, is getting an update today that will bring free phone backups for Android and iOS devices to anybody who installs the app — even if they don’t have a paid membership. The catch: While the feature is free, the backups count against your free Google storage allowance of 15GB. If you need more, you need — you guessed it — a Google One membership to buy additional storage, or you’ll have to delete data you no longer need. Paid memberships start at $1.99/month for 100GB.

Image Credits: Google

Paid members on Android already got access to this feature last year; it stores your texts, contacts, apps, photos and videos in Google’s cloud. The “free” backups are now available to all Android users, and iOS users will get access once the Google One app rolls out on iOS in the near future.

Image Credits: Google

With this update, Google is also introducing a new storage manager tool in Google One, which is available in the app and on the web, and which allows you to delete files and backups as needed. The tool works across Google properties and lets you find emails with very large attachments or large files in your Google Drive storage, for example.

With this free backup feature, Google is clearly trying to get more people onto Google One. The free 15GB storage limit is pretty easy to hit, after all (and that’s for your overall storage across Google, including Gmail and other services), and paying $1.99 a month for 100GB isn’t exactly a major expense, especially if you are already part of the Google ecosystem and use apps like Google Photos.

Wasabi announces $30M in debt financing as cloud storage business continues to grow

We may be in the thick of a pandemic with all of the economic fallout that comes from that, but certain aspects of technology don’t change no matter the external factors. Storage is one of them. In fact, we are generating more digital stuff than ever, and Wasabi, a Boston-based startup that has figured out a way to drive down the cost of cloud storage, is benefiting from that.

Today it announced a $30 million debt financing round led by Forestay Capital, the technology innovation arm of Waypoint Capital, with help from previous investors. As with the previous round, Wasabi is going with home office investors rather than traditional venture capital firms. Today’s round brings the total raised to $110 million, according to the company.

Founder and CEO David Friend says the company needs the funds to keep up with the rapid growth. “We’ve got about 15,000 customers today, hundreds of petabytes of storage, 2500 channel partners, 250 technology partners — so we’ve been busy,” he said.

He says that revenue continues to grow in spite of the impact of COVID-19 on other parts of the economy. “Revenue grew 5x last year. It’ll probably grow 3.5x this year. We haven’t seen any real slowdown from the Coronavirus. Quarter over quarter growth will be in excess of 40% — this quarter over Q1 — so it’s just continuing on a torrid pace,” he said.

He said the money will be used mostly to continue expanding the company’s infrastructure: the more data Wasabi stores, the more data centers it needs, and that takes money. He is going the debt route because the business is backed by a tangible asset, the infrastructure used to store all the data in the Wasabi system, and debt financing turns out to be a lot cheaper to pay back than equity.

“Our biggest need is to build more infrastructure, because we are constantly buying equipment. We have to pay for it even before it fills up with customer data, so we’re raising another debt round now,” Friend said. He added, “Part of what we’re doing is just strengthening our balance sheet to give us access to more inexpensive debt to finance the building of the infrastructure.”

The challenge for a company like Wasabi, which is looking to capture a large chunk of the growing cloud storage market, is the infrastructure piece. It needs to keep building more to meet increasing demand, while keeping costs down, which remains its primary value proposition with customers.

The money will also help the company expand into new markets, as many countries have data sovereignty laws that require data to be stored in-country. Building out that local infrastructure is expensive, and that’s the thinking behind this round.

The company launched in 2015. It previously raised $68 million in 2018.

Rallyhood exposed a decade of users’ private data

Rallyhood says it’s “private and secure.” But for some time, it wasn’t.

The social network designed to help groups communicate and coordinate left one of its cloud storage buckets containing user data open and exposed. The bucket, hosted on Amazon Web Services (AWS), was not protected with a password, allowing anyone who knew the easily guessable web address to access a decade’s worth of user files.
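
To illustrate the kind of misconfiguration described above, here is a minimal, hedged sketch of how an administrator might test whether one of their own buckets can be listed without any credentials at all. The bucket name and region are placeholders invented for the example; this is not Rallyhood’s setup or code.

```scala
import software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.{ListObjectsV2Request, S3Exception}

// Hypothetical check: can this bucket be listed anonymously, i.e. with no password or keys?
// "example-bucket-name" and the region are placeholders, not details from the article.
object PublicBucketCheck {
  def main(args: Array[String]): Unit = {
    val s3 = S3Client.builder()
      .region(Region.US_EAST_1)
      .credentialsProvider(AnonymousCredentialsProvider.create()) // no credentials at all
      .build()

    val request = ListObjectsV2Request.builder()
      .bucket("example-bucket-name")
      .maxKeys(1)
      .build()

    try {
      val response = s3.listObjectsV2(request)
      println(s"Bucket is publicly listable (found ${response.keyCount()} keys); lock it down.")
    } catch {
      case e: S3Exception =>
        println(s"Anonymous listing refused (HTTP ${e.statusCode()}), as it should be.")
    } finally {
      s3.close()
    }
  }
}
```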

Rallyhood boasts users from Girl Scout and Boy Scout troops, as well as Komen, Habitat for Humanity, and YMCA chapters. The company also hosts thousands of smaller groups, like local bands, sports teams, art clubs, and organizing committees. Many flocked to the site after Rallyhood offered to help migrate users from Yahoo Groups, once Verizon (which also owns TechCrunch) said it would shut down the discussion forum site last year.

The bucket contained group data dating as far back as 2011 and up to and including last month. In total, the bucket contained 4.1 terabytes of uploaded files, representing millions of users’ files.

Some of the files we reviewed contained sensitive data, like shared password lists and contracts, as well as permission slips and agreements. The documents also included non-disclosure agreements and other files that were not intended to be public.

Where we could find contact details for users whose information was exposed, TechCrunch reached out to verify the authenticity of the data.

A security researcher who goes by the handle Timeless found the exposed bucket and informed TechCrunch, so that the bucket and its files could be secured.

When reached, Rallyhood chief technology officer Chris Alderson initially claimed that the bucket was for “testing” and that all user data was stored “in a highly secured bucket,” but later admitted that during a migration project, “there was a brief period when permissions were mistakenly left open.”

It’s not known if Rallyhood plans to warn its users and customers of the security lapse. At the time of writing, Rallyhood has made no statement about the incident on its website or any of its social media profiles.

How Spotify ran the largest Google Dataflow job ever for Wrapped 2019

In early December, Spotify launched its annual personalized Wrapped playlist with its users’ most-streamed sounds of 2019. That has become a bit of a tradition and isn’t necessarily anything new, but for 2019, it also gave users a look back at how they used Spotify over the last decade. Because this was quite a large job, Spotify gave us a bit of a look under the covers of how it generated these lists for its ever-growing number of free and paid subscribers.

It’s no secret that Spotify is a big Google Cloud Platform user. Back in 2016, the music streaming service publicly said that it was going to move to Google Cloud, after all, and in 2018, it disclosed that it would spend at least $450 million on its Google Cloud infrastructure in the following three years.

It was also back in 2018, for that year’s Wrapped, that Spotify ran the largest Google Cloud Dataflow job ever run on the platform, a service the company started experimenting with a few years earlier. “Back in 2015, we built and open-sourced a big data processing Scala API for Apache Beam and Google Cloud Dataflow called Scio,” Spotify’s VP of Engineering Tyson Singer told me. “We chose Dataflow over Dataproc because it scales with less operational overhead and Dataflow fit with our expected needs for streaming processing. Now we have a great open-source toolset designed and optimized for Dataflow, which in addition to being used by most internal teams, is also used outside of Spotify.”
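
For readers who have not seen Scio, the sketch below shows roughly what a small Scio pipeline looks like. It is a hypothetical example, not Spotify’s Wrapped code: the input path, the “userId,trackId” line format and the output location are all assumptions made purely for illustration.

```scala
import com.spotify.scio._

// Hypothetical Scio pipeline: count how often each track appears in a CSV of
// "userId,trackId" lines. Illustrative only; not Spotify's actual Wrapped job.
object StreamCounts {
  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, args) = ContextAndArgs(cmdlineArgs)

    sc.textFile(args("input"))                       // e.g. a path on GCS (assumed layout)
      .map(_.split(",", -1))
      .collect { case Array(_, trackId) => trackId } // keep only the track column
      .countByValue                                  // (trackId, count) pairs
      .map { case (trackId, count) => s"$trackId\t$count" }
      .saveAsTextFile(args("output"))

    sc.run()                                         // submit to the configured runner, e.g. Dataflow
  }
}
```

The same pipeline can run locally or on Dataflow depending on the runner passed on the command line, which is part of the operational simplicity Singer describes.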

For Wrapped 2019, which includes the annual and decadal lists, Spotify ran a job that was five times larger than in 2018 — but it did so at three-quarters of the cost. Singer attributes this to his team’s familiarity with the platform. “With this type of global scale, complexity is a natural consequence. By working closely with Google Cloud’s engineering teams and specialists and drawing learnings from previous years, we were able to run one of the most sophisticated Dataflow jobs ever written.”

Still, even with this expertise, the team couldn’t just iterate on the full data set as it figured out how to best analyze the data and use it to tell the most interesting stories to its users. “Our jobs to process this would be large and complex; we needed to decouple the complexity and processing in order to not overwhelm Google Cloud Dataflow,” Singer said. “This meant that we had to get more creative when it came to going from idea, to data analysis, to producing unique stories per user, and we would have to scale this in time and at or below cost. If we weren’t careful, we risked being wasteful with resources and slowing down downstream teams.”

To handle this workload, Spotify not only split its internal teams into three groups (data processing, client-facing and design, and backend systems), but also split the data processing jobs into smaller pieces. That marked a very different approach for the team. “Last year Spotify had one huge job that used a specific feature within Dataflow called ‘Shuffle.’ The idea here was that having a lot of data, we needed to sort through it, in order to understand who did what. While this is quite powerful, it can be costly if you have large amounts of data.”
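
As a rough illustration of why that single-job approach is shuffle-heavy, consider what grouping all activity by user looks like in a Beam/Scio pipeline. The record type and field names below are invented for the sketch; the point is simply that any per-user grouping forces the runner to shuffle the full data set across workers.

```scala
import com.spotify.scio.values.SCollection

// Invented record type for illustration; not Spotify's schema.
final case class StreamRecord(userId: String, trackId: String, msPlayed: Long)

// Total listening time per user. The keyBy/reduceByKey step is what triggers
// Dataflow's shuffle: records with the same userId must be brought together,
// which gets expensive as the data set grows.
def listeningTimePerUser(streams: SCollection[StreamRecord]): SCollection[(String, Long)] =
  streams
    .keyBy(_.userId)
    .mapValues(_.msPlayed)
    .reduceByKey(_ + _)
```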

This year, the company’s engineers minimized the use of Shuffle by using Google Cloud’s Bigtable as an intermediate storage layer. “Bigtable was used as a remediation tool between Dataflow jobs in order for them to process and store more data in a parallel way, rather than the need to always regroup the data,” said Singer. “By breaking down our Dataflow jobs into smaller components — and reusing core functionality — we were able to speed up our jobs and make them more resilient.”

Singer attributes at least a part of the cost savings to this technique of using Bigtable, but he also noted that the team decomposed the problem into data collection, aggregation and data transformation jobs, which it then split into multiple separate jobs. “This way, we were not only able to process more data in parallel, but be more selective about which jobs to rerun, keeping our costs down.”
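
A hedged sketch of that decomposition pattern: three small, independently runnable stages (collection, aggregation, transformation) connected through intermediate storage so that any single stage can be rerun on its own. Spotify used Bigtable as the intermediate layer between jobs; the sketch below writes plain text files instead, simply to stay self-contained, and the paths, stage names and “userId,...” line format are assumptions for illustration.

```scala
import com.spotify.scio._

// Hypothetical three-stage breakdown of a big batch job. Each stage is a separate
// pipeline run that reads the previous stage's output from intermediate storage,
// so stages can be rerun selectively instead of recomputing everything.
object WrappedStages {
  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, args) = ContextAndArgs(cmdlineArgs)

    args("stage") match {
      case "collect" =>                                // stage 1: gather raw events
        sc.textFile(args("rawInput"))
          .filter(_.nonEmpty)
          .saveAsTextFile(args("collected"))

      case "aggregate" =>                              // stage 2: per-user aggregation
        sc.textFile(args("collected"))
          .map(line => (line.split(",", -1)(0), 1L))   // assumed "userId,..." layout
          .reduceByKey(_ + _)
          .map { case (user, n) => s"$user,$n" }
          .saveAsTextFile(args("aggregated"))

      case "transform" =>                              // stage 3: turn aggregates into stories
        sc.textFile(args("aggregated"))
          .map(row => s"story:$row")                   // placeholder for story generation
          .saveAsTextFile(args("stories"))

      case other =>
        sys.error(s"Unknown stage: $other")
    }

    sc.run()
  }
}
```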

Many of the techniques the engineers on Singer’s teams developed are currently in use across Spotify. “The great thing about how Wrapped works is that we are able to build out more tools to understand a user, while building a great product for them,” he said. “Our specialized techniques and expertise of Scio, Dataflow and big data processing, in general, is widely used to power Spotify’s portfolio of products.”

What Nutanix got right (and wrong) in its IPO roadshow

Back in 2016, Nutanix decided to take the big step of going public. Part of that process was creating a pitch deck and presenting it during its roadshow, a coming-out party when a company goes on tour prior to its IPO and pitches itself to investors of all stripes.

It’s a huge moment in the life of any company, and one we understood better after talking to CEO Dheeraj Pandey and CFO Duston Williams. They spoke about how every detail helped define their company and demonstrate its long-term investment value to investors who might not have been entirely familiar with the startup or its technology.

Pandey and Williams reported going through more than 100 versions of the deck before they finished the one they took on the road. Pandey said they had a data room checking every fact, every number — which they then checked yet again.

In a separate Extra Crunch post, we looked at the process of building that deck. Today, we’re looking more closely at the content of the deck itself, especially the numbers Nutanix presented to the world. We want to see what investors did more than three years ago and what’s happened since — did the company live up to its promises?

Plan of attack

An adult sexting site exposed thousands of models’ passports and driver’s licenses

A popular sexting website has exposed thousands of photo IDs belonging to models and sex workers who earn commissions from the site.

SextPanther, an Arizona-based adult site, stored over 11,000 identity documents, including passports, driver’s licenses, and Social Security numbers, in an Amazon Web Services (AWS) storage bucket that was not protected with a password. The company says on its website that it uses the documents to verify the ages of the models with whom users communicate.

Most of the exposed identity documents contain personal information, such as names, home addresses, dates of birth, biometric details, and photos.

Although most of the data came from models in the U.S., some of the documents were supplied by workers in Canada, India, and the United Kingdom.

The site allows models and sex workers to earn money by exchanging text messages, photos, and videos with paying users, including explicit and nude content. The exposed storage bucket also contained over a hundred thousand photos and videos sent and received by the workers.

It was not immediately clear who owned the storage bucket. TechCrunch asked U.K.-based penetration testing company Fidus Information Security, which has experience in discovering and identifying exposed data, to help.

Researchers at Fidus quickly found evidence suggesting the exposed data could belong to SextPanther.

An hour after we alerted the site’s owner, Alexander Guizzetti, to the exposed data, the storage bucket was pulled offline.

“We have passed this on to our security and legal teams to investigate further. We take accusations like this very seriously,” Guizzetti said in an email. He did not explicitly confirm that the bucket belonged to his company.

Using information from identity documents matched against public records, we contacted several models whose information was exposed by the security lapse.

“I’m sure I sent it to them,” said one model, referring to her driver’s license, which was exposed. (We agreed to withhold her name given the sensitivity of the data.) We passed along a photo of her license as it was found in the exposed bucket. She confirmed it was her license, but said that the information on it is no longer current.

“I truly feel awful for others whom have signed up with their legit information,” she said.

The security lapse comes a week after researchers found a similar cache of highly sensitive personal information of sex workers on the adult webcam streaming site PussyCash.

More than 850,000 documents were insecurely stored in another unprotected storage bucket.

Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849.

Brand power vs. product power

Most tech companies — particularly B2B companies — either don’t understand the power of a brand, or do a really poor job of creating one.

An informal survey of a dozen of my young CEO friends showed that, given the choice, 10 out of 12 — 83% — would rather spend an extra dollar on product development than on brand building. It is dangerous (or at least foolish) to assume that the ROI on product development is greater than the ROI on brand building.

As a serial entrepreneur and CEO, I have had to make this choice many times. In 2006, I co-founded PC backup company Carbonite. I left the company five years ago after taking it public and I no longer have any financial interest in it, which is why I can write about it now — it was just sold for $1.4 billion to OpenText. There were many other backup products on the market at that time and many more appeared over the first five years of the company’s life. I would argue that Carbonite was slicker than most of the others, but essentially every backup product accomplishes the same result.

Unlike Carbonite’s competitors, we focused on our brand. That meant raising more money than we would have if we were just investing in R&D. But, after five years of investing in our brand, we had eleven times the brand recognition of any other consumer backup company and we dominated the market.

Here’s why: a study by Kettlefire Creative showed that 59% of people prefer to buy brands that they have heard of. Since none of our competitors had widely recognized brands, we got most of that 59%. Of the remaining 41%, we fought it out on other criteria and won most of that as well. Put yourself in the shoes of a potential customer looking to back up their PC. What do you worry about? Well, before we even launched the company, we asked PC owners to choose the five most important attributes of their ideal backup company from a list of ten possible attributes, and we found the following:

1. Trustworthy: you won’t look at my files or allow anyone to see them (1127 votes)

2. Peace of mind: when I go to retrieve my backup, it will always be there (811 votes)

3. Reliable: it backs up everything and doesn’t stop (696 votes)

4. Helpful: if I lose my computer, I want to talk to a human who can help me (446 votes)

5. Easy: it should be simple and require little attention (444 votes)

The attributes that didn’t make the top five:

6. Fast: backups happen quickly

Why is Dropbox reinventing itself?

According to Dropbox CEO Drew Houston, 80% of the product’s users rely on it, at least partially, for work.

It makes sense, then, that the company is refocusing to try and cement its spot in the workplace; to shed its image as “just” a file storage company (in a time when just about every big company has its own cloud storage offering) and evolve into something more immutably core to daily operations.

Earlier this week, Dropbox announced that the “new Dropbox” would be rolling out to all users. It takes the simple, shared folders that Dropbox is known for and turns them into what the company calls “Spaces” — little mini collaboration hubs for your team, complete with comment streams, AI for highlighting files you might need mid-meeting, and integrations into things like Slack, Trello and G Suite. With an overhauled interface that brings much of Dropbox’s functionality out of the OS and into its own dedicated app, it’s by far the biggest user-facing change the product has seen since launching 12 years ago.

Shortly after the announcement, I sat down with Dropbox VP of Product Adam Nash and CTO Quentin Clark. We chatted about why the company is changing things up, why they’re building this on top of the existing Dropbox product, and the things they know they just can’t change.

You can find these interviews below, edited for brevity and clarity.

Greg Kumparak: Can you explain the new focus a bit?

Adam Nash: Sure! I think you know this already, but I run products and growth, so I’m gonna have a bit of a product bias to this whole thing. But Dropbox… one of its differentiating characteristics is really that when we built this utility, this “magic folder”, it kind of went everywhere.