Why is Dropbox reinventing itself?

According to Dropbox CEO Drew Houston, 80% of the product’s users rely on it, at least partially, for work.

It makes sense, then, that the company is refocusing to try and cement its spot in the workplace; to shed its image as “just” a file storage company (in a time when just about every big company has its own cloud storage offering) and evolve into something more immutably core to daily operations.

Earlier this week, Dropbox announced that the “new Dropbox” would be rolling out to all users. It takes the simple, shared folders that Dropbox is known for and turns them into what the company calls “Spaces” — little mini collaboration hubs for your team, complete with comment streams, AI for highlighting files you might need mid-meeting, and integrations into things like Slack, Trello and G Suite. With an overhauled interface that brings much of Dropbox’s functionality out of the OS and into its own dedicated app, it’s by far the biggest user-facing change the product has seen since launching 12 years ago.

Shortly after the announcement, I sat down with Dropbox VP of Product Adam Nash and CTO Quentin Clark. We chatted about why the company is changing things up, why they’re building this on top of the existing Dropbox product, and the things they know they just can’t change.

You can find these interviews below, edited for brevity and clarity.

Greg Kumparak: Can you explain the new focus a bit?

Adam Nash: Sure! I think you know this already, but I run products and growth, so I’m gonna have a bit of a product bias to this whole thing. But Dropbox… one of its differentiating characteristics is really that when we built this utility, this “magic folder”, it kind of went everywhere.

Enterprise software is hot — who would have thought?

Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.

It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing application architectures and tools for at-scale development and management for enterprises to make the same transformation.

“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”

The shift to scale out computing and the rise of the container ecosystem, driven largely by startups, is disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.

In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.

We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.

Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.

“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”

Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.

The secrets to speed, agility and customer focus

Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the process that allows for the ability to put software into production at any time, with the workflows and the pipeline to support it.

Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.
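Continuous delivery is less about any single tool than about the discipline it enforces: every change must pass automated checks and produce an artifact that could ship immediately. As a loose, hypothetical sketch of that gate (the commands and image name below are invented, and real teams would lean on CircleCI, CloudBees, Harness or Semaphore rather than a hand-rolled script), the idea looks roughly like this:

```python
# Loose sketch of a continuous-delivery gate (hypothetical commands and names).
# Real pipelines are defined in a CI/CD tool; this only illustrates the idea
# that every change is tested and packaged so it is deployable at any time.
import subprocess
import sys


def run(step, *cmd):
    """Run one pipeline step and stop the pipeline if it fails."""
    print(f"--> {step}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{step} failed; this change is not deployable")


if __name__ == "__main__":
    run("unit tests", "pytest", "-q")                                # validate the change
    run("build artifact", "docker", "build", "-t", "myapp:candidate", ".")
    # If both gates pass, the artifact can be promoted to production at any
    # time; the promotion itself may be push-button or fully automated.
    print("myapp:candidate is ready to deploy")
```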

“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.

Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.

“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”

DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).

“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”

Leveraging open-source components is also core in achieving speed for engineering. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.

“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”

You need great engineering talent, the ability for that talent to build secure and reliable systems at scale, and the trust to give them direct access to hardware as a differentiator.

Is the enterprise really ready?

The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.

“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”

How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.

One notable trend to watch: the role of cloud services through projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally developed at Amazon Web Services. Firecracker serves as a way to get the speed and density that come with containers along with the hardware isolation and security capabilities that virtualization offers. Startups such as Weaveworks have developed a platform on Firecracker. OpenStack’s Kata Containers also use Firecracker.

“Firecracker makes it easier for the enterprise to have secure code,” Ford said. It reduces the attack surface for security issues. “With its minimal footprint, the user has control. It means fewer features that are misconfigured, which is a major security vulnerability.”
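To make the Firecracker point concrete: the project exposes a small REST API over a Unix socket, and a microVM is configured and booted with a handful of calls. The sketch below is only an illustration based on Firecracker’s public API documentation, not anything described by Ford or AWS here; the socket and image paths are invented, and it assumes the third-party requests_unixsocket package.

```python
# Illustrative sketch of booting a Firecracker microVM through its API socket.
# Assumes the firecracker binary is already running with
# --api-sock /tmp/firecracker.socket, and that the kernel/rootfs paths below
# exist (all hypothetical). Field names follow Firecracker's public API docs.
import requests_unixsocket

BASE = "http+unix://%2Ftmp%2Ffirecracker.socket"
session = requests_unixsocket.Session()

# Point the microVM at a kernel image and boot arguments.
session.put(f"{BASE}/boot-source", json={
    "kernel_image_path": "/images/vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1",
}).raise_for_status()

# Attach a root filesystem as the boot drive.
session.put(f"{BASE}/drives/rootfs", json={
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
}).raise_for_status()

# Start the microVM: container-like startup speed with VM-level isolation.
session.put(f"{BASE}/actions", json={"action_type": "InstanceStart"}).raise_for_status()
```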

Enterprise startups are hot. How they succeed will determine whether they can stay distinct in the face of the ever-expanding cloud providers and the at-scale startups that inevitably launch their own services. The answer may lie in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers and the hardware isolation that comes with virtualization.

Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”

DigitalOcean launches managed MySQL and Redis database services

Half a year after launching its managed PostgreSQL service, upstart hosting and cloud services platform DigitalOcean today announced the launch of its managed MySQL and Redis database offerings, too.

Like most of the company’s latest releases, this move exemplifies DigitalOcean’s ambition to move beyond its discount hosting roots and to become a more fully-fledged cloud provider. Besides the database service and its core hosting products and infrastructure, the company now offers object and block storage and a Kubernetes engine, which itself can be used to run virtually any modern piece of cloud infrastructure. It’s unlikely to catch up with the hyperclouds anytime soon, but it’s good to have a competitor in the market.

“With the additions of MySQL and Redis, DigitalOcean now supports three of the most requested database offerings, making it easier for developers to build and run applications, rather than spending time on complex management,” said Shiven Ramji, DigitalOcean’s Senior VP of Product. “The developer is not just the DNA of DigitalOcean, but the reason for much of the company’s success. We must continue to build on this success and support developers with the services they need most on their journey towards simple app development.”

Pricing for the managed database services remains the same, no matter which engine you choose.


The new database services are now available in the company’s New York, Frankfurt and San Francisco data centers. Support for other database engines is also in the works. As the company notes, it selected MySQL and Redis because of popular demand from its developer community, and it will use the same criteria when picking additional engines. MySQL and Redis were the only database services on DigitalOcean’s roadmap for 2019, though, so I don’t expect we’ll see any additional releases before the end of the year.

MIT built a better way to deliver high-quality video streams to multiple devices at once


Depending on your connection and the size of your household, video streaming can get downright post-apocalyptic – bandwidth is the key resource, and everyone is fighting to get the most and avoid a nasty, pixelated picture. But a new way to control how bandwidth is distributed across multiple, simultaneous streams could mean peace across the land – even when a ton of devices are sharing the same connection and all streaming video at the same time.

Researchers at MIT’s Computer Science and Artificial Intelligence Lab created a system they call ‘Minerva’ that minimizes stutters due to buffering and pixelation due to downgraded streams, which the team believes could have huge potential benefits for streaming services like Netflix and Hulu that increasingly serve multiple members of a household at once. The underlying technology could be applied to larger areas, too, extending beyond the household and into neighborhoods or even whole regions to mitigate the effects of less-than-ideal streaming conditions.

Minerva works by taking into account the varying needs of the different devices streaming on a network – so it doesn’t treat a 4K Apple TV the same as an older smartphone with a display that can’t even show full HD output, for instance. It also considers the nature of the content, which is important because live-action sports require a heck of a lot more bandwidth to display in high quality compared to, say, an animated children’s TV show.

Video is then served to viewers based on its actual needs, instead of just being allocated more or less evenly across devices, and the Minerva system continually optimizes delivery speeds in accordance with their changing needs as the stream continues.
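MIT’s exact allocation algorithm isn’t described here, but the core idea, giving each stream bandwidth in proportion to how much quality it actually gains from it rather than splitting the link evenly, can be sketched roughly like this (the weights and device names below are invented for illustration):

```python
# Rough illustration (not MIT's actual Minerva algorithm) of need-aware
# bandwidth allocation: streams share a link in proportion to how much
# perceived quality they gain per megabit, instead of an even split.
from dataclasses import dataclass


@dataclass
class Stream:
    name: str
    sensitivity: float  # invented weight: how much quality improves per Mbps


def allocate(total_mbps, streams):
    """Split total_mbps across streams in proportion to their sensitivity."""
    total_weight = sum(s.sensitivity for s in streams)
    return {s.name: total_mbps * s.sensitivity / total_weight for s in streams}


if __name__ == "__main__":
    household = [
        Stream("4K TV, live sports", sensitivity=8.0),        # needs a lot of bandwidth
        Stream("laptop, 1080p drama", sensitivity=3.0),
        Stream("old phone, animated show", sensitivity=1.0),  # barely benefits from more
    ]
    for name, mbps in allocate(25.0, household).items():
        print(f"{name}: {mbps:.1f} Mbps")
```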

In real-world testing, Minerva was able to provide a quality improvement equivalent to going from 720p to 1080p as much as a third of the time, and reduced the need for rebuffering by almost 50 percent, which is a massive improvement when it comes to actually being able to stream video content seamlessly and continuously. Plus, it can do all this without requiring any fundamental changes to network infrastructure, meaning a streaming provider could roll it out without having to require any changes on the part of users.

Hundreds of exposed Amazon cloud backups found leaking sensitive data

How safe are your secrets? If you used Amazon’s Elastic Block Storage snapshots, you might want to check your settings.

New research just presented at the Def Con security conference reveals how companies, startups and governments are inadvertently leaking their own files from the cloud.

You may have heard of exposed S3 buckets — those Amazon-hosted storage servers packed with customer data but often misconfigured and inadvertently set to “public” for anyone to access. But you may not have heard about exposed EBS snapshots, which pose as much, if not greater, risk.

These elastic block storage (EBS) snapshots are the “keys to the kingdom,” said Ben Morris, a senior security analyst at cybersecurity firm Bishop Fox, in a call with TechCrunch ahead of his Def Con talk. EBS snapshots store all the data for cloud applications. “They have the secret keys to your applications and they have database access to your customers’ information,” he said.

“When you get rid of the hard disk for your computer, you know, you usually shred it or wipe it completely,” he said. “But these public EBS volumes are just left for anyone to take and start poking at.”

He said that all too often cloud admins don’t choose the correct configuration settings, leaving EBS snapshots inadvertently public and unencrypted. “That means anyone on the internet can download your hard disk and boot it up, attach it to a machine they control, and then start rifling through the disk to look for any kind of secrets,” he said.


One of Morris’ Def Con slides explaining how EBS snapshots can be exposed. (Image: Ben Morris/Bishop Fox; supplied)

Morris built a tool using Amazon’s own internal search feature to query and scrape publicly exposed EBS snapshots, then attach each one, make a copy and list the contents of the volume on his system.

“If you expose the disk for even just a couple of minutes, our system will pick it up and make a copy of it,” he said.
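Morris hasn’t released his proof-of-concept yet, but the discovery step he describes, leaning on Amazon’s own search, maps onto a documented EC2 API call: asking for snapshots that any account is permitted to restore. A minimal sketch with boto3 (not his actual tool, and the region is just an example) looks something like this:

```python
# Minimal sketch (not Morris' actual tool) of listing publicly restorable EBS
# snapshots in a single region with boto3. RestorableByUserIds=["all"] asks
# EC2 for snapshots whose permissions let any AWS account create a volume
# from them.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(RestorableByUserIds=["all"]):
    for snapshot in page["Snapshots"]:
        print(snapshot["SnapshotId"], snapshot.get("Description", ""))

# From here, an auditor (or attacker) could create a volume from a snapshot
# ID and attach it to an instance they control, which is exactly why these
# snapshots should never be public.
```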


Another slide noting the types of compromised data found using his research, often known as the “Wall of Sheep.” (Image: Ben Morris/Bishop Fox; supplied)

It took him two months to build up a database of exposed data and just a few hundred dollars spent on Amazon cloud resources. Once he validates each snapshot, he deletes the data.

Morris found dozens of snapshots exposed publicly in one region alone, he said, including application keys, critical user or administrative credentials, source code and more. The exposed data belonged to several major companies, including healthcare providers and tech companies.

He also found VPN configurations, which he said could allow him to tunnel into a corporate network. Morris said he did not use any credentials or sensitive data, as it would be unlawful.

Among the most damaging things he found, Morris said, was a snapshot from one government contractor, which he did not name, but which provided data storage services to federal agencies. “On their website, they brag about holding this data,” he said, referring to everything from collected intelligence on messages sent to and from the so-called Islamic State terror group to data on border crossings.

“Those are the kind of things I would definitely not want to be exposed to the public internet,” he said.

He estimates the figure could be as many as 1,250 exposures across all Amazon cloud regions.

Morris plans to release his proof-of-concept code in the coming weeks.

“I’m giving companies a couple of weeks to go through their own disks and make sure that they don’t have any accidental exposures,” he said.

DigitalOcean gets a new CEO and CFO

DigitalOcean, the cloud infrastructure service that made a name for itself by focusing on low-cost hosting options in its early days, today announced that it has appointed former SendGrid COO and CFO Yancey Spruill as its new CEO and former EnerNOC CFO Bill Sorenson as its new CFO. Spruill will replace Mark Templeton, who only joined the company a little more than a year ago and who had announced in May his decision to step down for personal reasons.

“DigitalOcean is a brand I’ve followed and admired for a while — the leadership team has done a tremendous job building out the products, services and, most importantly, a community that puts developer needs first,” said Spruill in today’s announcement. “We have a multi-billion dollar revenue opportunity in front of us and I’m looking forward to working closely with our strong leadership team to build upon the current strategy to drive DigitalOcean to the company’s full potential.”

Spruill does have a lot of experience, given that he was in CxO positions at SendGrid through both its IPO in 2017 and its sale to Twilio in 2019. He also previously held the CFO role at DigitalGlobe, which he also guided to an IPO.

In his announcement, Spruill notes that he expects DigitalOcean to focus on its core business, which currently has about 500,000 users (though it’s unclear how many of those are active, paying users). “My aspiration is for us to continue to provide everything you love about DO now, but to also enhance our offerings in a way that is meaningful, strategic and most helpful for you over time,” he writes.

Spruill’s history as CFO includes its fair share of IPOs and sales, but so does Sorenson’s. As CFO at EnerNOC, he guided that company to a sale to investor Enel Group. Before that, he led business intelligence firm Qlik to an IPO.

It’s not unusual for incoming CEOs and CFOs to have this kind of experience, but it does make you wonder what DigitalOcean’s future holds in store. The company isn’t as hyped as it once was and while it still offers one of the best user experiences for developers, it remains a relatively small player in the overall cloud game. That’s a growing market, but the large companies — the ones that bring in the majority of revenue — are looking to Amazon, Microsoft and Google for their cloud infrastructure. Even a small piece of the overall cloud pie can be quite lucrative, but I think DigitalOcean’s ambitions go beyond that.

Formget security lapse exposed thousands of sensitive user-uploaded documents

If you’ve used Formget in the past few years, there’s a good chance we know about it.

Formget bills itself as an online form maker and email marketing company based in Bhopal, India. The company allows its 43,000 customers to create online forms so others can submit their resumes, apply for a job, provide proof of address or employment, buy goods online, and more.

How do we know? Because the company left one of its cloud storage servers online and exposed without a password.

A security researcher who asked not to be named found Formget’s exposed Amazon S3 storage bucket and informed TechCrunch in the hope of getting the data secured. Formget pulled the bucket offline overnight after we reached out to the company on Wednesday. But the company’s founder and chief executive Neeraj Agarwal did not respond to several emails and follow-ups requesting comment.

The storage bucket was packed with hundreds of thousands of files and documents. It contained a folder for each year dating back to 2013, each with sub-folders for every month, filled with user-uploaded documents.

Some of the files we reviewed contained highly sensitive information, including:

  • Scans of several passports — including U.S. passports — and other scanned documents, like pay checks, Social Security numbers, driver’s licenses, national identity cards, and more;
  • Letters from Veterans Affairs certifying former veterans of service-connected disability compensation, including the amounts paid;
  • Details of obtained loans and mortgages, including amounts, interest rates, and histories, as well as bank account statements, gas bills, military discharge from active duty forms and other similar proof of residency;


Several proof-of-residency documents, including bank and loan statements, found on the exposed server. (Image: TechCrunch)

  • Several internal corporate documents, including cybersecurity assessment summaries for several banks and financial institutions labeled “confidential” and for “internal use only”;
  • UPS shipping labels, including names and phone numbers, and the shipping contents;


Two passports, among the many documents exposed by Formget. (Image: TechCrunch)

  • Resumes, including names, postal and email addresses, phone numbers, education backgrounds and job histories;
  • Invoices from Google, Zoom, and even from Formget itself, for billed services — in some cases including the name, address and partial credit card numbers;
  • And several airline and hotel booking receipts.

These kinds of data exposures — where private data is mistakenly made public — have become a common security problem over the years. There have been several cases of inadvertent data exposures caused by storage server permissions being changed to public. Earlier this year, millions of mortgage documents were left exposed. Scraped Facebook data was up for grabs in a similar data leak. Last year, an entire Washington state internet provider left its “keys to the kingdom” exposed because of a configuration error.

Although companies often chalk up the exposures to human error, in reality it’s not so easy to inadvertently make private cloud data public.

One senior cloud security engineer who spoke to TechCrunch on background said that the major cloud services have worked hard to keep data safe by default.

“In the case of Amazon, the default settings on an S3 bucket are private — no direct unauthorized internet access is allowed,” the engineer said. Amazon also provides free tools for scanning a user’s cloud infrastructure to look for misconfigurations.

“When there are these reports in the news of massive leaks, it’s getting harder to point the blame at the cloud provider,” the engineer said. “On any installation in the past several years, developers have to go out of their way to expose these records.”

“Once an organization leaks data in a grossly negligent way like this, they have little to blame but themselves,” the engineer said.
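As a rough illustration of the engineer’s point, checking whether a bucket has drifted from Amazon’s private-by-default posture takes only a couple of API calls; the bucket name below is hypothetical, and this is the kind of misconfiguration scan Amazon’s own free tooling performs.

```python
# Illustrative check (bucket name is hypothetical) of whether an S3 bucket
# has Block Public Access configured and whether its ACL grants access to
# "AllUsers", the misconfiguration behind most of these leaks.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-company-uploads"

try:
    block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    print("Block Public Access settings:", block)
except ClientError:
    print("No Block Public Access configuration set; worth investigating")

acl = s3.get_bucket_acl(Bucket=bucket)
for grant in acl["Grants"]:
    if grant["Grantee"].get("URI", "").endswith("AllUsers"):
        print("WARNING: bucket grants", grant["Permission"], "to everyone")
```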

UnitedMasters releases iPhone app for DIY cross-service music distribution

Alphabet-backed UnitedMasters, the music label distribution startup and record label alternative that offers artists 100 percent ownership of everything they create, launched its iPhone app today.

The iPhone app works like the service the company previously offered only via the web, giving artists the chance to upload their own tracks (from iCloud, Dropbox or directly from text messages), then distribute them to a full range of streaming music platforms, including Spotify, Apple Music, Tidal and more. In exchange for this distribution, as well as analytics on how your music is performing, UnitedMasters takes a 10% share of revenue generated by the tracks it distributes, but artists retain full ownership of the content they create.

UnitedMasters also works with brand partners, including Bose, the NBA and AT&T, to place tracks in marketing use across the brand’s properties and distributed content. Music creators are paid out via PayPal once they connect their accounts, and they can also tie-in their social accounts for connecting their overall online presence with their music.


Using the app, artists can create entire releases by uploading not only music tracks but also high-quality cover art, and by entering information like whether any producers participated in the music creation and whether the tracks contain any explicit lyrics. You can also specify an exact desired release date, and UnitedMasters will do its best to distribute across services on that day, pending content approvals.

UnitedMasters was founded by former Interscope Records president Steve Stoute, and also has funding from Andreessen Horowitz and 20th Century Fox. It’s aiming to serve a new generation of artists who are disenfranchised by the traditional label model but seeking distribution through the services where listeners actually spend their time, and using the iPhone to manage the entire process definitely fits with serving that customer base.

Capital One CTO George Brady will join us at TC Sessions: Enterprise

When you think of old, giant mainframes that sit in the basement of a giant corporation, still doing the same work they did 30 years ago, chances are you’re thinking about a financial institution. It’s the financial enterprises, though, that are often leading the charge in bringing new technologies and software development practices to their employees and customers. That’s in part because they are in a period of disruption that forces them to become more nimble. Often, this means leaving behind legacy technology and embracing the cloud.

At TC Sessions: Enterprise, which is happening on September 5 in San Francisco, Capital One executive VP in charge of its technology operations, George Brady, will talk about the company’s journey from legacy hardware and software to embracing the cloud and open source, all while working in a highly regulated industry. Indeed, Capital One was among the first companies to embrace the Facebook-led Open Compute Project, and it’s a member of the Cloud Native Computing Foundation. It’s this transformation at Capital One that Brady is leading.

At our event, Brady will join a number of other distinguished panelists to talk specifically about his company’s journey to the cloud. There, Capital One is using serverless compute, for example, to power its Credit Offers API using AWS’s Lambda service, as well as a number of other cloud technologies.
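For readers unfamiliar with the pattern, an API served from Lambda typically boils down to a small handler function invoked per request. The sketch below is purely illustrative; the event shape, field names and offer logic are invented and have nothing to do with Capital One’s actual Credit Offers implementation.

```python
# Purely illustrative Lambda handler for an API Gateway-backed endpoint.
# This is NOT Capital One's code; the fields and eligibility rule are invented.
import json


def handler(event, context):
    # With API Gateway's proxy integration, the request body arrives as a
    # JSON string in event["body"].
    request = json.loads(event.get("body") or "{}")
    credit_score = int(request.get("creditScore", 0))

    # Hypothetical rule standing in for a real decisioning service.
    offers = ["platinum-card"] if credit_score >= 700 else ["secured-card"]

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"offers": offers}),
    }
```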

Before joining Capital One as its CTO in 2014, Brady ran Fidelity Investments’ global enterprise infrastructure team from 2009 to 2014 and served as Goldman Sachs’ head of global business applications infrastructure before that.

Currently, he leads cloud application and platform productization for Capital One. Part of that portfolio is Critical Stack, a secure container orchestration platform for the enterprise. Capital One’s goal with this work is to help companies across industries become more compliant, secure and cost-effective operating in the public cloud.

Early bird tickets are still on sale for $249, grab yours today before we sell out.

Student tickets are just $75 – grab them here.

Google Drive beta test expands offline support to non-Google files in Chrome

Google Drive’s offline capabilities are getting an upgrade. Currently, you can use Google Chrome to make your Docs, Sheets, and Slides available offline. On Tuesday, the company announced the launch of a beta test that will expand offline capabilities to other content as well, including PDFs, images, Microsoft Office files, and other non-Google file formats.

The beta test, dubbed the “Google Drive Offline for Binary Content Beta,” is only open to admins of G Suite domains who have Drive File Stream enabled. Admins who had previously opted into the Alpha test for offline Docs, Sheets, and Slides will be automatically whitelisted for this new beta, Google notes.

Though the beta is limited for now, if Google is able to work out the bugs and ensure the stability of this new set of capabilities, it would naturally want to roll out support more broadly, not just across its G Suite user base but also to the consumer version of Google Drive.

Once the G Suite domain is enrolled, users will be able to enable offline from within the Drive or Docs settings, then sign into Chrome and right-click on files, then check “Make available offline.”

Offline preview will also work, once enabled. Plus, users are able to right-click and open the non-Google files in native applications, like Microsoft Office, to make them available offline.

ChromeOS isn’t currently supported in the beta, but it will be in the future, Google says.

The new beta addresses one of Google Drive’s more notable issues, especially in the workplace. A variety of work documents are not in Google file formats, and much of that work does need to be more easily available offline when employees are traveling and have limited connectivity. For now, users can sync their Google Docs, Sheets, Slides and Drawings files offline, or download files directly to their device. They can also use desktop client applications as a sync solution, if preferred.

Meanwhile, Google Drive competitor Dropbox is moving forward as an enterprise collaboration workspace, which even allows users to launch apps with shortcuts for G Suite, and more — including offering integrations with Zoom and Slack. Essentially, it’s becoming a portal to work tools instead of just a file storage platform.

G Suite has yet to kill Microsoft Office, which has 180 million monthly actives for Office 365 commercial. (Google says G Suite had 5M organizations as clients by year-end 2018). And on the consumer side, iCloud Drive is getting an upgrade in the new version of macOS, which will now support folder sharing in addition to file sharing — a much-needed feature that could convince more casual customers of Drive or Dropbox to make the switch.

Google didn’t say how long the beta would run before public availability.

Android Police was first to spot the beta test, also noting that the feature’s limitation to Chrome could still be an issue.