Internxt gets $1M to be ‘the Coinbase of decentralized storage’

Valencia-based startup Internxt has been quietly working on an ambitious plan to make decentralized cloud storage massively accessible to anyone with an Internet connection.

It’s just bagged $1M in seed funding led by Angels Capital, a European VC fund owned by Juan Roig (aka Spain’s richest grocer and second wealthiest billionaire), and Miami-based The Venture City. It had previously raised around half a million dollars via a token sale to help fund early development.

The seed funds will be put towards its next phase of growth — its month-to-month growth rate is 30% and it tells us it’s confident it can at least sustain that — including planning a big boost to headcount so it can accelerate product development.

The Spanish startup has spent most of its short life to date developing a decentralized infrastructure that it argues is both inherently more secure and more private than mainstream cloud-based apps (such as those offered by tech giants like Google).

This is because files are not only encrypted in a way that means Internxt itself cannot access your data; the information is also stored in a highly decentralized way, split into tiny shards which are then distributed across multiple storage locations, with users of the network contributing storage space (and being recompensed for providing that capacity with — you guessed it — crypto).

“It’s a distributed architecture, we’ve got servers all over the world,” explains founder and CEO Fran Villalba Segarra. “We leverage and use the space provided by professionals and individuals. So they connect to our infrastructure and start hosting data shards and we pay them for the data they host — which is also more affordable because we are not going through the traditional route of just renting out a data center and paying them for a fixed amount of space.

“It’s like the Airbnb model or Uber model. We’ve kind of democratized storage.”

Internxt clocked up three years of R&D, beginning in 2017, before launching its first cloud-based apps: Drive (file storage) a year ago, and now Photos (a Google Photos rival).

So far it’s attracting around a million active users without paying any attention to marketing, per Villalba Segarra.

Internxt Mail is the next product in its pipeline — to compete with Gmail and also ProtonMail, a pro-privacy alternative to Google’s freemium webmail client (and for more on why it believes it can offer an edge there read on).

Internxt Send (file transfer) is another product billed as coming soon.

“We’re working on a G-Suite alternative to make sure we’re at the level of Google when it comes to competing with them,” he adds.

The issue Internxt’s architecture is designed to solve is that files which are stored in just one place are vulnerable to being accessed by others. Whether that’s the storage provider itself (who may, like Google, have a privacy-hostile business model based on mining users’ data); or hackers/third parties who manage to break the provider’s security — and can thus grab and/or otherwise interfere with your files.

Security risks when networks are compromised can include ransomware attacks — which have been on an uptick in recent years — whereby attackers who have penetrated a network and gained access to stored files hold the information to ransom by walling off the rightful owner’s access (typically by applying their own layer of encryption and demanding payment to unlock the data).

The core conviction driving Internxt’s decentralization push is that files sitting whole on a server or hard drive are sitting ducks.

Its answer to that problem is an alternative file storage infrastructure that combines zero access encryption and decentralization — meaning files are sharded, distributed and mirrored across multiple storage locations, making them highly resilient against storage failures or indeed hack attacks and snooping.
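The encrypt-then-shard-then-mirror pipeline described above can be sketched in a few lines. This is an illustrative stand-in, not Internxt's actual code: the hash-chained XOR keystream below is a toy placeholder for the real client-side AES-256 step, and shard and mirror counts are tiny for readability.

```python
import hashlib
import itertools
import secrets

SHARD_SIZE = 4   # bytes per shard; real systems use far larger fragments
MIRRORS = 2      # copies of each shard, kept on distinct nodes (MIRRORS < node count)

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream by hash-chaining (toy stand-in for AES-256)."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR stream cipher; applying it twice decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def shard(data: bytes) -> list[bytes]:
    """Split ciphertext into fixed-size fragments."""
    return [data[i:i + SHARD_SIZE] for i in range(0, len(data), SHARD_SIZE)]

def distribute(shards: list[bytes], node_ids: list[str]) -> dict:
    """Mirror each shard onto MIRRORS nodes, round-robin across the network."""
    placement = {node: {} for node in node_ids}
    cycle = itertools.cycle(node_ids)
    for idx, piece in enumerate(shards):
        for _ in range(MIRRORS):
            placement[next(cycle)][idx] = piece
    return placement

key = secrets.token_bytes(32)            # client-side key; never leaves the user
plaintext = b"my private file contents"
ciphertext = xor_cipher(plaintext, key)  # encrypt BEFORE anything leaves the device
placement = distribute(shard(ciphertext), ["node-a", "node-b", "node-c"])

# Any single node holds only meaningless encrypted fragments. To read the file,
# the client gathers shards from whichever mirrors answer, then decrypts locally.
recovered = {}
for held in placement.values():
    recovered.update(held)
reassembled = b"".join(recovered[i] for i in sorted(recovered))
assert xor_cipher(reassembled, key) == plaintext
```

The key property is the ordering: because encryption happens client-side before sharding, no storage node (and no operator of the network) ever sees usable data.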

The approach ameliorates cloud service provider-based privacy concerns because Internxt itself cannot access user data.

To make money its business model is simple: tiered subscriptions, with (currently) one plan covering all its existing and planned services, based on how much data you need. (It is also freemium, with the first 10GB free.)

Internxt is by no means the first to see key user value in rethinking core Internet architecture.

Scotland’s MaidSafe has been trying to build an alternative decentralized Internet for well over a decade at this point — only starting alpha tests of its alt network (aka, the Safe Network) back in 2016, roughly a decade after development began. Its long-term mission to reinvent the Internet continues.

Another (slightly less veteran) competitor in the decentralized cloud storage space is Storj, which is targeting enterprise users. There’s also Filecoin and Sia — both also part of the newer wave of blockchain startups that sprang up after Bitcoin sparked entrepreneurial interest in cryptocurrencies and blockchain/decentralization.

How, then, is what Internxt’s doing different to these rival decentralized storage plays — all of which have been at this complex coal face for longer?

“We’re the only European based startup that’s doing this [except for MaidSafe, although it’s UK not EU based],” says Villalba Segarra, arguing that the European Union’s legal regime around data protection and privacy lends it an advantage vs U.S. competitors. “All the others, Storj, plus Sia, Filecoin… they’re all US-based companies as far as I’m aware.”

The other major differentiating factor he highlights is usability — arguing that the aforementioned competitors have been “built by developers for developers”. Internxt’s goal, by contrast, is to be the equivalent of a ‘Coinbase for decentralized storage’; that is, it wants to make a very complex technology highly accessible to non-technical Internet users.

“It’s a huge technology but in the blockchain space we see this all the time — where there’s huge potential but it’s very hard to use,” he tells TechCrunch. “That’s essentially what Coinbase is also trying to do — bringing blockchain to users, making it easier to use, easier to invest in cryptocurrency etc. So that’s what we’re trying to do at Internxt as well, bringing blockchain for cloud storage to the people. Making it easy to use with a very easy to use interface and so forth.

“It’s the only service in the distributed cloud space that’s actually usable — that’s kind of our main differentiating factor from Storj and all these other companies.”

“In terms of infrastructure it’s actually pretty similar to that of Sia or Storj,” he goes on — further likening Internxt’s ‘zero access’ encryption to Proton Drive’s architecture (aka, the file storage product from the makers of end-to-end encrypted email service ProtonMail) — which also relies on client side encryption to give users a robust technical guarantee that the service provider can’t snoop on your stuff. (So you don’t have to just trust the company not to violate your privacy.)

But while it’s also touting zero access encryption (it appears to use off-the-shelf AES-256; the company describes it as “military grade”, client-side, open source encryption that’s been audited by Spain’s S2 Grupo, a major local cybersecurity firm), Internxt takes the further step of decentralizing the encrypted bits of data too. And that means it can tout added security benefits, per Villalba Segarra.

“On top of that what we do is we fragment data and then distribute it around the world. So essentially what servers host are encrypted data shards — which is much more secure because if a hacker was ever to access one of these servers what they would find is encrypted data shards which are essentially useless. Not even we can access that data.

“So that adds a huge layer of security against hackers or third party [access] in terms of data. And then on top of that we build very nice interfaces with which the user is very used to using — pretty much similar to those of Google… and that also makes us very different from Storj and Sia.”

Storage space for Internxt users’ files is provided by network participants, who are incentivized to offer up their unused capacity to host data shards in exchange for micropayments of crypto. This means capacity could be coming from an individual user connecting to Internxt with just their laptop — or from a datacenter company with large amounts of unused storage capacity. (And Villalba Segarra notes that a number of data center companies, such as OVH, are connected to its network.)

“We don’t have any direct contracts [for storage provision]… Anyone can connect to our network — so datacenters with available storage space, if they want to make some money on that they can connect to our network. We don’t pay them as much as we would pay them if we went to them through the traditional route,” he says, likening this portion of the approach to how Airbnb has both hosts and guests (or Uber needs drivers and riders).

“We are the platform that connects both parties but we don’t host any data ourselves.”

Internxt uses a reputation system to manage storage providers — to ensure network uptime and quality of service — and also applies blockchain ‘proof of work’ challenges to node operators to make sure they’re actually storing the data they claim.

“Because of the decentralized nature of our architecture we really need to make sure that it hits a certain level of reliability,” he says. “So for that we use blockchain technology… When you’re storing data in your own data center it’s easier in terms of making sure it’s reliable but when you’re storing it in a decentralized architecture it brings a lot of benefits — such as more privacy or it’s also more affordable — but the downside is you need to make sure that for example they’re actually storing data.”
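The challenge-response audit he alludes to is often called a proof of storage (or proof of retrievability). The generic idea can be shown in a few lines; this is a sketch of the general technique, not Internxt's actual protocol:

```python
import hashlib
import secrets

def prove(shard: bytes, nonce: bytes) -> str:
    """Node answers a challenge by hashing its shard with a fresh nonce."""
    return hashlib.sha256(nonce + shard).hexdigest()

def audit(node_answer: str, shard_copy: bytes, nonce: bytes) -> bool:
    """Verifier recomputes the digest from its own copy of the shard.

    (In practice the verifier precomputes a table of expected digests for a
    batch of future nonces, so it doesn't need to keep the data itself.)
    """
    return node_answer == prove(shard_copy, nonce)

shard = b"encrypted-shard-bytes"
nonce = secrets.token_bytes(16)   # fresh per audit, so old answers can't be replayed

assert audit(prove(shard, nonce), shard, nonce)          # honest node passes
assert not audit(prove(b"tampered", nonce), shard, nonce)  # node that dropped data fails
```

Because the nonce is unpredictable, a node cannot pass the audit by caching an old answer; it must actually hold the shard bytes at challenge time. Failed audits would then feed into the reputation system described above.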

Payments to storage capacity providers are also made via blockchain tech — which Villalba Segarra says is the only way to scale and automate so many micropayments to ~10,000 node operators all over the world.

Discussing the issue of energy costs — given that ‘proof of work’ blockchain-based technologies are facing increased scrutiny over the energy consumption involved in carrying out the calculations — he suggests that Internxt’s decentralized architecture can be more energy efficient than traditional data centers because data shards are more likely to be located nearer to the requesting user — shrinking the energy required to retrieve packets vs always having to do so from a few centralized global locations.

“What we’ve seen in terms of energy consumption is that we’re actually much more energy efficient than a traditional cloud storage service. Why? Think about it, we mirror files and we store them all over the world… It’s actually impossible to access a file from Dropbox that is sent out from [a specific location]. Essentially when you access Dropbox or Google Drive and you download a file they’re going to be sending it out from their data center in Texas or wherever. So there’s a huge data transfer energy consumption there — and people don’t think about it,” he argues.

“Data center energy consumption is already 2%* of the whole world’s energy consumption if I’m not mistaken. So being able to use latency and being able to send your files from [somewhere near the user] — which is also going to be faster, which is all factored into our reputation system — so our algorithms are going to be sending you the files that are closer to you so that we save a lot of energy from that. So if you multiply that by millions of users and millions of terabytes that actually saves a lot of energy consumption and also costs for us.”
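Stripped down, the locality argument is a replica-selection rule: serve each request from the lowest-latency mirror among nodes the reputation system trusts. A minimal sketch follows; the threshold, latency figures and node names are invented for illustration:

```python
def pick_replica(mirrors: dict[str, dict]) -> str:
    """Pick the lowest-latency mirror among sufficiently reputable nodes."""
    healthy = {n: m for n, m in mirrors.items() if m["reputation"] >= 0.9}
    return min(healthy, key=lambda n: healthy[n]["latency_ms"])

# Three nodes holding the same shard, as measured from an EU-based user.
mirrors = {
    "frankfurt": {"latency_ms": 18, "reputation": 0.97},
    "virginia":  {"latency_ms": 95, "reputation": 0.99},
    "singapore": {"latency_ms": 160, "reputation": 0.95},
}

assert pick_replica(mirrors) == "frankfurt"  # nearby shard wins; less transit, less energy
```

The claim, then, is that routing every download to a nearby mirror trims both transfer time and the energy spent moving packets, relative to always fetching from one distant central region.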

What about latency from the user’s point of view? Is there a noticeable lag when they try to upload or retrieve and access files stored on Internxt vs — for example — Google Drive?

Villalba Segarra says being able to store file fragments closer to the user also helps compensate for any lag. But he also confirms there is a bit of a speed difference vs mainstream cloud storage services.

“In terms of upload and download speed we’re pretty close to Google Drive and Dropbox,” he suggests. “Again these companies have been around for over ten years and their services are very well optimized and they’ve got a traditional cloud architecture which is also relatively simpler, easier to build and they’ve got thousands of [employees] so their services are obviously much better than our service in terms of speed and all that. But we’re getting really close to them and we’re working really fast towards bringing our speed [to that level] and also as many features as possible to our architecture and to our services.”

“Essentially how we see it is we’re at the level of Proton Drive or Tresorit in terms of usability,” he adds on the latency point. “And we’re getting really close to Google Drive. But an average user shouldn’t really see much of a difference and, as I said, we’re literally working as hard as possible to make our services as useable as those of Google. But we’re ages ahead of Storj, Sia, MaidSafe and so forth — that’s for sure.”

Internxt is doing all this complex networking with a team of just 20 people currently. But with the new seed funding tucked in its back pocket the plan now is to ramp up hiring over the next few months — so that it can accelerate product development, sustain its growth and keep pushing its competitive edge.

“By the time we do a Series A we should be around 100 people at Internxt,” says Villalba Segarra. “We are already preparing our Series A. We just closed our seed round but because of how fast we’re growing we are already being reached out to by a few other lead VC funds from the US and London.

“It will be a pretty big Series A. Potentially the biggest in Spain… We plan on growing until the Series A at at least a 30% month-to-month rate which is what we’ve been growing up until now.”

He also tells TechCrunch that the intention for the Series A is to do the funding at a $50M valuation.

“We were planning on doing it a year from now because we literally just closed our [seed] round but because of how many VCs are reaching out to us we may actually do it by the end of this year,” he says, adding: “But timeframe isn’t an issue for us. What matters most is being able to reach that minimum valuation.”

*Per the IEA, data centres and data transmission networks each accounted for around 1% of global electricity use in 2019

Gatheround raises millions from Homebrew, Bloomberg and Stripe’s COO to help remote workers connect

Remote work is no longer a new topic, as much of the world has now been doing it for a year or more because of the COVID-19 pandemic.

Companies — big and small — have had to react in myriad ways. Many of the initial challenges have focused on workflow, productivity and the like. But one aspect of the whole remote work shift that is not getting as much attention is the culture angle.

A 100% remote startup that was tackling the issue way before COVID-19 was even around is now seeing a big surge in demand for its offering that aims to help companies address the “people” challenge of remote work. It started its life with the name Icebreaker to reflect the aim of “breaking the ice” with people with whom you work.

“We designed the initial version of our product as a way to connect people who’d never met, kind of virtual speed dating,” says co-founder and CEO Perry Rosenstein. “But we realized that people were using it for far more than that.” 

So over time, its offering has evolved to include a bigger goal of helping people get together beyond an initial encounter –– hence its new name: Gatheround.

“For remote companies, a big challenge or problem that is now bordering on a crisis is how to build connection, trust and empathy between people that aren’t sharing a physical space,” says co-founder and COO Lisa Conn. “There’s no five-minute conversations after meetings, no shared meals, no cafeterias — this is where connection organically builds.”

Organizations should be concerned, Gatheround maintains, that as work moves more remote it will become more transactional and people will become more isolated. They can’t ignore that humans are largely social creatures, Conn said.

The startup aims to bring people together online through real-time events such as a range of chats, videos and one-on-one and group conversations. The startup also provides templates to facilitate cultural rituals and learning & development (L&D) activities, such as all-hands meetings and workshops on diversity, equity and inclusion. 

Gatheround’s video conversations aim to be a refreshing complement to Slack conversations, which despite serving the function of communication, still don’t bring users face-to-face.

Image Credits: Gatheround

Since its inception, Gatheround has quietly built up an impressive customer base, including 28 Fortune 500s, 11 of the 15 biggest U.S. tech companies, 26 of the top 30 universities and more than 700 educational institutions. Specifically, those users include Asana, Coinbase, Fiverr, Westfield and DigitalOcean. Universities, academic centers and nonprofits, including Georgetown’s Institute of Politics and Public Service and the Chan Zuckerberg Initiative, are also customers. To date, Gatheround has had about 260,000 users hold 570,000 conversations on its SaaS-based video platform.

All its growth so far has been organic, mostly referrals and word of mouth. Now, armed with $3.5 million in seed funding that builds upon a previous $500,000 raised, Gatheround is ready to aggressively go to market and build upon the momentum it’s seeing.

Venture firms Homebrew and Bloomberg Beta co-led the company’s latest raise, which included participation from angel investors such as Stripe COO Claire Hughes Johnson, Meetup co-founder Scott Heiferman, Li Jin and Lenny Rachitsky. 

Co-founders Rosenstein, Conn and Alexander McCormmach describe themselves as “experienced community builders,” having previously worked on President Obama’s campaigns as well as at companies like Facebook, Change.org and Hustle. 

The trio emphasize that Gatheround is also very different from Zoom and video conferencing apps in that its platform gives people prompts and organized ways to get to know and learn about each other as well as the flexibility to customize events.

“We’re fundamentally a connection platform, here to help organizations connect their people via real-time events that are not just really fun, but meaningful,” Conn said.

Homebrew Partner Hunter Walk says his firm was attracted to the company’s founder-market fit.

“They’re a really interesting combination of founders with all this experience community building on the political activism side, combined with really great product, design and operational skills,” he told TechCrunch. “It was kind of unique that they didn’t come out of an enterprise product background or pure social background.”

He was also drawn to the personalized nature of Gatheround’s platform, considering that it has become clear over the past year that the software powering the future of work “needs emotional intelligence.”

“Many companies in 2020 have focused on making remote work more productive. But what people desire more than ever is a way to deeply and meaningfully connect with their colleagues,” Walk said. “Gatheround does that better than any platform out there. I’ve never seen people come together virtually like they do on Gatheround, asking questions, sharing stories and learning as a group.” 

James Cham, partner at Bloomberg Beta, agrees with Walk that the founding team’s knowledge of behavioral psychology, group dynamics and community building gives them an edge.

“More than anything, though, they care about helping the world unite and feel connected, and have spent their entire careers building organizations to make that happen,” he said in a written statement. “So it was a no-brainer to back Gatheround, and I can’t wait to see the impact they have on society.”

The 14-person team will likely expand with the new capital, which will also go toward adding more functionality and detail to the Gatheround product.

“Even before the pandemic, remote work was accelerating faster than other forms of work,” Conn said. “Now that’s intensified even more.”

Gatheround is not the only company attempting to tackle this space. Ireland-based Workvivo last year raised $16 million and earlier this year, Microsoft launched Viva, its new “employee experience platform.”

Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round with participation from previous investors. The company reports that it has now raised $219 million in equity so far, along with additional debt financing, but it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data, like financial and healthcare records, be stored in-country, so the company needs to build more capacity where it’s needed.

He says they have nailed down the process of building storage, typically inside co-location facilities, and during the pandemic they actually became more efficient as they hired a firm to put together the hardware for them onsite. They also put channel partners like managed service providers (MSPs) and value added resellers (VARs) to work by incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, which starts at $0.023 per gigabyte for the first 50 terabytes, or $23.00 a terabyte — considerably more than Wasabi’s offering.
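The per-terabyte comparison works out as follows, using the list prices quoted above (this ignores S3's egress and request fees, which would widen the gap further, and any tier discounts above 50 TB):

```python
WASABI_PER_TB = 5.99   # $ per TB per month, flat rate
S3_PER_GB = 0.023      # $ per GB per month, S3 Standard first-50-TB tier

def monthly_cost(tb: float) -> tuple[float, float]:
    """Return (wasabi, s3) monthly storage cost in dollars for `tb` terabytes."""
    return tb * WASABI_PER_TB, tb * S3_PER_GB * 1000  # 1 TB = 1000 GB here

wasabi, s3 = monthly_cost(10)
assert round(wasabi, 2) == 59.90   # 10 TB on Wasabi
assert round(s3, 2) == 230.00      # 10 TB on S3 Standard — roughly 3.8x the price
```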

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul and a round this size from an investor with the pedigree of Fidelity will give the company more credibility with large enterprise buyers without the same demands of venture capital firms.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as the typical kind of market irritant. He says that his company has run away from competitors in his part of the market and the hyperscalers are out there not paying attention because his business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes it’s possible eventually.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.

DigitalOcean says customer billing data accessed in data breach

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date, and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the apparent flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR, and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 

Solving the security challenges of public cloud

Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.

Around 56% of enterprise organizations handle more than 1,000 security alerts every day and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many organizations in the ONUG community see event volumes on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.
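A quick sanity check on that volume, assuming the one-million-events-per-second rate is sustained around the clock:

```python
EVENTS_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000 seconds

events_per_year = EVENTS_PER_SECOND * SECONDS_PER_YEAR
assert events_per_year == 31_536_000_000_000   # ~31.5 trillion events a year
```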

Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.

Why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.

Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.

Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.

The root of the problem

A few key challenges are sparking the increased number of security alerts in the public cloud:

  1. Rapid digital transformation sparked by COVID-19.
  2. An expanded network edge created by the modern, work-from-home environment.
  3. An increase in the type of security attacks.

The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.

Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.

In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.

Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.

Without a unified framework in place, the volume of incidents will spiral out of control.

Where CSNF comes into play

CSNF will prove beneficial for cloud providers and IT consumers alike. Security platforms often require integration timelines to wrap in all data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications. These timelines can be expensive and inefficient.

But with a standardized framework like CSNF, the integration process for past notifications is pared down and contextual processes are improved for the entire ecosystem, efficiently reducing spend and saving SecOps and DevSecOps teams time to focus on more strategic tasks like security posture assessment, developing new products and improving existing solutions.

Here’s a closer look at the benefits a standardized approach can create for all parties:

  • End users: CSNF can streamline operations for enterprise cloud consumers, like IT teams, and allows improved visibility and greater control over the security posture of their data. This enhanced sense of protection from improved cloud governance benefits all individuals.
  • Cloud providers: CSNF can eliminate the barrier to entry currently prohibiting an enterprise consumer from using additional services from a specific cloud provider by freeing up added security resources. Also, improved end-user cloud governance encourages more cloud consumption from businesses, increasing provider revenue and providing confidence that their data will be secure.
  • Cloud vendors: Cloud vendors that provide SaaS solutions are spending more on engineering resources to deal with increased security notifications. But with a standardized framework in place, these additional resources would no longer be necessary. Instead of spending money on such specific needs along with labor, vendors could refocus core staff on improving operations and products such as user dashboards and applications.

Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.

What’s next?

CSNF is in the building phase. Cloud consumers have banded together to compile requirements, and they continue to provide guidance as a prototype takes shape. The cloud providers are now building CSNF’s key component, the Decorator, an open-source multicloud security reporting translation service.

The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.

Grocery startup Mercato spilled years of data, but didn’t tell its customers

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that the exposure happened in January, after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names and email addresses, home addresses, and order details. Each record also included the IP address of the device used to place the order.

The data set also included the personal data and order details of company executives.

It’s not clear how the security lapse happened (storage buckets on Amazon’s cloud are private by default) or when the company learned of the exposure.

Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published in states where they are required by law, such as California. The data set contained records of more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification law.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.


Know something, say something. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

Risk startup LogicGate confirms data breach

Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.

An email sent by LogicGate to customers earlier this month said that on February 23, an unauthorized third party obtained credentials to the Amazon Web Services-hosted cloud storage servers that hold customer backup files for its flagship platform, Risk Cloud. Risk Cloud helps companies identify and manage their risk and compliance with data protection and security standards; LogicGate says it can also help find security vulnerabilities before malicious hackers exploit them.

The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.

“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.

LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.

But LogicGate has not made any public statement about the breach. It’s also not clear if the company contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.

We sent LogicGate a list of questions, including how many customers were affected and whether the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment, citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.

Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.

Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.

In December, LogicGate secured $8.75 million in fresh funding, totaling more than $40 million since it launched in 2015.


Aqua Security raises $135M at a $1B valuation for its cloud native security service

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.

What set Aqua apart, Davidoff argues, is that it managed to 1) become the best solution for container security and 2) realize that to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made the switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by Check Point and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for the CNCF’s Harbor registry and Artifact Hub, for example, and is also integrated into GitLab.

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”


Project management service ZenHub raises $4.7M

ZenHub, the GitHub-centric project management service for development teams, today announced that it has raised a $4.7 million seed funding round from Canada’s BDC Capital and Ripple Ventures. This marks the first fundraise for the Vancouver, Canada-based startup after the team bootstrapped the service, which first launched back in 2014. Additional angel investors in this round include Adam Gross (former CEO of Heroku), Jiaona Zhang (VP Product at Webflow) and Oji Udezue (VP Product at Calendly).

In addition to announcing the funding round, the team today launched its newest automation feature, which makes it easier for teams to plan their development sprints. Sprint planning is core to the Agile development process but often consumes time and energy that teams are better off spending on actual development.

“This is a really exciting kind of pivot point for us as a business and gives us a lot of ammunition, I think, to really go after our vision and mission a little bit more aggressively than we have even in the past,” ZenHub co-founder and CEO Aaron Upright told me. The team, he explained, used the beginning of the pandemic to spend a lot of time with customers to better understand how they were reacting to what was happening. In the process, customers repeatedly noted that development resources were getting more expensive and that teams were being stretched ever thinner and put under a lot of pressure.

ZenHub’s answer to this was to look into how it could automate more of the processes that constitute the most complex parts of Agile. Earlier this year, the company launched its first efforts in this area, with new tools for improving developer handoffs in GitHub and now, with the help of this new funding, it is putting the next pieces in place by helping teams automate their sprint planning.

“We thought about automation as an answer to [the problems development teams were facing] and that we could take an approach to automation and to help guide teams through some of the most complex and time-consuming parts of the Agile process,” Upright said. “We raised money so that we can really accelerate toward that vision. As a self-funded company, we could have gone down that path, albeit a little bit slower. But the opportunity that we saw in the market — really brought about by the pandemic, and teams working more remotely and this pressure to produce — we wanted to provide a solution much, much faster.”

The sprint planning feature itself is actually pretty straightforward: it lets project managers allocate a certain number of story points (a core Agile metric for estimating the complexity of a given action item) to each sprint. ZenHub’s tool then uses that budget to automatically generate a list of the most highly prioritized items for the next sprint. Optionally, teams can also roll items they didn’t finish during a given sprint into the next one.
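ZenHub hasn’t published the algorithm behind this, but the core idea described above (fill a sprint with the highest-priority backlog items whose story points fit within a capacity budget, and roll the rest forward) can be sketched as a simple greedy pass. Every name below is illustrative, not ZenHub’s actual API:

```python
def plan_sprint(backlog, capacity):
    """Greedy sprint fill: walk the backlog in priority order
    (lower rank = higher priority) and take each item whose story
    points still fit within the remaining capacity."""
    sprint, remaining = [], capacity
    for item in sorted(backlog, key=lambda i: i["priority"]):
        if item["points"] <= remaining:
            sprint.append(item["title"])
            remaining -= item["points"]
    return sprint

backlog = [
    {"title": "Fix login bug",    "points": 3, "priority": 1},
    {"title": "Refactor billing", "points": 8, "priority": 2},
    {"title": "Add dark mode",    "points": 5, "priority": 3},
]
# With a 10-point budget, the 8-point item is skipped and the
# 5-point item fits instead.
print(plan_sprint(backlog, capacity=10))  # → ['Fix login bug', 'Add dark mode']
```

Anything skipped here is what the rollover option would carry into the next sprint.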

With that, ZenHub Sprints can automate a lot of the standard sprint meetings and lets teams focus on thinking about the overall process. Of course, teams can always overrule the automated systems.

“There’s nothing more that developers hate than sitting around the table for eight hours, planning sprints, when really they all just want to be working on stuff,” Upright said.

With this new feature, sprints become a core part of the ZenHub experience. Until now, project managers have worked around this by assigning milestones in GitHub; a dedicated tool and the new automation features should make this quite a bit easier.

Coming soon, ZenHub will also automate parts of the software estimation process, with a new tool that helps teams more easily allocate story points to routine action items so that their discussions can focus on the more contentious ones.

Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of that development involves reinventing the wheel to make applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools they need to build event-driven microservices. Among other things, Dapr provides building blocks for service-to-service communication, state management, pub/sub and secrets management.
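In practice, an application reaches these building blocks through a local sidecar API rather than a language-specific SDK. As an illustrative sketch (not Microsoft’s sample code), the state-management block is invoked by POSTing key/value pairs to the sidecar; the port and the `statestore` component name below are Dapr’s common defaults, assumed here for the example:

```python
import json

# Dapr sidecar defaults (assumptions for this sketch): HTTP API on
# port 3500, and a state-store component named "statestore".
DAPR_PORT = 3500
STATE_STORE = "statestore"

def save_state_request(key, value):
    """Build the URL and JSON body for a call to Dapr's
    state-management building block: POST /v1.0/state/<store>."""
    url = f"http://localhost:{DAPR_PORT}/v1.0/state/{STATE_STORE}"
    body = json.dumps([{"key": key, "value": value}])
    return url, body

# The app would send this with any HTTP client; the sidecar then
# persists the value in whichever backing store (Redis, Cosmos DB,
# etc.) is configured, so application code stays provider-agnostic.
url, body = save_state_request("order-42", {"status": "shipped"})
print(url)   # http://localhost:3500/v1.0/state/statestore
print(body)
```

Because the contract is just HTTP (or gRPC) to the sidecar, the same call works unchanged from any language, which is the portability point Russinovich makes below.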

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.