Airwallex raises $200M at a $4B valuation to double down on business banking

Business, now more than ever before, is going digital, and today a startup that’s building a vertically integrated solution to meet business banking needs is announcing a big round of funding to tap into the opportunity. Airwallex — which provides business banking services both directly to businesses and via a set of APIs that power other companies’ fintech products — has raised $200 million, a Series E round of funding that values the Australian startup at $4 billion.

Lone Pine Capital is leading the round, with new backers G Squared and Vetamer Capital Management, and previous backers 1835i Ventures (formerly ANZi), DST Global, Salesforce Ventures and Sequoia Capital China, also participating.

The funding brings the total raised by Airwallex — which has head offices in Hong Kong and Melbourne, Australia — to date to $700 million, including a $100 million injection that closed out its Series D just six months ago.

Airwallex will use the funding to continue investing in its product and technology, to continue its geographical expansion and to focus on some larger business targets. The company has started to make some headway into Europe and the UK, and that will be one big focus, along with the U.S.

The quick succession of funding, and that rising valuation, underscore Airwallex’s traction to date around what CEO and co-founder Jack Zhang describes as a vertically integrated strategy.

That involves two parts. First, Airwallex has built all the infrastructure for the business banking services that it provides directly to businesses with a focus on small and medium enterprise customers. Second, it has packaged up that infrastructure into a set of APIs that a variety of other companies use to provide financial services directly to their customers without needing to build those services themselves — the so-called “embedded finance” approach.

“We want to own the whole ecosystem,” Zhang said to me. “We want to be like the Apple of business finance.”

That seems to be working out so far for Airwallex. Revenues were up almost 150% for the first half of 2021 compared to a year before, with the company processing more than US$20 billion for a global client portfolio that has quadrupled in size. In addition to tens of thousands of SMEs, it also, via APIs, powers financial services for other companies like GOAT, Papaya Global and Stake.

Airwallex got its start like many of the strongest startups do: it was built to solve a problem that the founders encountered themselves. In the case of Airwallex, Zhang tells me he had actually been working on a previous start-up idea. He wanted to build the “Blue Bottle Coffee” of Asia out of Hong Kong, and it involved buying and importing a lot of different materials, packaging and of course coffee from all around the world.

“We found that making payments as a small business was slow and expensive,” he said, since it involved banks in different countries and different banking systems, manual efforts to transfer money between them and many days to clear the payments. “But that was also my background — payments and trading — and so I decided that it was a much more fascinating problem for me to work on and resolve.”

Eventually one of his co-founders in the coffee effort came along, with the four co-founders of Airwallex ultimately being Zhang, Xijing Dai, Lucy Liu and Max Li.

It was 2014, and Airwallex got attention from VCs early on, in part for being in the right place at the right time. A wave of startups building financial services for SMBs was definitely gaining ground in North America and Europe, filling a long-neglected hole in the technology universe, but there was almost nothing of the sort in the Asia-Pacific region, and in those earlier days solutions were highly regionalized.

From there it was a no-brainer that starting with cross-border payments, the first thing Airwallex tackled, would soon grow into a wider suite of banking services involving payments and other cross-border banking services.

“In the last six years, we’ve built more than 50 bank integrations and now offer payments in 95 countries through a partner network,” he added, with 43 of those offering real-time transactions. From there, it moved on to bank accounts and “other primitive stuff” like card issuance and more, he said, eventually building an end-to-end payments stack.

Airwallex has tens of thousands of customers using its financial services directly, and they make up about 40% of its revenues today. The rest is the interesting turn the company decided to take to expand its business.

Airwallex had built all of its technology from the ground up itself, and it found that — given the wave of new companies looking for more ways to engage customers and become their one-stop shop — there was an opportunity to package that tech up in a set of APIs and sell it on to a different set of customers: those who also provide services for small businesses. That part of the business now accounts for 60% of Airwallex’s revenues, Zhang said, and is growing faster in terms of revenue. (The SMB business is growing faster in terms of customers, he said.)

A lot of embedded finance startups that base their business around building tech to power other businesses tend to stay at arm’s length from offering financial services directly to consumers. The explanation I have heard is that they do not wish to compete against their customers. Zhang said that Airwallex takes a different approach, being selective about the customers it partners with, so that the financial services it offers are never in direct competition with those partners. The GOAT marketplace for sneakers and Papaya Global’s HR platform are classic examples of this.

However, as Airwallex continues to grow, you can’t help but wonder whether one of those partners might like to gobble up all of Airwallex and take on some of that service provision role itself. In that context, it’s very interesting to see Salesforce Ventures returning to invest even more in the company in this round, given how widely the company has expanded from its early roots in software for salespeople into a massive platform providing a huge range of cloud services to help people run their businesses.

For now, it’s been the combination of its unique roots in Asia Pacific, plus its vertical approach of building its tech from the ground up, plus its retail acumen that has impressed investors and may well see Airwallex stay independent and grow for some time to come.

“Airwallex has a clear competitive advantage in the digital payments market,” said David Craver, MD at Lone Pine Capital, in a statement. “Its unique Asia-Pacific roots, coupled with its innovative infrastructure, products and services, speak volumes about the business’ global growth opportunities and its impressive expansion in the competitive payment providers space. We are excited to invest in Airwallex at this dynamic time, and look forward to helping drive the company’s expansion and success worldwide.”

Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. They are not live yet, but will arrive for users later this year. Though the goals of these features, the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM), are universally accepted to be good ones, there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which are closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public.

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are about to view has been detected as explicit, and warn them that viewing it will also alert their parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.
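Conceptually, the CSAM detection described in the first item reduces to comparing a derived image identifier against a set of known identifiers, on the device. Apple’s NeuralHash is a proprietary perceptual hash whose internals are not public, so the following is only a rough, hypothetical sketch; the hash function, set contents and names here are all stand-ins:

```python
import hashlib

# Hypothetical stand-in database; the real system uses identifiers
# derived from NCMEC's and other organizations' databases.
KNOWN_HASHES = {"deadbeef", "cafef00d"}  # placeholder identifiers

def image_identifier(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. A real perceptual hash (unlike
    # this cryptographic one) tolerates cropping and minor edits.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_content(image_bytes: bytes) -> bool:
    # On-device check: the only question asked is whether the
    # identifier appears in the known set, not what the image contains.
    return image_identifier(image_bytes) in KNOWN_HASHES
```

The privacy claim in the article is that this comparison happens on the device, and that a match alone reveals nothing to Apple until an account crosses the match threshold discussed later in the interview.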

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that might trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are, or could be, about to view explicit images. The feature is opt-in for the parent, and it is transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s head of privacy so far. It seems clear from Apple’s willingness to provide access, and from its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come), that it feels it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services, and that, as you point out, isn’t something that we’ve ever done: looking through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communications safety in Messages features and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. CSAM detection means that there’s already known CSAM that has been through the reporting process and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on, when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why do it? Because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list; we don’t want to look at all of the user’s data, but we can give you a list of content that we’d like you to match’? And if you can match it with this content, you can match it with other content they want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key’?

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device-side component — and it has the device-side component, by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users’ data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component, where the voucher is created but nothing is learned, and a server-side component, which is where that voucher is sent, along with data coming to Apple’s service, and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will simply refuse to act if it is asked by a government or other agency to compromise the system by adding things that are not CSAM to the database, in order to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well, first, this is launching only for U.S. iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the U.S. when they speak in that way. And therefore it seems to be the case that people agree U.S. law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work, because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review it to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping through a lot of hoops, including having Apple change its internal process to refer material that is not illegal the way known CSAM is, and we don’t believe that there’s a basis on which people will be able to make that request in the U.S. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.
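The layered safeguards Neuenschwander lists here (a match threshold, then human review, before any external referral) amount to a simple gating pipeline. As a hedged illustration only, with the threshold value and all names being assumptions rather than Apple’s published figures:

```python
# Hypothetical value; Apple has not published the actual threshold.
REVIEW_THRESHOLD = 30

def account_action(voucher_count: int, review_confirms_csam: bool) -> str:
    """Decide what happens to an account, per the described safeguards."""
    # Below the threshold, Apple learns nothing about individual photos.
    if voucher_count < REVIEW_THRESHOLD:
        return "no action"
    # Above it, manual human review gates any referral to an external entity.
    return "refer to NCMEC" if review_confirms_csam else "no referral"
```

The point of the two gates is that neither a handful of matches nor an unreviewed flag can, on its own, trigger a report.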

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts, including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos, is functioning if you’re not using iCloud Photos.

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans on its servers, and the question becomes: why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features: the one global operating system that’s the same for all users who download it and install it. So in one property it is much more challenging even to target it to an individual user; on the server side, that’s actually quite easy, trivial. Being able to have some of those properties by building it into the device and ensuring it’s the same for all users with the feature enabled gives a strong privacy property.

Secondly, you point out how use of on-device technology is privacy-preserving, and in this case, that’s a representation that I would make to you, again: it’s really the alternative, where users’ libraries have to be processed on a server, that is less private.

What we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not engaged in this illegal behavior. Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud library has to be processed as a result of this feature. Instead what we’re able to do is create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents, or learn anything about the images, of users specifically who collect photos that match illegal, known CSAM hashes. That’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in clear, decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where the only thing learned is which images match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users, though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing an attacker who has undergone such an extremely difficult action as breaching someone’s device would want is to trigger a manual review of an account doesn’t make much sense.

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to occur only in cases where it’s a legitimate, high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome for an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports we make to NCMEC are high-value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not an image matched. And so the threshold allows us to reach the point where we expect a false reporting rate for review of one in one trillion accounts per year. So, consistent with the fact that we do not have any interest in looking through users’ photo libraries outside of those holding collections of known CSAM, the threshold allows us to have high confidence that the accounts we review are ones that, when we refer them to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.
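The one-in-a-trillion figure is a consequence of thresholding: even with a nonzero per-image false-match rate, requiring many independent matches collapses the account-level false-positive probability. A back-of-the-envelope calculation, using invented numbers rather than Apple’s (the real system’s rates and threshold are not public):

```python
from math import exp, lgamma, log

def log_binom_pmf(n: int, k: int, p: float) -> float:
    # Log of the binomial probability mass function, computed via
    # lgamma so large n does not overflow a float.
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def account_false_positive_prob(n_images: int, p: float, threshold: int) -> float:
    """P(an innocent account with n_images photos accrues at least
    `threshold` false matches), assuming matches are independent."""
    return sum(exp(log_binom_pmf(n_images, k, p))
               for k in range(threshold, n_images + 1))

# Invented numbers: 10,000 photos, a one-in-a-million per-image
# false-match rate, and a threshold of 30 matches.
prob = account_false_positive_prob(10_000, 1e-6, 30)
```

Even with these made-up rates the result is vanishingly small, which is the intuition behind pairing a perceptual (hence imperfect) hash with an account-level threshold and manual review.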

This Week in Apps: In-app events hit the App Store, TikTok tries Stories, Apple reveals new child safety plan

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry continues to grow, with a record 218 billion downloads and $143 billion in global consumer spend in 2020. Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV. Currently, the average American watches 3.7 hours of live TV per day, but now spends four hours per day on their mobile devices.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

This Week in Apps offers a way to keep up with this fast-moving industry in one place, with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and suggestions about new apps and games to try, too.

Do you want This Week in Apps in your inbox every Saturday? Sign up here: techcrunch.com/newsletters

Top Stories

Apple to scan for CSAM imagery

Apple announced a major initiative to scan devices for CSAM imagery. The company on Thursday announced a new set of features, arriving later this year, that will detect child sexual abuse material (CSAM) in its cloud and report it to law enforcement. Companies like Dropbox, Google and Microsoft already scan for CSAM in their cloud services, but Apple had allowed users to encrypt their data before it reached iCloud. Now, Apple’s new technology, NeuralHash, will run on users’ devices to detect when a user uploads known CSAM imagery — without having to first decrypt the images. It can even detect the imagery if it’s been cropped or edited in an attempt to avoid detection.

Meanwhile, on iPhone and iPad, the company will roll out protections to Messages app users that will filter images and alert children and parents if sexually explicit photos are sent to or from a child’s account. Children will not be shown the images but will instead see a grayed-out placeholder. If they try to view the image anyway through the link, they’ll be shown interruptive screens that explain why the material may be harmful, and they are warned that their parents will be notified.

Some privacy advocates pushed back at the idea of such a system, believing it could expand to end-to-end encrypted photos, lead to false positives, or set the stage for more on-device government surveillance in the future. But many cryptology experts believe the system Apple developed provides a good balance between privacy and utility, and have offered their endorsement of the technology. In addition, Apple said reports are manually reviewed before being sent to the National Center for Missing and Exploited Children (NCMEC).

The changes may also benefit iOS developers who deal in user photos and uploads, as predators will no longer store CSAM imagery on iOS devices in the first place, given the new risk of detection.

In-App Events appear on the App Store

Image Credits: Apple

Though not yet publicly available to all users, those testing the new iOS 15 mobile operating system got their first glimpse of a new App Store discovery feature this week: “in-app events.” First announced at this year’s WWDC, the feature will allow developers and Apple editors alike to showcase upcoming events taking place inside apps directly on the App Store.

The events can appear on the App Store homepage, on the app’s product pages or can be discovered through personalized recommendations and search. In some cases, editors will curate events to feature on the App Store. But developers will also be provided tools to submit their own in-app events. TikTok’s “Summer Camp” for creators was one of the first in-app events to be featured, where it received a top spot on the iPadOS 15 App Store.

Weekly News

Platforms: Apple

Apple expands support for student IDs on iPhone and Apple Watch ahead of the fall semester. Tens of thousands more U.S. and Canadian colleges will now support mobile student IDs in the Apple Wallet app, including Auburn University, Northern Arizona University, University of Maine, New Mexico State University and others.

Apple was accused of promoting scam apps in the App Store’s featured section. The company’s failure to properly police its store is one thing, but to curate an editorial list that actually includes the scams is quite another. One of the games rounded up under “Slime Relaxations,” an already iffy category to say the least, was a subscription-based slime simulator that locked users into a $13 AUD per week subscription. One of the apps on the curated list didn’t even function, implying that Apple’s editors hadn’t actually tested the apps they recommend.

Tax changes hit the App Store. Apple announced tax and price changes for apps and IAPs in South Africa, the U.K. and all territories using the Euro currency, all of which will see decreases. Increases will occur in Georgia and Tajikistan, due to new tax changes. Proceeds on the App Store in Italy will be increased to reflect a change to the Digital Services Tax effective rate.

Game Center changes, too. Apple said that on August 4, a new certificate for server-based Game Center verification will be available via the publicKeyUrl.

Fintech

Robinhood stock jumped more than 24% to $46.80 on Tuesday after initially falling 8% on its first day of trading last week, after which it had continued to trade below its opening price of $38.

Square’s Cash app nearly doubled its gross profit to $546 million in Q2, but also reported a $45 million impairment loss on its bitcoin holdings.

Coinbase’s app now lets you buy your cryptocurrency using Apple Pay. The company previously made its Coinbase Card compatible with Apple Pay in June.

Social

An anonymous app called Sendit, which relies on Snap Kit to function, is climbing the charts of the U.S. App Store after Snap suspended similar apps YOLO and LMK. Snap was sued by the parent of a child who was bullied through those apps and later died by suicide. Sendit also allows for anonymity, and reviews compare it to YOLO, but some reviews also complain about bullying. This isn’t the first time Snap has been involved in a lawsuit over a young person’s death connected to its app: the company was also sued over its irresponsible “speed filter,” which critics said encouraged unsafe driving. Three young men died using the filter, which captured them doing 123 mph.

TikTok is testing Stories. As Twitter’s own Stories integration, Fleets, shuts down, TikTok confirmed it’s testing its own Stories product. TikTok Stories appear in a left-hand sidebar and allow users to post ephemeral images or video that disappear in 24 hours. Users can also comment on Stories, which are visible to the creator and their mutual friends. Stories may make more sense on TikTok than they did on Twitter: TikTok is already known as a creative platform, and Stories give the app a more familiar place to integrate its effects toolset and, eventually, advertisements.

Facebook has again re-arranged its privacy settings. The company continually moves around where its privacy features are located, ostensibly to make them easier to find. But users then have to re-learn where to go to find the tools they need, after they had finally memorized the location. This time, the settings have been grouped into six top-level categories, but “privacy” settings have been unbundled from one location to be scattered among the other categories.

A VICE report details ban-as-a-service operations that let anyone pay to harass or censor online creators on Instagram. One operation charged $60 per ban, according to its listing.

TikTok merged personal accounts with creator accounts. The change means now all non-business accounts on TikTok will have access to the creator tools under Settings, including Analytics, Creator Portal, Promote and Q&A. TikTok shared the news directly with subscribers of its TikTok Creators newsletter in August, and all users will get a push notification alerting them to the change, the company told us.

Discord now lets users customize their profile on its apps. The company added new features to its iOS and Android apps that let you add a description, links and emojis and select a profile color. Paid subscribers can also choose an image or GIF as their banner.

Twitter Spaces added a co-hosting option that allows up to two co-hosts to be added to the live audio chat rooms. Now Spaces can have one main host, two co-hosts and up to 10 speakers. Co-hosts have all the same moderation abilities as hosts, but can’t add or remove others as co-hosts.

Messaging

Tencent reopened new user sign-ups for its WeChat messaging app, after having suspended registrations last week for unspecified “technical upgrades.” The company, like many other Chinese tech giants, had to address new regulations from Beijing impacting the tech industry. New rules address how companies handle user data collection and storage, antitrust behavior and other checks on capitalist “excess.” The gaming industry is now worried it’s next to be impacted, with regulations that would restrict gaming for minors to fight addiction.

WhatsApp is adding a new feature that will allow users to send photos and videos that disappear after a single viewing. The Snapchat-inspired feature, however, doesn’t alert you if the other person takes a screenshot — as Snap’s app does. So it may not be ideal for sharing your most sensitive content.

Telegram’s update expands group video calls to support up to 1,000 viewers. It also announced that video messages can be recorded in higher quality and can be expanded, regular videos can be watched at 0.5x or 2x speed, screen sharing with sound is available for all video calls, including 1-on-1 calls, and more.

Streaming & Entertainment

American Airlines added free access to TikTok aboard its Viasat-equipped aircraft. Passengers will be able to watch the app’s videos for up to 30 minutes for free and can even download the app if it’s not already installed. After the free time, they can opt to pay for Wi-Fi to keep watching. Considering how easy it is to fall into multi-hour TikTok viewing sessions without knowing it, the addition of the addictive app could make long plane rides feel shorter. Or at least less painful.

Chinese TikTok rival Kuaishou saw stocks fall by more than 15% in Hong Kong, the most since its February IPO. The company is another victim of an ongoing market selloff triggered by increasing investor uncertainty related to China’s recent crackdown on tech companies. Beijing’s campaign to rein in tech has also impacted Tencent, Alibaba, Jack Ma’s Ant Group, food delivery company Meituan and ride-hailing company Didi. Also related, Kuaishou shut down its controversial app Zynn, which had been paying users to watch its short-form videos, including those stolen from other apps.

Twitch overtook YouTube in consumer spending per user in April 2021, and now sees $6.20 per download as of June compared with YouTube’s $5.60, Sensor Tower found.

Image Credits: Sensor Tower

Spotify confirmed tests of a new ad-supported tier called Spotify Plus, which is only $0.99 per month and offers unlimited skips (like free users get on the desktop) and the ability to play the songs you want, instead of only being forced to use shuffle mode.

The company also noted in a forum posting that it’s no longer working on AirPlay 2 support, due to “audio driver compatibility” issues.

Mark Cuban-backed audio app Fireside asked its users to invest in the company via an email sent to creators that didn’t share deal terms. The app has yet to launch.

YouTube kicks off its $100 million Shorts Fund aimed at taking on TikTok by providing creators with cash incentives for top videos. Creators will get bonuses of $100 to $10,000 based on their videos’ performance.

Dating

Match Group announced during its Q2 earnings that it plans to add audio and video chat, including group live video and other livestreaming technologies, to several of the company’s brands over the next 12 to 24 months. The developments will be powered by innovations from Hyperconnect, the social networking company that became Match’s biggest acquisition to date when it bought the Korean app maker this year for a sizable $1.73 billion. Since then, Match has been spotted testing group live video on Tinder, but says that particular product is not launching in the near term. At least two brands will see Hyperconnect-powered integrations in 2021.

Photos

The Photo & Video category on U.S. app stores saw strong growth in the first half of the year, a Sensor Tower report found. Consumer spend among the top 100 apps grew 34% YoY to $457 million in Q2 2021, with the majority of the revenue (83%) taking place on iOS.

Image Credits: Sensor Tower

Gaming

Epic Games revealed that Ariana Grande will host its in-app Rift Tour event, which runs August 6-8.

Pokémon GO influencers threatened to boycott the game after Niantic removed the COVID safety measures that had allowed people to more easily play while social distancing. Niantic’s move seemed ill-timed, given the Delta variant is causing a new wave of COVID cases globally.

Health & Fitness

Apple kicked out an app called Unjected from the App Store. The new social app billed itself as a community for the unvaccinated, allowing like-minded users to connect for dating and friendships. Apple said the app violated its policies for COVID-19 content.

Google Pay expanded support for vaccine cards. In Australia, Google’s payments app now allows users to add their COVID-19 digital certification to their device for easy access. The option is available through Google’s newly updated Passes API which lets government agencies distribute digital versions of vaccine cards.

COVID Tech Connect, a U.S. nonprofit initially dedicated to collecting devices like phones and tablets for COVID ICU patients, has now launched its own app. The app, TeleHome, is a device-agnostic, HIPAA-compliant way for patients to place a video call for free at a time when the Delta variant is again filling ICU wards, this time with the unvaccinated — a condition that sometimes overlaps with being low-income. Some among the working poor have been hesitant to get the shot because they can’t miss a day of work, and are worried about side effects. Which is why the Biden administration offered a tax credit to SMBs who offered paid time off to staff to get vaccinated and recover.

Popular journaling app Day One, recently acquired by WordPress.com owner Automattic, rolled out a new “Concealed Journals” feature that lets users hide content from others’ view. By tapping the eye icon, content can be easily concealed on a journal-by-journal basis, which can be useful for those who write in their journal in public places, like coffee shops or public transportation.

Edtech

Recently IPO’d language learning app Duolingo is developing a math app for kids. The company says it’s still “very early” in the development process, but will announce more details at its annual conference, Duocon, later this month.

Educational publisher Pearson launched an app that offers U.S. students access to its 1,500 titles for a monthly subscription of $14.99. The Pearson+ mobile app (ack, another +) also offers the option of paying $9.99 per month for access to a single textbook for a minimum of four months.

News & Reading

Quora jumps into the subscription economy. Still not profitable from ads alone, Quora announced two new products that allow its expert creators to monetize their content on its service. With Quora+ ($5/mo or $50/yr), subscribers can pay for any content that a creator paywalls. Creators can choose to enable an adaptive paywall that uses an algorithm to determine when to show the paywall. Another product, Spaces, lets creators write paywalled publications on Quora, similar to Substack, but with Quora taking only a 5% cut instead of the 10% Substack takes.

Utilities

Google Maps on iOS added a new live location-sharing feature for iMessage users, allowing you to more easily share your ETA with friends and even how much battery life you have left. The feature competes with iMessage’s built-in location sharing, and offers sharing windows from one hour up to three days. The app also gained a dark mode.

Security & Privacy

Controversial crime app Citizen launched a $20 per month “Protect” service that includes live agent support (who can refer calls to 911 if need be). The agents can gather your precise location, alert your designated emergency contacts, help you navigate to a safe location and monitor the situation until you feel safe. The system of live agent support is similar to in-car or in-home security and safety systems, like those from ADT or OnStar, but works with users out in the real world. The controversial part, however, is the company behind the product: Citizen has been making headlines for launching private security fleets outside law enforcement, and recently offered a reward in a manhunt for an innocent person based on unsubstantiated tips.

Funding and M&A

🤝 Square announced its acquisition of “buy now, pay later” giant Afterpay in a $29 billion deal that values the Australian firm at more than 30% above the stock’s last closing price of AU$96.66. Afterpay has served over 16 million customers and nearly 100,000 merchants globally to date, and the deal comes at a time when the BNPL space is heating up. Apple has also gotten into the market recently with an Affirm partnership in Canada.

🤝 Gaming giant Zynga acquired Chinese game developer StarLark, the team behind the mobile golf game Golf Rival, from Betta Games for $525 million in both cash and stock. Golf Rival is the second-largest mobile golf game behind Playdemic’s Golf Clash, and EA is in the process of buying that studio for $1.4 billion.

💰 U.K.-based Humanity raised an additional $2.5 million for its app that claims to help slow down aging, bringing its total raised to date to $5 million. Backers include Calm’s co-founders, MyFitnessPal’s co-founder and others in the health space. The app works by benchmarking health advice against real-world data to help users put better health practices into action.

💰 YELA, a Cameo-like app for the Middle East and South Asia, raised $2 million led by U.S. investors that include Tinder co-founder Justin Mateen and Sean Rad, general partner of RAD Fund. The app is focusing on signing celebrities in the regions it serves, where smartphone penetration is high and over 60% of the population is under 35.

💰 London-based health and wellness app maker Palta raised a $100 million Series B led by VNV Global. The company’s products include Flo.Health, Simple Fasting, Zing Fitness Coach and others, which reach a combined 2.4 million active, paid subscribers. The funds will be used to create more mobile subscription products.

🤝 Emoji database and Wikipedia-like site Emojipedia was acquired by Zedge, the makers of a phone personalization app offering wallpapers, ringtones and more to 35 million MAUs. Deal terms weren’t disclosed. Emojipedia says the deal provides it with more stability and the opportunity for future growth. For Zedge, the deal provides🤨….um, a popular web resource it thinks it can better monetize, we suspect.

💰 Mental health app Revery raised $2 million led by Sequoia Capital India’s Surge program for its app that combines cognitive behavioral therapy for insomnia with mobile gaming concepts. The company will focus on other mental health issues in the future.

💰 London-based Nigerian-operating fintech startup Kuda raised a $55 million Series B, valuing its mobile-first challenger bank at $500 million. The inside round was co-led by Valar Ventures and Target Global.

💰 Vietnamese payments provider VNLife raised $250 million in a round led by U.S.-based General Atlantic and Dragoneer Investment Group. PayPal Ventures and others also participated. The round values the business at over $1 billion.

Downloads

Mastodon for iPhone

Fans of decentralized social media efforts now have a new app. The nonprofit behind the open source decentralized social network Mastodon released an official iPhone app, aimed at making the network more accessible to newcomers. The app allows you to find and follow people and topics; post text, images, GIFs, polls, and videos; and get notified of new replies and reblogs, much like Twitter.

Xingtu

Embedded TikTok from @_666eve: “ITS SO COOL FRFR do u guys want a tutorial?” (#fypシ #醒图 #醒图app, ♬ original sound – Ian Asher)

TikTok users are teaching each other how to switch over to the Chinese App Store in order to get ahold of the Xingtu app for iOS. (An Android version is also available.) The app offers advanced editing tools that let users edit their face and body, like FaceTune, apply makeup, add filters and more. While image-editing apps can be controversial for how they can impact body acceptance, Xingtu offers a variety of artistic filters which is what’s primarily driving the demand. It’s interesting to see the lengths people will go to just to get a few new filters for their photos — perhaps making a case for Instagram to finally update its Post filters instead of pretending no one cares about their static photos anymore.

Tweets

Facebook still dominating top charts, but not the No. 1 spot:  

Not cool, Apple: 

This user acquisition strategy: 

Maybe Stories don’t work everywhere: 

Apple says it will begin scanning iCloud Photos for child abuse images

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with some resistance from security experts and privacy advocates, but also from some users accustomed to Apple’s approach to security and privacy, one that most other companies don’t have.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before it ever makes it into the hands of Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated to be released in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. Any time you modify an image slightly, it changes the hash and can prevent matching. Apple says NeuralHash tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash.
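As a rough intuition for how this kind of hashing differs from a cryptographic hash, here is a toy “average hash” in Python. This is emphatically not NeuralHash, just a minimal stand-in illustrating the goal that visually similar images should map to the same hash while different images do not:

```python
# Toy perceptual hash sketch ("average hash"). Illustrative only:
# NOT Apple's NeuralHash, which uses a neural network and is far more robust.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's average.
    return "".join("1" if p > avg else "0" for p in flat)

# Two tiny "images": the second is the first with slight brightness tweaks.
img = [[10, 200], [220, 30]]
img_tweaked = [[12, 198], [221, 29]]
img_different = [[200, 10], [30, 220]]

assert average_hash(img) == average_hash(img_tweaked)    # similar -> same hash
assert average_hash(img) != average_hash(img_different)  # different -> differs
```

A cryptographic hash like SHA-256 would behave the opposite way: even a one-pixel tweak would produce a completely different digest, which is why perceptual hashing is used for this kind of matching.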

Before an image is uploaded to iCloud Photos, those hashes are matched on the device against a database of known hashes of child abuse imagery, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing what the image is or alerting the user.

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing that allows it only to decrypt the contents if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said — for example — that if a secret is split into a thousand pieces and the threshold is ten images of child abuse content, the secret can be reconstructed from any of those ten images.
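The thousand-pieces example above is the classic behavior of Shamir’s secret sharing, which the following sketch illustrates. The parameters are hypothetical and this is not Apple’s actual construction, just the underlying threshold principle: below the threshold the shares reveal nothing useful; at the threshold the secret can be reconstructed.

```python
# Minimal Shamir-style threshold secret sharing sketch (illustrative only).
import random

P = 2**61 - 1  # a prime; all arithmetic is mod P

def make_shares(secret, threshold, n):
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# A secret split into a thousand pieces with a threshold of ten:
shares = make_shares(secret=42, threshold=10, n=1000)
assert reconstruct(shares[:10]) == 42   # any 10 shares recover the secret
assert reconstruct(shares[5:9]) != 42   # 9 or fewer reveal nothing useful
```

The modular inverse via `pow(den, P - 2, P)` relies on Fermat’s little theorem, which is why P must be prime.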

It’s at that point Apple can decrypt the matching images, manually verify the contents, disable a user’s account and report the imagery to NCMEC, which then passes it to law enforcement. Apple says this process is more privacy-minded than scanning files in the cloud, as NeuralHash only searches for known, not new, child abuse imagery. Apple said there is a one-in-one-trillion chance of a false positive, but an appeals process is in place in the event an account is mistakenly flagged.

Apple has published technical details on its website about how NeuralHash works, which was reviewed by cryptography experts.

But despite the wide support of efforts to combat child sexual abuse, there is still a component of surveillance that many would feel uncomfortable handing over to an algorithm, and some security experts are calling for more public discussion before Apple rolls the technology out to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also faced considerable pressure from the U.S. government and its allies to weaken or backdoor the encryption used to protect their users’ data to allow law enforcement to investigate serious crime.

Tech giants have refused efforts to backdoor their systems, but have faced pushback when trying to further shut out government access. Although data stored in iCloud is encrypted in a way that even Apple cannot access it, Reuters reported last year that Apple dropped a plan to encrypt users’ full phone backups to iCloud after the FBI complained that it would harm investigations.

The news about Apple’s new CSAM detection tool, announced without public discussion, also sparked concerns that the technology could be abused to flood victims with child abuse imagery that could result in their accounts getting flagged and shuttered, but Apple downplayed the concerns and said a manual review would examine the evidence for possible misuse.

Apple said NeuralHash will roll out in the U.S. at first, but would not say if, or when, it would be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools across the European Union after the practice was inadvertently banned there. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but will be a requirement if you do. After all, your device belongs to you, but Apple’s cloud does not.

Tech leaders can be the secret weapon for supercharging ESG goals

Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.

What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.

Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.

CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.

Reducing environmental impact

As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.

Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations make sure you’re not running too many or overprovisioned cloud VMs based on demand. You can also move to serverless computing, which does much of this scaling work automatically.
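The matching of capacity to demand that autoscaling performs can be sketched in a few lines. The numbers and function below are illustrative, not any cloud provider’s API; managed autoscalers implement this logic (and much more) for you:

```python
# Toy demand-based autoscaling sketch: size the fleet to current traffic so
# capacity isn't overprovisioned, within a floor and a cap. Illustrative only.
import math

def desired_instances(current_rps, rps_per_instance, min_n=1, max_n=20):
    """Return how many instances are needed for the current request rate."""
    n = math.ceil(current_rps / rps_per_instance)
    return max(min_n, min(max_n, n))

assert desired_instances(950, 100) == 10    # scale to demand
assert desired_instances(10, 100) == 1      # floor: never below min_n
assert desired_instances(5000, 100) == 20   # cap: never above max_n
```

Idle overprovisioned instances burn energy regardless of load, which is why right-sizing like this reduces both cost and carbon footprint.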

Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.

So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infra provisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
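The tradeoff described above, weighing carbon intensity against cost and latency before deploying, can be sketched as a simple constrained selection. The region names and numbers below are made up for illustration, not real provider data:

```python
# Hedged sketch of carbon-aware region selection: pick the lowest-carbon
# cloud region that still meets latency and cost constraints.
# All regions and figures below are hypothetical.

regions = [
    # (name,       gCO2/kWh, latency_ms, $/vCPU-hr)
    ("region-a",   520,      40,         0.045),
    ("region-b",   120,      85,         0.048),
    ("region-c",    60,     180,         0.050),
]

def pick_region(regions, max_latency_ms, max_cost):
    eligible = [r for r in regions
                if r[2] <= max_latency_ms and r[3] <= max_cost]
    # Among regions meeting the constraints, minimize carbon intensity.
    return min(eligible, key=lambda r: r[1])[0] if eligible else None

assert pick_region(regions, max_latency_ms=100, max_cost=0.05) == "region-b"
assert pick_region(regions, max_latency_ms=50, max_cost=0.05) == "region-a"
```

With a loose latency budget the greenest region wins; with a tight one, the picker falls back to a dirtier but closer region, which is exactly the in-workflow tradeoff Scott describes.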

Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information, which is helping the company track progress toward its target of reducing energy intensity by 25% by 2025.

Make social impact

Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.

Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.

When thinking about product design, a product needs to be as useful and effective as it is sustainable. By thinking about sustainability and societal impact as a core element of product innovation, there is an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions, and launched Lush Lens — a virtual package app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of (plastic) packaging.

Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.

It is therefore critical to incorporate responsible AI practices, so benefits from AI and ML can be realized by your entire user base and that inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.

Impact governance

Promoting governance does not stop with the board and CEO; CTOs play an important role, too.

Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.

It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.

These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.

Noetic Cyber emerges from stealth with $15M led by Energy Impact Partners

Noetic Cyber, a cloud-based continuous cyber asset management and controls platform, has launched from stealth with a Series A funding round of $15 million led by Energy Impact Partners.

The round was also backed by Noetic’s existing investors, TenEleven Ventures and GlassWing Ventures, and brings the total amount of funds raised by the startup to $20 million following a $5 million seed round. Shawn Cherian, a partner at Energy Impact Partners, will join the Noetic board, while Niloofar Razi Howe, a senior operating partner at the investment firm, will join Noetic’s advisory board.

“Noetic is a true market disruptor, offering an innovative way to fix the cyber asset visibility problem — a growing and persistent challenge in today’s threat landscape,” said Howe.

The Massachusetts-based startup claims to be taking a new approach to the cyber asset management problem. Unlike traditional solutions, Noetic is not agent-based, instead using API aggregation and correlation to draw insights from multiple security and IT management tools.

“What makes us different is that we’re putting orchestration and automation at the heart of the solution, so we’re not just showing security leaders that they have problems, but we’re helping them to fix them,” Paul Ayers, CEO and co-founder of Noetic Cyber tells TechCrunch.

Ayers was previously a top exec at PGP Corporation (acquired by Symantec for $370 million) and Vormetric (acquired by Thales for $400 million) and founded Noetic Cyber with Allen Roger and Allen Hadden, who have previously worked at cybersecurity vendors including Authentica, Raptor and Axent. All three were also integral to the development of Resilient Systems, which was acquired by IBM.

“The founding team’s experience in the security, orchestration, automation and response market gives us unique experience and insights to make automation a key pillar of the solution,” Ayers said. “Our model gives you the certainty to make automation possible, the goal is to find and fix problems continuously, getting assets back to a secure state.”

“The development of the technology has been impacted by the current cyber landscape, and the pandemic, as some of the market drivers we’ve seen around the adoption of cloud services, and the increased use of unmanaged devices by remote workers, are driving a great need for accurate cyber asset discovery and management.”

The company, which currently has 20 employees, says it plans to use the newly raised funds to double its headcount by the end of the year, as well as to increase its go-to-market capability in the U.S. and the U.K. to grow its customer base and revenue.

“In terms of technology development, this investment allows us to continue to add development and product management talent to the team to build on our cyber asset management platform,” Ayers said. 

“The beauty of our approach is that it allows us to easily add more applications and use cases on top of our core asset visibility and management model. We will continue to add more connectors to support customer use cases and will be bringing a comprehensive controls package to market later in 2021, as well as a community edition in 2022.”

Nym gets $6M for its anonymous overlay mixnet to sell privacy as a service

Switzerland-based privacy startup Nym Technologies has raised $6 million, which is being loosely pegged as a Series A round.

Earlier raises included a $2.5M seed round in 2019. The founders also took in grant money from the European Union’s Horizon 2020 research fund during an earlier R&D phase developing the network tech.

The latest funding will be used to continue commercial development of network infrastructure that combines an old idea for obfuscating the metadata of data packets at the transport network layer (mixnets) with a crypto-inspired reputation and incentive mechanism designed to drive the required quality of service and support a resilient, decentralized infrastructure.

Nym’s pitch is it’s building “an open-ended anonymous overlay network that works to irreversibly disguise patterns in Internet traffic”.

Unsurprisingly, given its attention to crypto mechanics, investors in the Series A have strong crypto ties — and cryptocurrency-related use-cases are also where Nym expects its first users to come from — with the round led by Polychain Capital, with participation from a number of smaller European investors including Eden Block, Greenfield One, Maven11, Tioga, and 1kx.

Commenting in a statement, Will Wolf of Polychain Capital, said: “We’re incredibly excited to partner with the Nym team to further their mission of bringing robust, sustainable and permissionless privacy infrastructure to all Internet users. We believe the Nym network will provide the strongest privacy guarantees with the highest quality of service of any mixnet and thus may become a very valuable piece of core internet infrastructure.”

The Internet’s ‘original sin’ was that core infrastructure wasn’t designed with privacy in mind. Therefore the level of complexity involved in mixnets — shuffling and delaying encrypted data packets in order to shield sender-to-recipient metadata from adversaries with a global view of a network — probably seemed like over-engineering all the way back when the web’s scaffolding was being pieced together.

But then came Bitcoin and the crypto boom and — also in 2013 — the Snowden revelations, which ripped the veil off the NSA’s ‘collect it all’ mantra, as Booz Allen Hamilton subcontractor Edward Snowden risked it all to dump data on his own (and other) governments’ mass surveillance programs. Suddenly network level adversaries were front page news. And so was Internet privacy.

Since Snowden’s big reveal, there’s been a slow burn of momentum for privacy tech — with rising consumer awareness fuelling usage of services like e2e encrypted email and messaging apps. Sometimes in spurts and spikes, related to specific data breaches and scandals. Or indeed privacy-hostile policy changes by mainstream tech giants (hi Facebook!).

Legal clashes between surveillance laws and data protection rights are also causing growing b2b headaches, especially for US-based cloud services. While growth in cryptocurrencies is driving demand for secure infrastructure to support crypto trading.

In short, the opportunity for privacy tech, both b2b and consumer-facing, is growing. And the team behind Nym thinks conditions look ripe for general purpose privacy-focused networking tech to take off too.

Of course there is already a well known anonymous overlay network in existence: Tor, which does onion routing to obfuscate where traffic was sent from and where it ends up.
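For readers unfamiliar with the mechanics, onion routing can be sketched as layered encryption: the sender wraps a message once per relay, and each relay peels off exactly one layer, so no single node sees both the origin and the final plaintext. The sketch below is a toy illustration only (a XOR ‘cipher’ stands in for the real public-key cryptography Tor uses, and the three hop keys are invented for demonstration):

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" purely for illustration -- not secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    # Encrypt once per relay, in reverse hop order, so the first
    # relay on the path peels the outermost layer.
    for key in reversed(hop_keys):
        message = xor(message, key)
    return message

def unwrap_at_each_hop(onion: bytes, hop_keys: list[bytes]) -> bytes:
    # Each relay removes exactly one layer; only after the last hop
    # does the plaintext emerge.
    for key in hop_keys:
        onion = xor(onion, key)
    return onion

keys = [os.urandom(16) for _ in range(3)]   # one key per relay
onion = wrap(b"hello", keys)
assert unwrap_at_each_hop(onion, keys) == b"hello"
```

In Tor proper, each relay holds only its own key, so unwrapping all the layers in one function here is purely to show that the layers compose and invert.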

The node-hopping component of Nym’s network shares a feature with the Tor network. But Tor does not do packet mixing — and Nym’s contention is that a functional mixnet can provide even stronger network-level privacy.

It sets out the case on its website — arguing that “Tor’s anonymity properties can be defeated by an entity that is capable of monitoring the entire network’s ‘entry’ and ‘exit’ nodes” since it does not take the extra step of adding “timing obfuscation” or “decoy traffic” to obfuscate the patterns that could be exploited to deanonymize users.

“Although these kinds of attacks were thought to be unrealistic when Tor was invented, in the era of powerful government agencies and private companies, these kinds of attacks are a real threat,” Nym suggests, further noting another difference in that Tor’s design is “based on a centralized directory authority for routing”, whereas Nym fully decentralizes its infrastructure.

Proving that suggestion will be quite the challenge, of course. And Nym’s CEO is upfront in his admiration for Tor — saying it is the best technology for securing web browsing right now.

“Most VPNs and almost all cryptocurrency projects are not as secure or as private as Tor — Tor is the best we have right now for web browsing,” says Nym founder and CEO Harry Halpin. “We do think Tor made all the right decisions when they built the software — at the time there was no interest from venture capital in privacy, there was only interest from the US government. And the Internet was too slow to do a mixnet. And what’s happened is, fast-forward 20 years, things have transformed.

“The US government is no longer viewed as a defender of privacy. And now — weirdly enough — all of a sudden venture capital is interested in privacy and that’s a really big change.”

With such a high level of complexity involved in what Nym’s doing it will, very evidently, need to demonstrate the robustness of its network protocol and design against attacks and vulnerabilities on an ongoing basis — such as attacks that seek to spot patterns or identify dummy traffic in order to relink packets to senders and receivers.

The tech is open source but Nym confirms the plan is to use some of the Series A funding for an independent audit of new code.

It also touts the number of PhDs it’s hired to-date — and plans to hire a bunch more, saying it will be using the new round to more than double its headcount, including hiring cryptographers and developers, as well as marketing specialists in privacy.

The main motivation for the raise, per Halpin, is to spend on more R&D to explore — and (he hopes) — solve some of the more specific use-cases it’s kicking around, beyond the basic one of letting developers use the network to shield user traffic (a la Tor).

Nym’s whitepaper, for example, touts the possibility for the tech being used to enable users to prove they have the right to access a service without having to disclose their actual identity to the service provider.

Another big difference vs Tor is that Tor is a not-for-profit — whereas Nym wants to build a for-profit business around its Mixnet.

It intends to charge users for access to the network — so for the obfuscation-as-a-service of having their data packets mixed into a crowd of shuffled, encrypted and proxy node-hopped others.

But potentially also for some more bespoke services — with Nym’s team eyeing specific use-cases such as whether its network could offer itself as a ‘super VPN’ to the banking sector to shield their transactions; or provide a secure conduit for AI companies to carry out machine learning processing on sensitive data-sets (such as healthcare data) without risking exposing the information itself.

“The main reason we raised this Series A is we need to do more R&D to solve some of these use-cases,” says Halpin. “But what impressed Polychain was they said wow there’s all these people that are actually interested in privacy — that want to run these nodes, that actually want to use the software. So originally when we envisaged this startup we were imagining more b2b use-cases I guess and what I think Polychain was impressed with was there seemed to be demand from b2c; consumer demand that was much higher than expected.”

Halpin says they expect the first use-cases and early users to come from the crypto space — where privacy concerns routinely attach themselves to blockchain transactions.

The plan is to launch the software by the end of the year or early next, he adds.

“We will have at least some sort of chat applications — for example it’s very easy to use our software with Signal… so we do think something like Signal is an ideal use-case for our software — and we would like to launch with both a [crypto] wallet and a chat app,” he says. “Then over the next year or two — because we have this runway — we can work more on kind of higher speed applications. Things like try to find partnerships with browsers, with VPNs.”

At this (still fairly early) stage of the network’s development — an initial testnet was launched in 2019 — Nym’s eponymous network has amassed over 9,000 nodes. These distributed, crowdsourced providers are only earning a NYM reputation token for now, and it remains to be seen how much exchangeable crypto value they might earn in the future as suppliers of key infrastructure if/when usage takes off.

Why didn’t Mixnets as a technology take off before, though? After all, the idea dates back to the 1980s. There’s a range of reasons, according to Halpin — issues with scalability being one of them. And a key design “innovation” he points to vis-a-vis its implementation of Mixnet technology is the ability to keep adding nodes so the network is able to scale to meet demand.

Another key addition is that the Nym protocol injects dummy traffic packets into the shuffle to make it harder for adversaries to decode the path of any particular message — aiming to bolster the packet mixing process against vulnerabilities like correlation attacks.
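As a rough illustration of the cover-traffic idea (a sketch, not Nym’s actual implementation), a mix node can pad each outgoing batch with decoy packets so that an observer always sees the same number of uniformly sized, random-looking cells leave. The 64-byte cell size and the batch size here are assumptions for demonstration:

```python
import os
import random

CELL_SIZE = 64  # assumed fixed packet size; real mixnets use uniform cells

def pad_with_decoys(batch: list[bytes], target_size: int) -> list[bytes]:
    # Top the batch up with dummy packets of the same size and
    # random-looking content, so real and decoy cells are
    # indistinguishable to an outside observer.
    decoys = [os.urandom(CELL_SIZE) for _ in range(target_size - len(batch))]
    padded = batch + decoys
    random.shuffle(padded)  # interleave decoys with real traffic
    return padded

real = [os.urandom(CELL_SIZE) for _ in range(3)]
out = pad_with_decoys(real, target_size=8)
assert len(out) == 8  # the wire always carries a full batch
```

The point is that an adversary counting packets in and out of a node can no longer match volumes: every batch looks the same size regardless of how much real traffic it carries.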

While the Nym network’s crypto-style reputation and incentive mechanism — which works to ensure the quality of mixing (“via a novel proof of mixing scheme”, as its whitepaper puts it) — is another differentiating component Halpin flags.

“One of our core innovations is we scale by adding servers. And the question is how do we add servers? To be honest we added servers by looking at what everyone had learned about reputation and incentives from cryptocurrency systems,” he tells TechCrunch. “We copied that — those insights — and attached them to mix networks. So the combination of the two things ends up being pretty powerful.

“The technology does essentially three things… We mix packets. You want to think about an unencrypted packet like a card, an encrypted packet you flip over so you don’t know what the card says, you collect a bunch of cards and you shuffle them. That’s all that mixing is — it just randomly permutes the packets… Then you hand them to the next person, they shuffle them. You hand them to the third person, they shuffle them. And then they hand the cards to whoever is at the end. And as long as different people gave you cards at the beginning you can’t distinguish those people.”
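Halpin’s card analogy maps directly onto what a mix node does. This minimal sketch shows only the core permutation step across a chain of three nodes; real mixnets also re-encrypt, batch and delay packets at each hop:

```python
import random
import secrets

def mix(batch: list[bytes]) -> list[bytes]:
    # A mix node collects a batch of equally sized encrypted packets
    # ("face-down cards"), randomly permutes them, and forwards the
    # batch, severing the link between arrival order and departure order.
    shuffled = list(batch)
    random.SystemRandom().shuffle(shuffled)
    return shuffled

# Three chained mix nodes, as in the analogy: each node re-shuffles.
packets = [secrets.token_bytes(64) for _ in range(5)]
out = packets
for _ in range(3):
    out = mix(out)

# Same packets come out, but the ordering carries no information
# about which sender produced which packet.
assert sorted(out) == sorted(packets)
```

Without the per-hop re-encryption that a real mixnet layers on top, shuffling alone would not help (the packets’ contents would still link input to output), which is why mixing and layered encryption always go together.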

More generally, Nym also argues it’s an advantage to be developing mixnet technology that’s independent and general purpose — folding all sorts and types of traffic into a shuffled pack — suggesting it can achieve greater privacy for users’ packets in this pooled crowd vs similar tech offered by a single provider to only their own users (such as the ‘privacy relay’ network recently announced by Apple).

In the latter case, an attacker already knows that the relayed traffic is being sent by Apple users who are accessing iCloud services. Whereas — as a general purpose overlay layer — Nym can, in theory, provide contextual coverage to users as part of its privacy mix. So another key point is that the level of privacy available to Nym users scales as usage does.

Historical performance issues with bandwidth and latency are other reasons Halpin cites for Mixnets being largely left on the academic shelf. (There have been some other deployments, such as Loopix — which Nym’s whitepaper says its design builds on by extending it into a “general purpose incentivized mixnet architecture” — but it’s fair to say the technology hasn’t exactly gone mainstream.)

Nonetheless, Nym’s contention is the tech’s time is finally coming; firstly because technical challenges associated with Mixnets can be overcome — because of gains in Internet bandwidth and compute power; as well as through incorporating crypto-style incentives and other design tweaks it’s introducing (e.g. dummy traffic) — but also, and perhaps most importantly, because privacy concerns aren’t simply going to disappear.

Indeed, Halpin suggests governments in certain countries may ultimately decide their exposure to certain mainstream tech providers which are subject to state mass surveillance regimes — whether that’s the US version or China’s flavor or elsewhere —  simply isn’t tenable over the longer run and that trusting sensitive data to corporate VPNs based in countries subject to intelligence agency snooping is a fool’s game.

(And it’s interesting to note, for example, that the European Data Protection Supervisor is currently conducting a review of EU bodies use of mainstream US cloud services from AWS and Microsoft to check whether they are in compliance with last summer’s Schrems II ruling by the CJEU, which struck down the EU-US Privacy Shield deal, after again finding US surveillance law to be essentially incompatible with EU privacy rights… )

Nym is betting that some governments will — eventually — come looking for alternative technology solutions to the spying problem. Although government procurement cycles make that play a longer game.

In the near term, Halpin says they expect interest and usage for the metadata-obscuring tech to come from the crypto world where there’s a need to shield transactions from view of potential hackers.

“The websites that [crypto] people use — these exchanges — have also expressed interest,” he notes, flagging that Nym also took in some funding from Binance Labs, the VC arm of the cryptocurrency exchange, after it was chosen to go through the Lab’s incubator program in 2018.

The issue for crypto users is their networks are (relatively) small, per Halpin — which makes them vulnerable to deanonymization attacks.

“The thing with a small network is it’s easy for random people to observe this. For example people who want to hack your exchange wallet — which happens all the time. So what cryptocurrency exchanges and companies that deal with cryptocurrency are concerned about is typically they do not want the IP address of their wallet revealed for certain kinds of transactions,” he adds. “This is a real problem for cryptocurrency exchanges — and it’s not that their enemy is the NSA; their enemy could be — and almost always is — an unknown, often lone, but highly skilled hacker. And these kinds of people can do network observations, on smaller networks like cryptocurrency networks, that are essentially as powerful as what the NSA could do to the entire Internet.”

There are now a range of startups seeking to decentralize various aspects of Internet or common computing infrastructure — from file storage to decentralized DNS. And while some of these tout increased security and privacy as core benefits of decentralization — suggesting they can ‘fix’ the problem of mass surveillance with an architecture that massively distributes data — Halpin argues that the privacy claim routinely attached to decentralized infrastructure is misplaced. (He points to a paper he co-authored on this topic, entitled Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments.)

“Almost all of those projects gain decentralization at the cost of privacy,” he argues. “Because any decentralized system is easier to observe than a centralized system — to a large extent — because the crowd has been spread out. If the adversary is sufficiently powerful, they can observe all the participants in the system. And historically we believe that most people who are interested in decentralization are not experts in privacy and underestimate how easy it is to observe decentralized systems — because most of these systems are actually pretty small.”

He points out there are “only” 10,000 full nodes in Bitcoin, for example, and a similar amount in Ethereum — while other, newer and more nascent decentralized services are likely to have fewer nodes, maybe even just a few hundred or thousand.

And while the Nym network has a similar amount of nodes to Bitcoin, the difference is it’s a mixnet too — so it’s not just decentralized but it’s also using multiple layers of encryption and traffic mixing and the various other obfuscation steps which he says “none of these other people do”.

“We assume the enemy is observing everything in our software,” he adds. “We are not what we call ‘security through obscurity’ — security through obscurity means you assume the enemy just can’t see everything; isn’t looking at your software too carefully; doesn’t know where all your servers are. But — realistically — in an age of mass surveillance, the enemy will know where all your services are and they can observe all the packets coming in, all the packets coming out. And that’s a real problem for decentralized networks.”

Post-Snowden, there’s certainly been growing interest in privacy by design — and a handful of startups and companies have been able to build momentum for services that promise to shield users’ data, such as DuckDuckGo (non-tracking search); Protonmail (e2e encrypted email); and Brave (privacy-safe browsing). Apple has also, of course, very successfully marketed its premium hardware under a ‘privacy respecting’ banner.

Halpin says he wants Nym to be part of that movement; building privacy tech that can touch the mainstream.

“Because there’s so much venture capital floating into the market right now I think we have a once in a generation chance — just as everyone was excited about p2p in 2000 — we have a once in a generation chance to build privacy technology and we should build companies which natively support privacy, rather than just trying to bolt it on, in a half hearted manner, onto non-privacy respecting business models.

“Now I think the real question — which is why we didn’t raise more money — is, is there enough consumer and business demand that we can actually discover what the cost of privacy actually is? How much are people willing to pay for it and how much does it cost? And because we do privacy at such a fundamental level, we say: what is the cost of a privacy-enhanced byte or packet? So that’s what we’re trying to figure out: How much would people pay just for a privacy-enhanced byte and how much does just a privacy-enhanced byte cost? And is this a small enough marginal cost that it can be added to all sorts of systems — just as we added TLS and encryption to all sorts of systems.”

GSA blocks senator from reviewing documents used to approve Zoom for government use

The General Services Administration has denied a senator’s request to review documents Zoom submitted to have its software approved for use in the federal government.

The denial was in response to a letter sent by Democratic senator Ron Wyden to the GSA in May, expressing concern that the agency cleared Zoom for use by federal agencies just weeks before a major security vulnerability was discovered in the app.

Wyden said the discovery of the bug raises “serious questions about the quality of FedRAMP’s audits.”

Zoom was approved to operate in government in April 2019 after receiving its FedRAMP authorization, a program operated by the GSA that ensures cloud services comply with a standardized set of security requirements designed to harden the service against some of the most common threats. Federal agencies cannot use cloud products or technologies that lack this authorization.

Months later, Zoom was forced to patch its Mac app after a security researcher found a flaw that could be abused to remotely switch on a user’s webcam without their permission. Apple was forced to intervene since users were still affected by the vulnerabilities even after uninstalling Zoom. As the pandemic spread and lockdowns were enforced, Zoom’s popularity skyrocketed — as did the scrutiny — including a technical analysis by reporters that found Zoom was not truly end-to-end encrypted as the company long claimed.

Wyden wrote to the GSA to say he found it “extremely concerning” that the security bugs were discovered after Zoom’s clearance. In the letter, the senator requested the documents known as the “security package,” which Zoom submitted as part of the FedRAMP authorization process, to understand how and why the app was cleared by GSA.

The GSA declined Wyden’s first request in July 2020 on the grounds that he was not a committee chair. In the new Biden administration, Wyden was named chair of the Senate Finance Committee and requested Zoom’s security package again.

But in a new letter sent to Wyden’s office late last month, GSA declined the request for the second time, citing security concerns.


“The security package you have requested contains highly sensitive proprietary and other confidential information relating to the security associated with the Zoom for Government product. Safeguarding this information is critical to maintaining the integrity of the offering and any government data it hosts,” said the GSA letter. “Based on our review, GSA believes that disclosure of the Zoom security package would create significant security risks.”

In response to the GSA’s letter, Wyden told TechCrunch that he was concerned that other flawed software may have been approved for use across the government.

“The intent of GSA’s FedRAMP program is good — to eliminate red tape so that multiple federal agencies don’t have to review the security of the same software. But it’s vitally important that whichever agency conducts the review do so thoroughly,” said Wyden. “I’m concerned that the government’s audit of Zoom missed serious cybersecurity flaws that were subsequently uncovered and exposed by security researchers. GSA’s refusal to share the Zoom audit with Congress calls into question the security of the other software products that GSA has approved for federal use.”

The people we spoke with who have first-hand knowledge of the FedRAMP process, either as government employees or at companies going through the certification, described FedRAMP as a comprehensive but by no means exhaustive list of checks that companies have to complete in order to meet the security requirements of the federal government.

Others said that the process had its limits and would benefit from reform. One person with knowledge of how FedRAMP works said the process was not a complete audit of a product’s source code but akin to a checklist of best practices and compliance requirements. Much of it relies on trusting the vendor, said the person, describing it as “an honor system.” Another person said the FedRAMP process cannot catch every bug, as evidenced by executive action taken by President Biden this week aimed at modernizing and improving the FedRAMP process.

Most of the people we spoke to weren’t surprised that Wyden’s office was denied the request, citing the sensitivity of a company’s FedRAMP security package.

The people said that companies going through the certification process have to provide highly technical details about the security of their product, which if exposed would almost certainly be damaging to the company. Knowing where security weaknesses might be could tip off cyber-criminals, one of the people said. Companies often spend millions on improving their security ahead of a FedRAMP audit but companies wouldn’t risk going through the certification if they thought their trade secrets would get leaked, they added.

When asked by GSA why it objected to Wyden’s request, Zoom’s head of U.S. government relations Lauren Belive argued that handing over the security package “would set a dangerous precedent that would undermine the special trust and confidence” that companies place in the FedRAMP process.

GSA puts strict controls on who can access a FedRAMP security package. You need a federal government or military email address, which the senator’s office has. But the reason for GSA denying Wyden’s request still isn’t clear, and when reached, a GSA spokesperson would not explain how a member of Congress could obtain a company’s FedRAMP security package.

“GSA values its relationship with Congress and will continue to work with Senator Wyden and our committees of jurisdiction to provide appropriate information regarding our programs and operations,” said GSA spokesperson Christina Wilkes, adding:

“GSA works closely with private sector partners to provide a standardized approach to security authorizations for cloud services through the [FedRAMP]. Zoom’s FedRAMP security package and related documents provide detailed information regarding the security measures associated with the Zoom for Government product. GSA’s consistent practice with regard to sensitive security and trade secret information is to withhold the material absent an official written request of a congressional committee with jurisdiction, and pursuant to controls on further dissemination or publication of the information.”

GSA wouldn’t say which congressional committee had jurisdiction or whether Wyden’s role as chair of the Senate Finance Committee suffices, nor would the agency answer questions about the efficacy of the FedRAMP process raised by Wyden.

Zoom spokesperson Kelsey Knight said that cloud companies like Zoom “provide proprietary and confidential information to GSA as part of the FedRAMP authorization process with the understanding that it will be used only for their use in making authorization decisions. While we do not believe Zoom’s FedRAMP security package should be disclosed outside of this narrow purpose, we welcome conversations with lawmakers and other stakeholders about the security of Zoom for Government.”

Zoom said it has “engaged in security enhancements to continually improve its products,” and received FedRAMP reauthorization in 2020 and 2021 as part of its annual renewal. The company declined to say to what extent the Zoom app was audited as part of the FedRAMP process.

Over two dozen federal agencies use Zoom, including the Defense Department, Homeland Security, U.S. Customs and Border Protection, and the Executive Office of the President.

Swiss Post acquires e2e encrypted cloud services provider, Tresorit

Swiss Post, the former state-owned mail delivery firm that became a private limited company in 2013, has acquired a majority stake in Swiss-Hungarian startup Tresorit, an early European pioneer in end-to-end encrypted cloud services. Since privatization, the company has diversified into logistics, finance, transport and more (including dabbling in drone delivery) while retaining its role as Switzerland’s national postal service.

Terms of the acquisition are not being disclosed. But Swiss Post’s income has been falling in recent years, as (snailmail) letter volumes continue to decline. And a 2019 missive warned its business needed to find new sources of income.

Tresorit, meanwhile, last raised back in 2018 — when it announced an €11.5M Series B round, with investors including 3TS Capital Partners and PortfoLion. Other backers of the startup include business angels and serial entrepreneurs like Márton Szőke, Balázs Fejes and Andreas Kemi. According to Crunchbase, Tresorit had raised less than $18M over its decade-plus run.

It looks like a measure of the rising store being put on data security that a veteran ‘household’ brand like Swiss Post sees strategic value in extending its suite of digital services with the help of a trusted startup in the e2e encryption space.

‘Zero access’ encryption was still pretty niche back when Tresorit got going over a decade ago but it’s essentially become the gold standard for trusted information security, with a variety of players now offering e2e encrypted services — to businesses and consumers.

Announcing the acquisition in a press release today, the pair said they will “collaborate to further develop privacy-friendly and secure digital services that enable people and businesses to easily exchange information while keeping their data secure and private”.

Tresorit will remain an independent company within Swiss Post Group, continuing to serve its global target regions of EU countries, the UK and the US, with the current management (founders), brand and service also slated to remain unchanged, per the announcement.

The 2011-founded startup sells what it brands as “ultra secure” cloud services — such as storage, file syncing and collaboration — targeted at business users (it has 10,000+ customers globally); all zipped up with a ‘zero access’ promise courtesy of a technical architecture that means Tresorit literally can’t decrypt customer data because it does not hold the encryption keys.
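The ‘zero access’ idea is that encryption, and custody of the keys, both happen on the client, so the server only ever stores ciphertext. A toy sketch: the SHA-256 counter-mode stream cipher below is an illustration standing in for the AES-style client-side encryption a real service would use, and the class and names are invented for demonstration:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode) -- illustration only.
    # Symmetric: calling it twice with the same key recovers the data.
    out = bytearray()
    for block_no in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block_no.to_bytes(8, "big")).digest()
        chunk = data[block_no * 32:(block_no + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

class ZeroAccessServer:
    """Stores only ciphertext; it never receives a key, so it cannot decrypt."""
    def __init__(self):
        self.blobs = {}
    def upload(self, name: str, ciphertext: bytes) -> None:
        self.blobs[name] = ciphertext
    def download(self, name: str) -> bytes:
        return self.blobs[name]

key = os.urandom(32)  # generated and kept on the client, never uploaded
server = ZeroAccessServer()
server.upload("doc", keystream_xor(key, b"quarterly figures"))

# Round trip: only the key-holding client can recover the plaintext.
assert keystream_xor(key, server.download("doc")) == b"quarterly figures"
```

This is the architectural property behind the ‘zero access’ promise: a breach of the server, or a legal demand served on the provider, yields ciphertext but no means to read it.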

It said today that the acquisition will strengthen its business by supporting further expansion in core markets — including Germany, Austria and Switzerland. (The Swiss Post brand should obviously be a help there.)

The pair also said they see potential for Tresorit’s tech to expand Swiss Post’s existing digital product portfolio — which includes services like a “digital letter box” app (ePost) and an encrypted email offering. So it’s not starting from scratch here.

Commenting on the acquisition in a statement, Istvan Lam, co-founder and CEO of Tresorit, said: “From the very beginning, our mission has been to empower everyone to stay in control of their digital valuables. We are proud to have found a partner in Swiss Post who shares our values on security and privacy and makes us even stronger. We are convinced that this collaboration strengthens both companies and opens up new opportunities for us and our customers.”

Asked why the startup decided to sell at this point in its business development — rather than taking another path, such as an IPO and going public — Lam flagged Swiss Post’s ‘trusted’ brand and what he dubbed a “100% fit” on values and mission.

“Tresorit’s latest investment, our biggest funding round, happened in 2018. As usual with venture capital-backed companies, the lifecycle of this investment round is now beginning to come to an end,” he told TechCrunch.

“Going public via an IPO has also been on our roadmap and could have been a realistic scenario within the next 3-4 years. The reason we have decided to partner now with a strategic investor and collaborate with Swiss Post is that their core values and vision on data privacy is a 100% fit with our values and mission of protecting privacy. With the acquisition, we entered a long-term strategic partnership and are convinced that with Tresorit’s end-to-end encryption technology and the trusted brand of Swiss Post we will further develop services that help individuals and businesses exchange information securely and privately.”

“Tresorit has paved the way for true end-to-end encryption across the software industry over the past decade. With the acquisition of Tresorit, we are strategically expanding our competencies in digital data security and digital privacy, allowing us to further develop existing offers,” added Nicole Burth, a member of the Swiss Post Group executive board and head of communication services, in a supporting statement.

Switzerland remains a bit of a hub for pro-privacy startups and services, owing to a historical reputation for strong privacy laws.

However, as Republik reported earlier this year, state surveillance activity in the country has been stepping up — following a 2018 amendment to legislative powers that expanded intercept capabilities to cover digital comms.

Such encroachments are worrying but may arguably make e2e encryption even more important — as it can offer a technical barrier against state-sanctioned privacy intrusions.
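To make that point concrete, here is a minimal, purely illustrative sketch of the end-to-end principle: the key is generated and held on the client, so a storage provider (or anyone who compels it) only ever sees ciphertext. This is a toy one-time pad for illustration only, not Tresorit's actual protocol.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad cipher: XOR each byte with a same-length random key."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"quarterly figures"
key = secrets.token_bytes(len(message))   # generated client-side, never uploaded
ciphertext = xor_bytes(message, key)      # this is all the server ever stores

# Only the key holder can recover the plaintext; XOR is its own inverse.
assert xor_bytes(ciphertext, key) == message
```

The design point is the key's location, not the cipher: whatever real scheme a provider uses, as long as decryption keys stay on user devices, handing over server-side data yields nothing readable.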

At the same time, there is a risk that legislators perceive rising use of robust encryption as a threat to national security interests and their associated surveillance powers — meaning they could seek to counter the trend by passing even more expansive legislation that directly targets, or even outlaws, the use of e2e encryption. (Australia has passed an anti-encryption law, for instance, while the UK cemented its mass surveillance capabilities back in 2016 — passing legislation which includes powers to compel companies to limit the use of encryption.)

At the European Union level, lawmakers have also recently been pushing an agenda of ‘lawful access’ to encrypted data — while simultaneously claiming to support the use of encryption on data security and privacy grounds. Quite how the EU will square that circle in legislative terms remains to be seen.

But there are also some legal tailwinds for European encryption startups like Tresorit: A ruling last summer by Europe’s top court dialled up the complexity of taking users’ personal data out of the region — certainly when people’s information is flowing to third countries like the US where it’s at risk from state agencies’ mass surveillance.

Asked if Tresorit has seen a rise in interest in the wake of the ‘Schrems II’ ruling, Lam told us: “We see the demand for European-based SaaS cloud services growing in the future. Being a European-based company has already been an important competitive advantage for us, especially among our business and enterprise customers.”

EU law in this area contains a quirk whereby the national security powers of Member States are not scrutinized as closely as those of third countries. And while Switzerland is not an EU Member it remains a closely associated country, being part of the bloc’s single market.

Nevertheless, questions over the sustainability of Switzerland’s EU data adequacy decision persist, given concerns that its growing domestic surveillance regime does not provide individuals with adequate redress remedies — and may therefore be violating their fundamental rights.

If Switzerland loses EU data adequacy it could impact the compliance requirements of digital services based in the country — albeit, again, e2e encryption could offer Swiss companies a technical means of sidestepping such legal uncertainty. So that still looks like good news for companies like Tresorit.

 

3 issues to resolve before switching to a subscription business model

In my role at CloudBlue, I am often approached by Fortune 500 companies for help solving technology challenges while they shift to a subscription business model — only for them to realize that they have not taken the crucial organizational steps necessary to ensure a successful transition.

Subscriptions scale better, enhance customer experience and hold the promise of recurring and more predictable revenue streams — a pretty enticing prospect for any business. This business model is predominant in software as a service (SaaS), but it is hard to find an industry that doesn’t have a successful subscription story. A growing number of companies in sectors ranging from automotive, airlines, gaming and health to wellness, education, professional development and home maintenance have been introducing subscription services in recent years.


However, businesses should be aware that the subscription model involves much more than simply putting a monthly or annual price tag on their offering. Executives cannot just layer a subscription model on top of an existing business. They need to change the entire operating process, onboard all stakeholders, recalibrate their strategy and create a subscription culture.

While 70% of business leaders believe subscriptions will be key to their future, only 55% of companies believe they’re ready for the transition. Technology is an enabler, but before discussing it, companies should first address the following core issues in order to plan and execute the switch to a recurring revenue model holistically.

Get internal stakeholders involved

Legacy companies accustomed to pay-as-you-go models may assume shifting to a subscription model is just a sales issue. They are wrong. Such a migration will affect nearly all departments across an organization, from product development and manufacturing to finance, sales, marketing and customer service. Leaders must therefore get all stakeholders motivated for the change and empower them to actively prepare for the transformation. The better you prepare, the smoother the transition.

But people naturally resist change, even when it is for their own good. So it can be a formidable task to secure the cooperation of all internal stakeholders, who, depending on the size of your company, could number in the thousands.