Marketers should plan for more DIY metrics as iOS 15 nears

Apple is planning to remove developer access to important user data as part of its iOS 15 release on Monday, leaving email marketers scrambling to work out how they will measure their campaigns. To find out how the industry is approaching this problem, we spoke with Vivek Sharma, CEO of Movable Ink, a software company that helps marketers act on the data they’re collecting.

This conversation builds on our Extra Crunch post from August exploring how email marketers can prepare for Apple’s Mail Privacy Protection changes.

The game-changer for email marketers with this update is that as an Apple Mail user, you’ll have the option to hide your IP address.

How can marketers pivot their tactics to remain in control of their metrics? Sharma feels we’ll see more focus on downstream metrics rather than the open rate — on clicks, conversions and revenue. “That sounds great and everything, but you have less of that data. But by definition, that funnel kind of narrows; there are fewer people to get to at that point, so it might take you longer to know if something is working or not working for you.”
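To put that shift in concrete terms, here is a minimal sketch (with invented campaign numbers) of the kind of down-funnel metrics Sharma is describing: click rate, conversion rate and revenue per delivered email, none of which depend on Apple’s open pixel.

```python
# Illustrative only: down-funnel email metrics that don't rely on open tracking.
# All of the campaign numbers below are made up for the example.
campaign = {
    "delivered": 50_000,   # emails that reached an inbox
    "clicks": 1_200,       # unique recipients who clicked a link
    "conversions": 180,    # clickers who completed a purchase
    "revenue": 9_450.00,   # revenue attributed to the campaign
}

click_rate = campaign["clicks"] / campaign["delivered"]
conversion_rate = campaign["conversions"] / campaign["clicks"]
revenue_per_email = campaign["revenue"] / campaign["delivered"]

print(f"Click rate:        {click_rate:.2%}")       # 2.40%
print(f"Conversion rate:   {conversion_rate:.2%}")  # 15.00%
print(f"Revenue per email: ${revenue_per_email:.4f}")  # $0.1890
```

As Sharma notes, the trade-off is volume: there are far fewer events at the bottom of the funnel than at the top, so it takes longer to tell whether a campaign is actually working.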

Sharma says zero-party data is something that businesses have been focused on. “There are two components: There’s ‘open’ as a metric, and there’s some of the information you’re getting at open time, like the IP address, the time of day, and the inferred weather. Things like the IP address, time of day, etc. are perceived as data leakage. These are just a couple of the data points that marketers will lose access to. Therefore they are using first- and zero-party data, which they have already been investing in.”

The challenge, according to Sharma, is: How can marketers collect zero-party data in an interesting, visually appealing way, and then personalize its contents for every customer at scale?

One way that Movable Ink has collected zero-party data is displayed below:

Image Credits: Movable Ink

Sharma says, “Everything in here is a polling question: ‘What do you typically shop for?’ ‘What’s your shoe size?’ And they’re giving you loyalty points in return, so there’s an exchange of value happening here. They’re learning about you in a clear way and giving you an easy way to engage with the brand you’re interested in.”

Once you have the data, the question is: How do you use it? Below we see an example from JetBlue.

Image Credits: Movable Ink

Sharma outlines three takeaways from iOS 15 for email marketers:

  1. Focus on down-funnel metrics like clicks and conversions — that’s what it really comes down to, and it’s the truest indicator of engagement.
  2. Invest in your zero- and first-party data assets. True personalization is about what people actually experience and see, and that’s something you can deliver with the data you’ve collected directly.
  3. Email remains a great channel for engaging customers — a mature, well-invested one that’s especially suited to building a one-to-one relationship. It has gone through lots of changes over the last 10 or 15 years, and the industry will keep evolving to find the balance between privacy and personalization.

Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order — removing more than half a million accounts whose owners it could not confirm were old enough to be on the platform.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, putting an age threshold on children’s ability to consent to their data being used. The limit varies per EU Member State, but no country may set it lower than 13 years old (some EU countries set the age limit at 16).

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it’s made around children’s accounts and data — such as making younger teens’ accounts private by default and limiting their exposure to features that encourage interaction with other TikTok users unless the account holder is over 16.

On international data transfers, TikTok claims to use “approved methods”. However, the picture is rather more complicated than its statement implies: transfers of Europeans’ data to China are made harder by the absence of an EU data adequacy agreement with China.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark ruling by the CJEU last year which invalidated a flagship data transfer arrangement between the US and the EU and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So while the CJEU did not invalidate mechanisms like SCCs entirely, it essentially said all international transfers to third countries must be assessed on a case-by-case basis and, where a DPA has concerns, it must step in and suspend insecure data flows.

The CJEU ruling means the mere use of a mechanism like SCCs says nothing on its own about the legality of a particular data transfer. It also ramps up the pressure on EU agencies like Ireland’s DPC to be proactive about assessing risky data flows.

Final guidance put out by the European Data Protection Board earlier this year provides details on the so-called ‘supplementary measures’ that a data controller may be able to apply in order to increase the level of protection around a specific transfer so the information can be legally taken to a third country.

But these measures can include technical steps like strong encryption — and it’s not clear how a social media company like TikTok could apply such a fix, given that its platform and algorithms continuously mine users’ data to customize the content people see and keep them engaged with TikTok’s ad platform.

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to the EU’s enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company — announcing a $267M fine against WhatsApp for breaching GDPR transparency rules (but only doing so years after the first complaints had been lodged).

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year — when it fined Twitter $550k over a data breach dating back to 2018, the year GDPR technically began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means that the new TikTok probes join the back of a much criticized bottleneck. And a decision on these probes isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data — and, as of this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.

Biden’s new FTC nominee is a digital privacy advocate critical of Big Tech

President Biden made his latest nomination to the Federal Trade Commission this week, tapping digital privacy expert Alvaro Bedoya to join the agency as it takes a hard look at the tech industry.

Bedoya is the founding director of the Center on Privacy & Technology at Georgetown’s law school and previously served as chief counsel for former Senator Al Franken and the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Bedoya has worked on legislation addressing some of the most pressing privacy issues in tech, including stalkerware and facial recognition systems.

In 2016, Bedoya co-authored “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” a year-long investigation into police use of facial recognition systems in the U.S. The report examined law enforcement’s reliance on facial recognition systems and biometric databases at the state level, arguing that regulations are desperately needed to curtail potential abuses and algorithmic failures before the technology inevitably becomes even more commonplace.

Bedoya also isn’t shy about calling out Big Tech. In a New York Times op-ed a few years ago, he took aim at Silicon Valley companies giving user privacy lip service in public while quietly funneling millions toward lobbyists to undermine consumer privacy. The new FTC nominee singled out Facebook specifically, pointing to the company’s efforts to undermine the Illinois Biometric Information Privacy Act, a state law that serves as one of the only meaningful checks on invasive privacy practices in the U.S.

Bedoya argued that the tech industry would have an easier time shaping a single, sweeping piece of privacy regulation with its lobbying efforts than a flurry of targeted, smaller bills. Antitrust advocates in Congress taking aim at tech today seem to have learned the same lesson.

“We cannot underestimate the tech sector’s power in Congress and in state legislatures,” Bedoya wrote. “If the United States tries to pass broad rules for personal data, that effort may well be co-opted by Silicon Valley, and we’ll miss our best shot at meaningful privacy protections.”

If confirmed, Bedoya would join big tech critic Lina Khan, a recent Biden FTC nominee who now chairs the agency. Khan’s focus on antitrust and Amazon in particular would dovetail with Bedoya’s focus on adjacent privacy concerns, making the pair a formidable regulatory presence as the Biden administration seeks to rein in some of the tech industry’s most damaging excesses.

Amazon partners with AXS to install Amazon One palm readers at entertainment venues

Amazon’s biometric scanner for retail, the Amazon One palm reader, is expanding beyond the e-commerce giant’s own stores. The company announced today it has signed its first third-party customer, ticketing company AXS, which will implement the Amazon One system at Denver, Colorado’s Red Rocks Amphitheatre as an option for contactless entry for event-goers.

This is the first time the Amazon One system will be used outside an Amazon-owned retail store, and the first time it’s used for entry into an entertainment venue. Amazon says it expects AXS to roll out the system to more venues in the future, but didn’t offer any specifics as to which ones or when.

At Red Rocks, guests will be able to associate their AXS Mobile ID with Amazon One at dedicated stations before they enter the amphitheatre, or they can enroll at a second station once inside in order to use the reader at future AXS events. The enrollment process takes about a minute and customers can choose to enroll either one or both palms. Once set up, ticketholders can use a dedicated entry line for Amazon One users.

“We are proud to work with Amazon to continue shaping the future of ticketing through cutting-edge innovation,” said Bryan Perez, CEO of AXS, in a statement. “We are also excited to bring Amazon One to our clients and the industry at a time when there is a need for fast, convenient, and contactless ticketing solutions. At AXS, we are continually deploying new technologies to develop secure and smarter ticketing offerings that improve the fan experience before, during, and after events,” he added.

Image Credits: Amazon

Amazon’s palm reader was first introduced amid the pandemic in September 2020, as a way for shoppers to pay at Amazon Go convenience stores using their palm. To use the system, customers would first insert their credit card then hover their palm over the device to associate their unique palm print with their payment mechanism. After setup, customers could enter the store just by holding their palm above the biometric scanner for a second or so. Amazon touted the system as a safer, “contactless” means of payment, as customers aren’t supposed to actually touch the reader. (Hopefully, that’s the case, considering the pandemic rages on.)

On the tech side, Amazon One uses computer vision technology to create the palm signatures, it said.

In the months that followed, Amazon expanded the biometric system to several more stores, including other Amazon Go convenience stores, Amazon Go Grocery stores, and its Amazon Books and Amazon 4-star stores. This April, it brought the system to select Whole Foods locations. To encourage more sign-ups, Amazon even introduced a $10 promotional credit to enroll your palm prints at its supported stores.

When palm prints are linked to Amazon accounts, the company is able to collect data from customers’ offline activity to target ads, offers, and recommendations over time. The data remains with Amazon until a customer explicitly deletes it, or until the feature has gone unused for at least two years.

While the system offers an interesting take on contactless payments, Amazon’s track record in this area has raised privacy concerns. The company had in the past sold biometric facial recognition services to law enforcement in the U.S. Its facial recognition technology was the subject of a data privacy lawsuit. And it was found to be still storing Alexa voice data even after users deleted their audio files.

Amazon has responded by noting its palm print images are encrypted and sent to a secure area built for Amazon One in the cloud, where Amazon creates the customers’ palm signatures. It has also noted that customers can unenroll from either a device or its website, one.amazon.com, once all their transactions have been processed.

20 years later, unchecked data collection is part of 9/11’s legacy

Almost every American adult remembers, in vivid detail, where they were the morning of September 11, 2001. I was on the second floor of the West Wing of the White House, at a National Economic Council Staff meeting — and I will never forget the moment the Secret Service agent abruptly entered the room, shouting: “You must leave now. Ladies, take off your high heels and go!”

Just an hour before, as the National Economic Council White House technology adviser, I was briefing the deputy chief of staff on final details of an Oval Office meeting with the president, scheduled for September 13. Finally, we were ready to get the president’s sign-off to send a federal privacy bill to Capitol Hill — effectively a federal version of the California Privacy Rights Act, but stronger. The legislation would put guardrails around citizens’ data — requiring opt-in consent for their information to be shared, governing how their data could be collected and how it would be used.

But that morning, the world changed. We evacuated the White House and the day unfolded with tragedy after tragedy sending shockwaves through our nation and the world. To be in D.C. that day was to witness and personally experience what felt like the entire spectrum of human emotion: grief, solidarity, disbelief, strength, resolve, urgency … hope.

Much has been written about September 11, but I want to spend a moment reflecting on the day after.

When the National Economic Council staff came back into the office on September 12, I will never forget what Larry Lindsey, our boss at the time, told us: “I would understand it if some of you don’t feel comfortable being here. We are all targets. And I won’t appeal to your patriotism or faith. But I will — as we are all economists in this room — appeal to your rational self-interest. If we back away now, others will follow, and who will be there to defend the pillars of our society? We are holding the line here today. Act in a way that will make this country proud. And don’t abandon your commitment to freedom in the name of safety and security.”

There is so much to be proud of about how the country pulled together and how our government responded to the tragic events of September 11. As a professional in the cybersecurity and data privacy field, however, I also reflect on Larry’s advice and on many of the critical lessons learned in the years that followed — especially when it comes to defending the pillars of our society.

Even though our collective memories of that day still feel fresh, 20 years have passed, and we now understand the vital role that data played in the months leading up to the 9/11 terrorist attacks. Unfortunately, by holding intelligence data too closely in disparate locations, we failed to connect the dots that could have saved thousands of lives. These data silos obscured the patterns that would have been clear if only a framework had been in place to share information securely.

So, we told ourselves, “Never again,” and government officials set out to increase the amount of intelligence they could gather — without thinking through significant consequences for not only our civil liberties but also the security of our data. So, the Patriot Act came into effect, with 20 years of surveillance requests from intelligence and law enforcement agencies crammed into the bill. Having been in the room for the Patriot Act negotiations with the Department of Justice, I can confidently say that, while the intentions may have been understandable — to prevent another terrorist attack and protect our people — the downstream negative consequences were sweeping and undeniable.

Domestic wiretapping and mass surveillance became the norm, chipping away at personal privacy, data security and public trust. This level of surveillance set a dangerous precedent for data privacy, meanwhile yielding marginal results in the fight against terrorism.

Unfortunately, the federal privacy bill that we had hoped to bring to Capitol Hill the very week of 9/11 — the bill that would have solidified individual privacy protections — was mothballed.

Over the subsequent years, it became easier and cheaper to collect and store massive amounts of surveillance data. As a result, tech and cloud giants quickly scaled up and dominated the internet. As more data was collected (both by the public and the private sectors), more and more people gained visibility into individuals’ private data — but no meaningful privacy protections were put in place to accompany that expanded access.

Now, 20 years later, we find ourselves with a glut of unfettered data collection and access, with behemoth tech companies and IoT devices collecting data points on our movements, conversations, friends, families and bodies. Massive and costly data leaks — whether from ransomware or simply misconfiguring a cloud bucket — have become so common that they barely make the front page. As a result, public trust has eroded. While privacy should be a human right, it’s not one that’s being protected — and everyone knows it.

This is evident in the humanitarian crisis we have seen in Afghanistan. Just one example: Tragically, the Taliban have seized U.S. military devices that contain biometric data on Afghan citizens who supported coalition forces — data that would make it easy for the Taliban to identify and track down those individuals and their families. This is a worst-case scenario of sensitive, private data falling into the wrong hands, and we did not do enough to protect it.

This is unacceptable. Twenty years later, we are once again telling ourselves, “Never again.” 9/11 should have been a reckoning of how we manage, share and safeguard intelligence data, but we still have not gotten it right. And in both cases — in 2001 and 2021 — the way we manage data has a life-or-death impact.

This is not to say we aren’t making progress: The White House and U.S. Department of Defense have turned a spotlight on cybersecurity and Zero Trust data protection this year, with an executive order to spur action toward fortifying federal data systems. The good news is that we have the technology we need to safeguard this sensitive data while still making it shareable. In addition, we can put contingency plans in place to prevent data from falling into the wrong hands. But, unfortunately, we just aren’t moving fast enough — and the slower we solve this problem of secure data management, the more innocent lives will be lost along the way.

Looking ahead to the next 20 years, we have an opportunity to rebuild trust and transform the way we manage data privacy. First and foremost, we have to put some guardrails in place. We need a privacy framework that gives individuals autonomy over their own data by default.

This, of course, means that public- and private-sector organizations have to do the technical, behind-the-scenes work to make this data ownership and control possible, tying identity to data and granting ownership back to the individual. This is not a quick or simple fix, but it’s achievable — and necessary — to protect our people, whether U.S. citizens, residents or allies worldwide.

To accelerate the adoption of such data protection, we need an ecosystem of free, accessible and open source solutions that are interoperable and flexible. By layering data protection and privacy in with existing processes and solutions, government entities can securely collect and aggregate data in a way that reveals the big picture without compromising individuals’ privacy. We have these capabilities today, and now is the time to leverage them.

Because the truth is, with the sheer volume of data that’s being gathered and stored, there are far more opportunities for American data to fall into the wrong hands. The devices seized by the Taliban are just a tiny fraction of the data that’s currently at stake. As we’ve seen so far this year, nation-state cyberattacks are escalating. This threat to human life is not going away.

Larry’s words from September 12, 2001, still resonate: If we back away now, who will be there to defend the pillars of our society? It’s up to us — public- and private-sector technology leaders — to protect and defend the privacy of our people without compromising their freedoms.

It’s not too late for us to rebuild public trust, starting with data. But, 20 years from now, will we look back on this decade as a turning point in protecting and upholding individuals’ right to privacy, or will we still be saying, “Never again,” again and again?

WhatsApp will finally let users encrypt their chat backups in the cloud

WhatsApp said on Friday it will give its two billion users the option to encrypt their chat backups to the cloud, taking a significant step to put a lid on one of the tricky ways private communication between individuals on the app can be compromised.

The Facebook-owned service has offered end-to-end encrypted chats between users for years. But users who back up their chat history to the cloud — iCloud on iPhones and Google Drive on Android — have had no option but to store those backups in an unencrypted format.

Tapping these unencrypted WhatsApp chat backups on Google and Apple servers is one of the widely known ways law enforcement agencies across the globe have been able to access WhatsApp chats of suspect individuals for years.

Now WhatsApp says it is patching this weak link in the system.

“WhatsApp is the first global messaging service at this scale to offer end-to-end encrypted messaging and backups, and getting there was a really hard technical challenge that required an entirely new framework for key storage and cloud storage across operating systems,” Facebook chief executive Mark Zuckerberg said in a post announcing the new feature.

Store your own encryption keys

The company said it has devised a system to enable WhatsApp users on Android and iOS to lock their chat backups with encryption keys. WhatsApp says it will offer users two ways to encrypt their cloud backups, and the feature is optional.

In the “coming weeks,” users on WhatsApp will see an option to generate a 64-digit encryption key to lock their chat backups in the cloud. Users can store the encryption key offline or in a password manager of their choice, or they can create a password that backs up their encryption key in a cloud-based “backup key vault” that WhatsApp has developed. The cloud-stored encryption key can’t be used without the user’s password, which isn’t known by WhatsApp.

(Image: WhatsApp/supplied)

“We know that some will prefer the 64-digit encryption key whereas others want something they can easily remember, so we will be including both options. Once a user sets their backup password, it is not known to us. They can reset it on their original device if they forget it,” WhatsApp said.

“For the 64-digit key, we will notify users multiple times when they sign up for end-to-end encrypted backups that if they lose their 64-digit key, we will not be able to restore their backup and that they should write it down. Before the setup is complete, we’ll ask users to affirm that they’ve saved their password or 64-digit encryption key.”

A WhatsApp spokesperson told TechCrunch that once an encrypted backup is created, previous copies of the backup will be deleted. “This will happen automatically and there is no action that a user will need to take,” the spokesperson added.
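For readers curious how the two options fit together, here is a minimal, hypothetical sketch in Python of the key-handling idea WhatsApp describes: either a random 64-digit key the user stores themselves, or a key derived locally from a password the user remembers. It uses only standard-library primitives and illustrates the concept rather than WhatsApp’s actual protocol, which relies on the server-side “backup key vault” the company says it built for the feature.

```python
# Hypothetical sketch of the two backup-protection options WhatsApp describes.
# This is NOT WhatsApp's implementation -- just the general key-handling idea.
import hashlib
import secrets

# Option 1: a random 64-digit key the user must store themselves (offline or in
# a password manager). If it's lost, the backup cannot be restored.
backup_key = "".join(secrets.choice("0123456789") for _ in range(64))

# Option 2: a password the user can remember. A key is derived from it locally;
# in WhatsApp's design the password itself is never known to the company -- the
# PBKDF2 call below is just a stand-in for that idea.
password = "a-password-the-user-remembers"   # placeholder value
salt = secrets.token_bytes(16)
derived_key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

print("64-digit key:", backup_key)
print("Password-derived key:", derived_key.hex())
```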

Potential regulatory pushback?

The move to introduce this added layer of privacy is significant and one that could have far-reaching implications.

End-to-end encryption remains a thorny topic of discussion as governments continue to lobby for backdoors. Apple was reportedly pressured to not add encryption to iCloud Backups after the FBI complained, and while Google has offered users the ability to encrypt their data stored in Google Drive, the company allegedly didn’t tell governments before it rolled out the feature.

When asked by TechCrunch whether WhatsApp, or its parent firm Facebook, had consulted with government bodies — or if it had received their support — during the development process of this feature, the company declined to discuss any such conversations.

“People’s messages are deeply personal and as we live more of our lives online, we believe companies should enhance the security they provide their users. By releasing this feature, we are providing our users with the option to add this additional layer of security for their backups if they’d like to, and we’re excited to give our users a meaningful advancement in the safety of their personal messages,” the company told TechCrunch.

WhatsApp also confirmed that it will be rolling out this optional feature in every market where its app is operational. It’s not uncommon for companies to withhold privacy features for legal and regulatory reasons. Apple’s upcoming encrypted browsing feature, for instance, won’t be made available to users in certain authoritarian regimes, such as China, Belarus, Egypt, Kazakhstan, Saudi Arabia, Turkmenistan, Uganda, and the Philippines.

At any rate, Friday’s announcement comes days after ProPublica reported that private end-to-end encrypted conversations between two users can be read by human contractors when messages are reported by users.

“Making backups fully encrypted is really hard and it’s particularly hard to make it reliable and simple enough for people to use. No other messaging service at this scale has done this and provided this level of security for people’s messages,” Uzma Barlaskar, product lead for privacy at WhatsApp, told TechCrunch.

“We’ve been working on this problem for many years, and to build this, we had to develop an entirely new framework for key storage and cloud storage that can be used across the world’s largest operating systems and that took time.”

Have ‘The Privacy Talk’ with your business partners

As a parent of teenagers, I’m used to having tough, sometimes even awkward, conversations about topics that are complex but important. Most parents will likely agree with me when I say those types of conversations never get easier, but over time, you tend to develop a roadmap of how to approach the subject, how to make sure you’re being clear and how to answer hard questions.

And like many parents, I quickly learned that my children have just as much to teach me as I can teach them. I’ve learned that tough conversations build trust.

I’ve applied this lesson about trust-building conversations to an extremely important aspect of my role as the chief legal officer at Foursquare: Conducting “The Privacy Talk.”

The discussion should convey an understanding of how the legislative and regulatory environment is going to affect product offerings, including what’s being done to get ahead of that change.

What exactly is The Privacy Talk?

It’s the conversation that goes beyond the written, publicly posted privacy policy and dives deep into a customer, vendor, supplier or partner’s approach to ethics. This conversation seeks to convey and align the expectations that two companies must have at the beginning of a new engagement.

RFIs may ask a lot of questions about privacy compliance, information security and data ethics. But they’re no match for asking your prospective partner to hop on a Zoom to walk you through their broader approach. Unless you hear it firsthand, it can be hard to discern whether a partner is thinking strategically about privacy, whether they are truly committed to data ethics and how compliance is woven into their organization’s culture.

What China’s new data privacy law means for US tech firms

China enacted a sweeping new data privacy law on August 20 that will dramatically impact how tech companies can operate in the country. Officially called the Personal Information Protection Law of the People’s Republic of China (PIPL), the law is the first national data privacy statute passed in China.

Modeled after the European Union’s General Data Protection Regulation, the PIPL imposes protections and restrictions on data collection and transfer that companies both inside and outside of China will need to address. It is particularly focused on apps that use personal information to target consumers or offer them different prices on products and services, and on preventing the transfer of personal information to other countries with weaker data protections.

The PIPL, slated to take effect on November 1, 2021, does not give companies a lot of time to prepare. Those that already follow GDPR practices, particularly if they’ve implemented it globally, will have an easier time complying with China’s new requirements. But firms that have not implemented GDPR practices will need to consider adopting a similar approach. In addition, U.S. companies will need to consider the new restrictions on the transfer of personal information from China to the U.S.

Implementation and compliance with the PIPL is a much more significant task for companies that have not implemented GDPR principles.

Here’s a deep dive into the PIPL and what it means for tech firms:

New data handling requirements

The PIPL introduces perhaps the most stringent set of requirements and protections for data privacy in the world (this includes special requirements relating to processing personal information by governmental agencies that will not be addressed here). The law broadly relates to all kinds of information, recorded by electronic or other means, related to identified or identifiable natural persons, but excludes anonymized information.

The following are some of the key new requirements for handling people’s personal information in China that will affect tech businesses:

Extra-territorial application of the China law

Historically, Chinese regulations have applied only to activities inside the country, and the PIPL likewise covers personal information handling activities within Chinese borders. However, similar to GDPR, it also extends to the handling of personal information outside China if any of the following conditions are met:

  • Where the purpose is to provide products or services to people inside China.
  • Where the purpose is to analyze or assess the activities of people inside China.
  • Other circumstances provided in laws or administrative regulations.

For example, if you are a U.S.-based company selling products to consumers in China, you may be subject to the China data privacy law even if you do not have a facility or operations there.

Data handling principles

The PIPL introduces principles of transparency, purpose limitation and data minimization: Companies can only collect personal information for a clear, reasonable and disclosed purpose, limited to the smallest scope necessary to realize that purpose, and may retain the data only for the period needed to fulfill it. Any information handler is also required to ensure the accuracy and completeness of the data it handles to avoid any negative impact on personal rights and interests.

UK dials up the spin on data reform, claiming ‘simplified’ rules will drive ‘responsible’ data sharing

The U.K. government has announced a consultation on plans to shake up the national data protection regime, as it looks at how to diverge from European Union rules following Brexit.

It’s also a year since the U.K. published a national data strategy in which it said it wanted pandemic levels of data sharing to become Britain’s new normal.

The Department for Digital, Culture, Media and Sport (DCMS) has today trailed an incoming reform of the information commissioner’s office — saying it wants to broaden the ICO’s remit to “champion sectors and businesses that are using personal data in new, innovative and responsible ways to benefit people’s lives”; and promising “simplified” rules to encourage the use of data for research that “benefits people’s lives”, such as in the field of healthcare.

It also wants a new structure for the regulator — including the creation of an independent board and chief executive for the ICO, to mirror the governance structures of other regulators such as the Competition and Markets Authority, Financial Conduct Authority and Ofcom.

Additionally, it said the data reform consultation will consider how the new regime can help mitigate the risks around algorithmic bias — something the EU is already moving to legislate on, setting out a risk-based proposal for regulating applications of AI back in April.

Which means the U.K. risks being left lagging if it’s only going to concern itself with a narrow focus on “bias mitigation”, rather than considering the wider sweep of how AI is intersecting with and influencing its citizens’ lives.

In a press release announcing the consultation, DCMS highlights an artificial intelligence partnership involving Moorfields Eye Hospital and the University College London Institute of Ophthalmology, which kicked off back in 2016, as an example of the kinds of beneficial data sharing it wants to encourage. Last year the researchers reported that their AI had been able to predict the development of wet age-related macular degeneration more accurately than clinicians.

The partnership also involved (Google-owned) DeepMind and now Google Health — although the government’s PR doesn’t make mention of the tech giant’s involvement. It’s an interesting omission, given that DeepMind’s name is also attached to a notorious U.K. patient data-sharing scandal, which saw another London-based NHS Trust (the Royal Free) sanctioned by the ICO, in 2017, for improperly sharing patient data with the Google-owned company during the development phase of a clinician support app (which Google is now in the process of discontinuing).

DCMS may be keen to avoid spelling out that its goal for the data reforms — aka to “remove unnecessary barriers to responsible data use” — could end up making it easier for commercial entities like Google to get their hands on U.K. citizens’ medical records.

The sizeable public backlash over the most recent government attempt to requisition NHS users’ medical records — for vaguely defined “research” purposes (aka the “General Practice Data for Planning and Research”, or GPDPR, scheme) — suggests that a government-enabled big-health-data-free-for-all might not be so popular with U.K. voters.

“The government’s data reforms will provide clarity around the rules for the use of personal data for research purposes, laying the groundwork for more scientific and medical breakthroughs,” is how DCMS’ PR skirts the sensitive health data sharing topic.

Elsewhere there’s talk of “reinforc[ing] the responsibility of businesses to keep personal information safe, while empowering them to grow and innovate” — so that sounds like a yes to data security but what about individual privacy and control over what happens to your information?

The government seems to be saying that will depend on other aims — principally economic interests attached to the U.K.’s ability to conduct data-driven research or secure trade deals with other countries that don’t have the same (current) high U.K. standards of data protection.

There are some purely populist flourishes here too — with DCMS couching its ambition for a data regime “based on common sense, not box ticking” — and flagging up plans to beef up penalties for nuisance calls and text messages. Because, sure, who doesn’t like the sound of a crackdown on spam?

Except spam text messages and nuisance calls are a pretty quaint concern to zero in on in an era of apps and data-driven, democracy-disrupting mass surveillance — which was something the outgoing information commissioner raised as a major issue of concern during her tenure at the ICO.

The same populist anti-spam messaging has already been deployed by ministers to attack the need to obtain internet users’ consent for dropping tracking cookies — which the digital minister Oliver Dowden recently suggested he wants to do away with — for all but “high risk” purposes.

Having a system of rights wrapping people’s data that gives them a say over (and a stake in) how it can be used appears to be being reframed in the government’s messaging as irresponsible or even non-patriotic — with DCMS pushing the notion that such rights stand in the way of more important economic or highly generalized “social” goals.

Not that it has presented any evidence for that — or even that the U.K.’s current data protection regime got in the way of (the very ample) data sharing during COVID-19… Meanwhile, negative uses of people’s information are condensed in DCMS’ messaging to the narrowest possible definition — spam that’s visible to an individual — never mind how that person got targeted with the nuisance calls and spam texts in the first place.

The government is taking its customary “cake and eat it” approach to spinning its reform plan: claiming it will “protect” people’s data while trumpeting the importance of making it really easy for citizens’ information to be handed off to anyone who wants it, so long as they can claim they’re doing some kind of “innovation”, and larding its PR with canned quotes dubbing the plan “bold” and “ambitious”.

So while DCMS’ announcement says the reform will “maintain” the U.K.’s (currently) world-leading data protection standards, it directly rows back — saying the new regime will (merely) “build on” a few broad-brush “key elements” of the current rules (specifically it says it will keep “principles around data processing, people’s data rights and mechanisms for supervision and enforcement”).

Clearly the devil will be in the detail of the proposals which are due to be published tomorrow morning. So expect more analysis to debunk the spin soon.

But in one specific trailed change DCMS says it wants to move away from a “one-size-fits-all” approach to data protection compliance — and “allow organisations to demonstrate compliance in ways more appropriate to their circumstances, while still protecting citizens’ personal data to a high standard”.

That implies that smaller data-mining operations — DCMS’s PR uses the example of a hairdresser’s but plenty of startups can employ fewer staff than the average barber’s shop — may be able to expect to get a pass to ignore those ‘high standards’ in the future.

Which suggests the U.K.’s “high standards” may, under Dowden’s watch, end up resembling more of a Swiss Cheese…

Data protection is a “how to, not a don’t do”…

The man who is likely to become the U.K.’s next information commissioner, New Zealand’s privacy commissioner John Edwards, was taking questions from a parliamentary committee earlier today, as MPs considered whether to support his appointment to the role.

If he’s confirmed in the job, Edwards will be responsible for implementing whatever new data regime the government cooks up.

Under questioning, he rejected the notion that the U.K.’s current data protection regime presents a barrier to data sharing — arguing that laws like GDPR should rather be seen as a “how to” and an “enabler” for innovation.

“I would take issue with the dichotomy that you presented [about privacy vs data-sharing],” he told the committee chair. “I don’t believe that policymakers and businesses and governments are faced with a choice of share or keep faith with data protection. Data protection laws and privacy laws would not be necessary if it wasn’t necessary to share information. These are two sides of the same coin.

“The UK DPA [data protection act] and UK GDPR they are a ‘how to’ — not a ‘don’t do’. And I think the UK and many jurisdictions have really finally learned that lesson through the COVID-19 crisis. It has been absolutely necessary to have good quality information available, minute by minute. And to move across different organizations where it needs to go, without friction. And there are times when data protection laws and privacy laws introduce friction and I think that what you’ve seen in the UK is that when it needs to things can happen quickly.”

He also suggested that plenty of economic gains could be achieved for the U.K. with some minor tweaks to current rules, rather than a more radical reboot being necessary. (Though clearly setting the rules won’t be up to him; his job will be enforcing whatever new regime is decided.)

“If we can, in the administration of a law which at the moment looks very much like the UK GDPR, that gives great latitude for different regulatory approaches — if I can turn that dial just a couple of points that can make the difference of billions of pounds to the UK economy and thousands of jobs so we don’t need to be throwing out the statute book and starting again — there is plenty of scope to be making improvements under the current regime,” he told MPs. “Let alone when we start with a fresh sheet of paper if that’s what the government chooses to do.”

TechCrunch asked another Edwards (no relation) — Newcastle University’s Lilian Edwards, professor of law, innovation and society — for her thoughts on the government’s direction of travel, as signalled by DCMS’ pre-proposal-publication spin, and she expressed similar concerns about the logic driving the government to argue it needs to rip up the existing standards.

“The entire scheme of data protection is to balance fundamental rights with the free flow of data. Economic concerns have never been ignored, and the current scheme, which we’ve had in essence since 1998, has struck a good balance. The great things we did with data during COVID-19 were done completely legally — and with no great difficulty under the existing rules — so that isn’t a reason to change them,” she told us.

She also took issue with the plan to reshape the ICO “as a quango whose primary job is to ‘drive economic growth’ ” — pointing out that DCMS’ PR fails to include any mention of privacy or fundamental rights, and arguing that “creating an entirely new regulator isn’t likely to do much for the ‘public trust’ that’s seen as declining in almost every poll.”

She also suggested the government is glossing over the real economic damage that would hit the U.K. if the EU decides its “reformed” standards are no longer essentially equivalent to the bloc’s. “[It’s] hard to see much concern for adequacy here; which will, for sure, be reviewed, to our detriment — prejudicing 43% of our trade for a few low value trade deals and some hopeful sell offs of NHS data (again, likely to take a wrecking ball to trust judging by the GPDPR scandal).”

She described the goal of regulating algorithmic bias as “applaudable” — but also flagged the risk of the U.K. falling behind other jurisdictions which are taking a broader look at how to regulate artificial intelligence.

Per DCMS’ press release, the government seems to be intending for an existing advisory body, called the Centre for Data Ethics and Innovation (CDEI), to have a key role in supporting its policymaking in this area — saying that the body will focus on “enabling trustworthy use of data and AI in the real-world”. However it has still not appointed a new CDEI chair to replace Roger Taylor — with only an interim chair appointment (and some new advisors) announced today.

“The world has moved on since CDEI’s work in this area,” argued Edwards. “We realise now that regulating the harmful effects of AI has to be considered in the round with other regulatory tools not just data protection. The proposed EU AI Regulation is not without flaw but goes far further than data protection in mandating better quality training sets, and more transparent systems to be built from scratch. If the UK is serious about regulating it has to look at the global models being floated but right now it looks like its main concerns are insular, short-sighted and populist.”

Patient data privacy advocacy group MedConfidential, which has frequently locked horns with the government over its approach to data protection, also queried DCMS’ continued attachment to the CDEI for shaping policymaking in such a crucial area — pointing to last year’s biased algorithm exam grading scandal, which happened under Taylor’s watch.

(NB: Taylor was also the Ofqual chair, and his resignation from that post in December cited a “difficult summer”, even as his departure from the CDEI leaves an awkward hole now… )

“The culture and leadership of CDEI led to the A-Levels algorithm, why should anyone in government have any confidence in what they say next?” said MedConfidential’s Sam Smith.

UK offers cash for CSAM detection tech targeted at E2E encryption

The U.K. government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual exploitation material (CSAM) that can be bolted on to end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584,000) to five organizations (£85,000/$117,000 each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within E2E-encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of E2E encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp also already uses E2E encryption so that platform is already a clear target for whatever “safety” technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use E2E encryption.

So there is potential for very widespread application of any “child safety tech” developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high-level pressure on the tech sector on this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in a Tory-friendly newspaper, The Telegraph, she trails a meeting she’ll be chairing today where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on user’s devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a “privacy-focused” company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states that might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into E2E encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support “middleground” solutions — denying the government is trying to encourage technologists to come up with ways to backdoor E2E encryption.

In recent years in the U.K. GCHQ has also floated the controversial idea of a so-called “ghost protocol” — that would allow for state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of “middleground” technologies that are able to scan E2E-encrypted content for specifically illegal stuff.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of E2E encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”
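For a sense of what “hash-based detection” of previously known material means in practice, here is a deliberately simplified, hypothetical Python sketch: compute a digest of a file and check it against a list of known hashes. Real systems use perceptual hashing so that resized or re-encoded copies still match, and the unsolved challenge the fund is targeting is how to do any such matching inside an E2E-encrypted environment without breaking its guarantees. The hash value and helper function below are placeholders, not any vendor’s actual API.

```python
# Simplified, hypothetical illustration of hash-based detection of previously
# known material. An exact SHA-256 match only catches byte-identical files;
# production systems rely on perceptual hashes that survive re-encoding.
import hashlib
from pathlib import Path

KNOWN_HASHES = {
    # In practice this would be a vetted database maintained by child-safety
    # organisations; the value below is a made-up placeholder.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_material(path: Path) -> bool:
    """Return True if the file's SHA-256 digest appears in the known-hash list."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES
```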

General information about the Challenge — which is open to applicants based anywhere, not just in the U.K. — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that by keeping up the pressure on the tech sector platform giants will develop this stuff themselves, as Apple has been.

The Challenge is just the latest U.K. government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for E2E encryption to be outlawed, as they frequently have in the past.

That said, talk of “preventing” the use of E2E encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety bill, published earlier this year — have yet to go through parliamentary scrutiny. But in one already baked in change, the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms need to prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the U.K.’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.