Launched with $17 million by two former Norwest investors, Tau Ventures is ready for its close-up

Amit Garg and Sanjay Rao have spent the bulk of their professional lives developing technology, founding startups and investing in startups at places like Google, Microsoft, HealthIQ and Norwest Venture Partners.

Over their decade-long friendship, the two men discussed working together on a venture fund, but the time was never right — until now. Since last August, the pair have been raising capital for their inaugural fund, Tau Ventures.

The name, like the two partners, is a bit wonky. Tau is two times pi and Garg and Rao chose it as the name for the partnership because it symbolizes their analytical approach to very early stage investing.

It’s a strange thing to launch a venture fund in a pandemic, but for Garg and Rao, the opportunity to provide very early stage investment capital into startups working on machine learning applications in healthcare, automation and business was too good to pass up.

Garg had spent twenty years in Silicon Valley working at Google and launching companies including HealthIQ. Over the years he’d amassed an investment portfolio that included the autonomous vehicle company Nutonomy, as well as BioBeats, Glooko, Cohero Health, Terapede, Figure1, HealthifyMe, Healthy.io and RapidDeploy.

Meanwhile, Rao, a Palo Alto, Calif. native, MIT alum, former Microsoft product manager and founder of the Palo Alto-based accelerator Accelerate Labs, said it was important to give back to entrepreneurs after decades honing his skills as an operator in the Valley.


Both Rao and Garg acknowledge that a number of funds focused on machine learning have emerged, including Basis Set Ventures, SignalFire and Two Sigma Ventures, but they argue those investors lack the direct company-building experience that the two new investors have.

Garg, for instance, has actually built a hospital in India and has a deep background in healthcare. As an investor, he’s already seen an exit through his investment in Nutonomy, and both men have a deep understanding of the enterprise market — especially around security.

So far, the firm has made three investments in automation, another three in enterprise software, and five in healthcare.

The firm currently has $17 million in capital under management raised from institutional investors like the law firm Wilson Sonsini and a number of undisclosed family offices and individuals, according to Garg.

Much of that capital was committed after the pandemic hit, Garg said. “We started August 29th… and did the final close May 29th.”

The idea was to close the fund and start putting capital to work — especially in an environment where other investors were burdened with sorting out their existing portfolios, and not able to put capital to work as quickly.

“Our last investment was done entirely over Zoom and Google Meet,” said Rao.

That virtual environment extends to the firm’s shareholder meetings and conferences, some of which have attracted over 1,000 attendees, according to the partners.

UK class action style claim filed over Marriott data breach

A class action style suit has been filed in the UK against hotel group Marriott International over a massive data breach that exposed the information of some 500 million guests around the world, including around 30 million residents of the European Union, between July 2014 and September 2018.

The representative legal action against Marriott has been filed by UK resident, Martin Bryant, on behalf of millions of hotel guests domiciled in England & Wales who made reservations at hotel brands globally within the Starwood Hotels group, which is now part of Marriott International.

Hackers gained access to the systems of the Starwood Hotels group, starting in 2014, where they were able to help themselves to information such as guests’ names; email and postal addresses; telephone numbers; gender and credit card data. Marriott International acquired the Starwood Hotels group in 2016 — but the breach went undiscovered until 2018.

Bryant is being represented by international law firm, Hausfeld, which specialises in group actions.

Commenting in a statement, Hausfeld partner, Michael Bywell, said: “Over a period of several years, Marriott International failed to take adequate technical or organisational measures to protect millions of their guests’ personal data which was entrusted to them. Marriott International acted in clear breach of data protection laws specifically put in place to protect data subjects.”

“Personal data is increasingly critical as we live more of our lives online, but as consumers we don’t always realise the risks we are exposed to when our data is compromised through no fault of our own. I hope this case will raise awareness of the value of our personal data, result in fair compensation for those of us who have fallen foul of Marriott’s vast and long-lasting data breach, and also serve notice to other data owners that they must hold our data responsibly,” added Bryant in another supporting statement.

We’ve reached out to Marriott International for comment on the legal action.

A claim website for the action invites other eligible UK individuals to register their interest — and “hold Marriott to account for not securing your personal data”, as it puts it.

Here are the details of who is eligible to register their interest:

The ‘class’ of claimants on whose behalf the claim is brought includes all individuals who at any date prior to 10 September 2018 made a reservation online at a hotel operating under any of the following brands: W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Element Hotels, Aloft Hotels, The Luxury Collection, Tribute Portfolio, Le Méridien Hotel & Resorts, Four Points by Sheraton, Design Hotels. In addition, any other brand owned and/or operated by Marriott International Inc or Starwood Hotels and Resorts Worldwide LLC. The individuals must have been resident in England and Wales at some point during the relevant period prior to 10 September 2018 and are resident in England and Wales at the date the claim was issued. They must also have been at least 18 years old at the date the claim was issued.

The claim is being brought as a representative action under Rule 19.6 of the Civil Procedure Rules, per a press release, which also notes that everyone with the same interest as Bryant is included in the claimant class unless they opt out.

Those eligible to participate face no fees or costs, nor do affected guests face any financial risk from the litigation — which is being fully funded by Harbour Litigation Funding, a global litigation funder.

The suit is the latest sign that litigation funders are willing to take a punt on representative actions in the UK as a route to obtaining substantial damages for data issues. Another class action style suit was announced last week, alongside a class action in the Netherlands — targeting tracking cookies operated by data broker giants, Oracle and Salesforce.

Both lawsuits follow a landmark decision by a UK appeals court last year which allowed a class action-style suit against Google’s use between 2011 and 2012 of tracking cookies to override iPhone users’ privacy settings in Apple’s Safari browser to proceed, overturning an earlier court decision to toss the case.

The other unifying factor is the existence of Europe’s General Data Protection Regulation (GDPR) framework which has opened the door to major fines for data protection violations. So even if EU regulators continue to lack uniform vigour in enforcing data protection law, there’s a chance the region’s courts will do the job for them if more litigation funders see value in bringing cases to them to pursue class damages for privacy violations.

The timing of the Marriott data breach means it falls under the GDPR — which came into force in May 2018.

The UK’s data watchdog, the ICO, proposed a $123M fine for the security failing in July last year — saying then that the hotel operator had “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems”.

However it has yet to hand down a final decision. Asked when the Marriott decision will be finalized, an ICO spokeswoman told us the “regulatory process” has been extended until September 30. No additional detail was offered to explain the delay.

Here’s the regulator’s statement in full:

Under Schedule 16 of the Data Protection Act 2018, Marriott has agreed to an extension of the regulatory process until 30 September. We will not be commenting until the regulatory process has concluded.

Oracle and Salesforce hit with GDPR class action lawsuits over cookie tracking consent

The use of third party cookies for ad tracking and targeting by data broker giants Oracle and Salesforce is the focus of class action style litigation announced today in the UK and the Netherlands.

The suits will argue that mass surveillance of Internet users to carry out real-time bidding ad auctions cannot possibly be compatible with strict EU laws around consent to process personal data.

The litigants believe the collective claims could exceed €10BN, should they eventually prevail in their arguments — though such legal actions can take several years to work their way through the courts.

In the UK, the case may also face some legal hurdles given the lack of an established model for pursuing collective damages in cases relating to data rights. Though there are signs that’s changing.

Non-profit foundation, The Privacy Collective, has filed one case today with the District Court of Amsterdam, accusing the two data broker giants of breaching the EU’s General Data Protection Regulation (GDPR) in their processing and sharing of people’s information via third party tracking cookies and other adtech methods.

The Dutch case, which is being led by law-firm bureau Brandeis, is the biggest-ever class action in The Netherlands related to violation of the GDPR — with the claimant foundation representing the interests of all Dutch citizens whose personal data has been used without their consent and knowledge by Oracle and Salesforce. 

A similar case is due to be filed later this month at the High Court in London, England, which will make reference to the GDPR and the UK’s PECR (Privacy and Electronic Communications Regulations) — the latter governing the use of personal data for marketing communications. The case there is being led by law firm Cadwalader.

Under GDPR, consent for processing EU citizens’ personal data must be informed, specific and freely given. The regulation also confers rights on individuals around their data — such as the ability to receive a copy of their personal information.

It’s those requirements the litigation is focused on, with the cases set to argue that the tech giants’ third party tracking cookies, BlueKai and Krux — trackers that are hosted on scores of popular websites, such as Amazon, Booking.com, Dropbox, Reddit and Spotify to name a few — along with a number of other tracking techniques are being used to misuse Europeans’ data on a massive scale.

Per Oracle marketing materials, its Data Cloud and BlueKai Marketplace provide partners with access to some 2BN global consumer profiles. (Meanwhile, as we reported in June, BlueKai suffered a data breach that exposed billions of those records to the open web.)

While Salesforce claims its marketing cloud ‘interacts’ with more than 3BN browsers and devices monthly.

Both companies have grown their tracking and targeting capabilities via acquisition for years; Oracle bagging BlueKai in 2014 — and Salesforce snaffling Krux in 2016.


Discussing the lawsuit in a telephone call with TechCrunch, Dr Rebecca Rumbul, class representative and claimant in England & Wales, said: “There is, I think, no way that any normal person can really give informed consent to the way in which their data is going to be processed by the cookies that have been placed by Oracle and Salesforce.

“When you start digging into it there are numerous, fairly pernicious ways in which these cookies can and probably do operate — such as cookie syncing, and the aggregation of personal data — so there’s really, really serious privacy concerns there.”

The real-time bidding (RTB) process that the pair’s tracking cookies and techniques feed — enabling the background, high-velocity trading of individual web users’ profiles as they browse, in order to run dynamic ad auctions and serve behavioral ads targeting their interests — has in recent years been the subject of a number of GDPR complaints, including in the UK.

These complaints argue that RTB’s handling of people’s information is a breach of the regulation because it’s inherently insecure to broadcast data to so many other entities — while, conversely, GDPR bakes in a requirement for privacy by design and default.
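For a concrete sense of what that broadcast looks like, here is a minimal, illustrative Python sketch of an RTB-style bid request being fanned out to bidders. It is loosely modeled on the OpenRTB convention; the field names, segment labels and bidder interface are hypothetical stand-ins, not any vendor’s actual schema.

```python
# Illustrative only: a simplified RTB-style bid request, loosely modeled on
# OpenRTB. Field names and the bidder interface are hypothetical.
import json
import uuid

def build_bid_request(page_url: str, user_id: str, segments: list) -> dict:
    return {
        "id": str(uuid.uuid4()),              # one auction per page view
        "site": {"page": page_url},
        "device": {"ip": "203.0.113.7"},      # documentation-range example IP
        "user": {
            "id": user_id,                    # the synced tracking-cookie ID
            "data": [{"segment": segments}],  # inferred interest categories
        },
    }

def broadcast(request: dict, bidders: list) -> None:
    # The complaints' core point: the same profile data goes to every
    # participating bidder at once, with no per-recipient consent check.
    payload = json.dumps(request)
    for bidder in bidders:
        bidder.receive(payload)
```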

The UK Information Commissioner’s Office has, meanwhile, accepted for well over a year that adtech has a lawfulness problem. But the regulator has so far sat on its hands, instead of enforcing the law — leaving the complainants dangling. (Last year, Ireland’s DPC opened a formal investigation of Google’s adtech, following a similar complaint, but has yet to issue a single GDPR decision in a cross-border complaint — leading to concerns of an enforcement bottleneck.)

The two lawsuits targeting RTB aren’t focused on the security allegation, per Rumbul, but are mostly concerned with consent and data access rights.

She confirms they opted to litigate rather than try a regulatory complaint route as a way of exercising their rights, given the “David vs Goliath” nature of bringing claims against the tech giants in question.

“If I was just one tiny person trying to complain to Oracle and trying to use the UK Information Commissioner to achieve that… they simply do not have the resources to direct at one complaint from one person against a company like Oracle — in terms of this kind of scale,” Rumbul told TechCrunch.

“In terms of being able to demonstrate harm, that’s quite a lot of work and what you get back in recompense would probably be quite small. It certainly wouldn’t compensate me for the time I would spend on it… Whereas doing it as a representative class action I can represent everyone in the UK that has been affected by this.

“The sums of money then work — in terms of the depths of Oracle’s pockets, the costs of litigation, which are enormous, and the fact that, hopefully, doing it this way, in a very large-scale, very public forum it’s not just about getting money back at the end of it; it’s about trying to achieve more standardized change in the industry.”

“If Salesforce and Oracle are not successful in fighting this then hopefully that sends out ripples across the adtech industry as a whole — encouraging those that are using these quite pernicious cookies to change their behaviours,” she added.

The litigation is being funded by Innsworth, a litigation funder which is also funding Walter Merricks’ class action for 46 million consumers against Mastercard in London courts. And the GDPR appears to be helping to change the class action landscape in the UK — as it allows individuals to take private legal action. The framework can also support third parties to bring claims for redress on behalf of individuals. While changes to domestic consumer rights law also appear to be driving class actions.

Commenting in a statement, Ian Garrard, managing director of Innsworth Advisors, said: “The development of class action regimes in the UK and the availability of collective redress in the EU/EEA mean Innsworth can put money to work enabling access to justice for millions of individuals whose personal data has been misused.”

A separate and still ongoing lawsuit in the UK, which is seeking damages from Google on behalf of Safari users whose privacy settings it historically ignored, also looks to have bolstered the prospects of class action style legal actions related to data issues.

While the courts initially tossed the suit last year, the appeals court overturned that ruling — rejecting Google’s argument that UK and EU law requires “proof of causation and consequential damage” in order to bring a claim related to loss of control of data.

The judge said the claimant did not need to prove “pecuniary loss or distress” to recover damages, and also allowed the class to proceed without all the members having the same interest.

Discussing that case, Rumbul suggests a pending final judgement there (likely next year) may have a bearing on whether the lawsuit she’s involved with can be taken forward in the UK.

“I’m very much hoping that the UK judiciary are open to seeing these kind of cases come forward because without these kinds of things as very large class actions it’s almost like closing the door on this whole sphere of litigation. If there’s a legal ruling that says that case can’t go forward and therefore this case can’t go forward I’d be fascinated to understand how the judiciary think we’d have any recourse to these private companies for these kind of actions,” she said.

Asked why the litigation has focused on Oracle and Salesforce, given there are so many firms involved in the adtech pipeline, she said: “I am not saying that they are necessarily the worst or the only companies that are doing this. They are however huge, huge international multimillion-billion dollar companies. And they specifically went out and purchased different bits of adtech software, like BlueKai, in order to bolster their presence in this area — to bolster their own profits.

“This was a strategic business decision that they made to move into this space and become massive players. So in terms of the adtech marketplace they are very, very big players. If they are able to be held to account for this then it will hopefully change the industry as a whole. It will hopefully reduce the places to hide for the other more pernicious cookie manufacturers out there. And obviously they have huge, huge revenues so in terms of targeting people who are doing a lot of harm and that can afford to compensate people these are the right companies to be targeting.”

Rumbul also told us The Privacy Collective is looking to collect stories from web users who feel they have experienced harm related to online tracking.

“There’s plenty of evidence out there to show that how these cookies work means you can have very, very egregious outcomes for people at an individual level,” she added. “Whether that can be related to personal finance, to manipulation of addictive behaviors, whatever, these are all very, very possible — and they cover every aspect of our lives.”

Consumers in England and Wales and the Netherlands are being encouraged to register their support of the actions via The Privacy Collective’s website.

In a statement, Christiaan Alberdingk Thijm, lead lawyer at Brandeis, said: “Your data is being sold off in real-time to the highest bidder, in a flagrant violation of EU data protection regulations. This ad-targeting technology is insidious in that most people are unaware of its impact or the violations of privacy and data rights it entails. Within this adtech environment, Oracle and Salesforce perform activities which violate European privacy rules on a daily basis, but this is the first time they are being held to account. These cases will draw attention to astronomical profits being made from people’s personal information, and the risks to individuals and society of this lack of accountability.”

“Thousands of organisations are processing billions of bid requests each week with at best inconsistent application of adequate technical and organisational measures to secure the data, and with little or no consideration as to the requirements of data protection law about international transfers of personal data. The GDPR gives us the tool to assert individuals’ rights. The class action means we can aggregate the harm done,” added partner Melis Acuner from Cadwalader in another supporting statement.

We reached out to Oracle and Salesforce for comment on the litigation.

Oracle EVP and general counsel, Dorian Daley, said:

The Privacy Collective knowingly filed a meritless action based on deliberate misrepresentations of the facts.  As Oracle previously informed the Privacy Collective, Oracle has no direct role in the real-time bidding process (RTB), has a minimal data footprint in the EU, and has a comprehensive GDPR compliance program. Despite Oracle’s fulsome explanation, the Privacy Collective has decided to pursue its shake-down through litigation filed in bad faith.  Oracle will vigorously defend against these baseless claims.

A spokeswoman for Salesforce sent us this statement:

At Salesforce, Trust is our #1 value and nothing is more important to us than the privacy and security of our corporate customers’ data. We design and build our services with privacy at the forefront, providing our corporate customers with tools to help them comply with their own obligations under applicable privacy laws — including the EU GDPR — to preserve the privacy rights of their own customers.

Salesforce and another Data Management Platform provider have received a privacy-related complaint from a Dutch group called The Privacy Collective. The claim applies to the Salesforce Audience Studio service and does not relate to any other Salesforce service.

Salesforce disagrees with the allegations and intends to demonstrate they are without merit.

Our comprehensive privacy program provides tools to help our customers preserve the privacy rights of their own customers. To read more about the tools we provide our corporate customers and our commitment to privacy, visit salesforce.com/privacy/products/

New Jersey court says police can force you to give up your phone’s passcode

New Jersey’s top court has ruled that police can compel suspects to give up their phone passcodes, and that doing so does not violate the Fifth Amendment.

The Fifth Amendment protects Americans from self-incrimination, including the well-known right to remain silent.

But the courts remain split on whether this applies to device passcodes. Both Indiana and Pennsylvania have ruled compelling a suspect to turn over their device’s passcode would violate the Fifth Amendment.

New Jersey’s Supreme Court thinks differently. In this week’s ruling, the court said the Fifth Amendment protects only against self-incriminating testimony — as in speech — and not the production of incriminating information.

Much of the legal debate is not about the passcodes themselves, but rather about the information contained on the devices. Courts like Indiana’s found that compelling a suspect to turn over their passcode can give the government unfettered access to the suspect’s device, which may contain potentially incriminating information that the government might not have been aware of. The courts have likened this to a fishing expedition and ruled it unconstitutional.

But in the New Jersey case, the court said it’s a “foregone conclusion” that the phone’s data wouldn’t reveal anything the government didn’t already know.

Law enforcement have spent years trying to break into suspects’ phones, either using phone hacking technology with mixed results, or — in the case of modern phones — by using a suspect’s fingerprint or face to unlock their devices.

With courts divided on the matter, the question of whether police can legally compel a suspect to turn over their passcode will ultimately fall to the U.S. Supreme Court.



Legal clouds gather over US cloud services, after CJEU ruling

In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.

Countries like the U.S.

The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.

Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.

That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.

Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.

Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.

The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case by case analysis would be key.

The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.

In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.

The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.

The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.

“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.

“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.

Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”

“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.

Part of the complexity of enforcing Europe’s data protection rules is the lack of a single authority; instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.

Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.

Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.

In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”

He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”

Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)

Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.

In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.

In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.

“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].

“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.

Both DPAs warned the ruling has implications for the use of cloud services that process data in other third countries where the protection of EU citizens’ data likewise cannot be guaranteed — i.e. not just the U.S.

On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.

“Now is the time for Europe’s digital independence,” she added.

Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.

Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB). AKA the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”

Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.

In the wake of the CJEU ruling, data exporters and importers are required to carry out an assessment of a country’s data regime to assess adequacy with EU legal standards before using SCCs to transfer data there.

“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.

“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”

Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.

Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if they intend to continue transferring data.

In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.

“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.

One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.

In its place, a sense of déjà vu and a lot more work for lawyers.

Europe’s top court strikes down flagship EU-US data transfer mechanism

A highly anticipated ruling by Europe’s top court has just landed — striking down a flagship EU-US data flows arrangement called Privacy Shield.

The case — known colloquially as Schrems II (in reference to privacy activist and lawyer, Max Schrems, whose original complaints underpin the saga) — has a long and convoluted history. In a nutshell it concerns the clash of two very different legal regimes related to people’s digital data: On the one hand US surveillance law and on the other European data protection and privacy.

Putting a little more meat on the bones, the US’ prioritizing of digital surveillance — as revealed by the 2013 revelations of NSA whistleblower, Edward Snowden; and writ large in the breadth of data capture powers allowed by Section 702 of FISA (Foreign Intelligence Surveillance Act) — collides directly with European fundamental rights which give citizens rights to privacy and data protection.

The Schrems II case also directly concerns Facebook, while having much broader implications for how large-scale processing of EU citizens’ data can be done.

At specific issue are questions of legality around a European data transfer mechanism used by Facebook (and many other companies) for processing regional users’ data in the US — called Standard Contractual Clauses (SCCs).

Schrems challenged Facebook’s use of SCCs at the end of 2015, when he updated an earlier complaint on the same data transfer issue related to US government mass surveillance practices with Ireland’s data watchdog.

He asked the Irish Data Protection Commission (DPC) to suspend Facebook’s use of SCCs. Instead the regulator decided to take him and Facebook to court, saying it had concerns about the legality of the whole mechanism. Irish judges then referred a large number of nuanced legal questions to Europe’s top court, which brings us to today. It’s worth noting Facebook repeatedly tried and failed to block the reference to the Court of Justice.

The referral by the Irish High Court also looped in questions over a flagship European Commission data transfer agreement, called the EU-US Privacy Shield. This replaced a long standing EU-US data transfer agreement called Safe Harbor which was struck down by the CJEU in 2015 after an earlier challenge also lodged by Schrems. (Hence Schrems II.)

So part of the anticipation associated with this case has related to whether Europe’s top judges would choose to weigh in on the legality of Privacy Shield — a data transfer framework that’s being used by more than 5,300 companies at this point. And which the European Commission only put in place a handful of years ago.

Critics of the arrangement have maintained it does not resolve the fundamental clash between US surveillance and EU data protection — and in recent years, with the advent of the Trump administration, the Privacy Shield has looked increasingly precariously placed.

In the event the CJEU has sided with critics who have long maintained Privacy Shield is the equivalent of lipstick on a pig. Today is certainly not a good day for the European Commission (which also had a very bad day in court yesterday on a separate matter). We’ve reached out to the EU executive for comment on Schrems II.

Privacy Shield had also been under separate legal challenge — with the complainant in that case (La Quadrature du Net) arguing the mechanism breaches fundamental EU rights and does not provide adequate protection for EU citizens’ data. That case is now moot.

On SCCs, the CJEU has not taken issue with the mechanism itself — but judges underscored the obligation on data controllers to carry out an assessment of the data protection afforded by the country where the data is to be taken. If the level is not equivalent to that offered by EU law, then the controller has an obligation to suspend the data transfers.

In the case of SCCs used to take data to the US, it’s not immediately clear what alternative exists, given judges have invalidated Privacy Shield on the grounds of the lack of protections afforded to EU citizens’ data in the country.

Commenting on the ruling in a statement, a jubilant Schrems said: “I am very happy about the judgment. At first sight it seems the Court has followed us in all aspects. This is a total blow to the Irish DPC and Facebook. It is clear that the US will have to seriously change their surveillance laws, if US companies want to continue to play a role on the EU market.”

We’ve also reached out to Facebook and the Irish DPC for comment.

This is a developing story… 

Germany tightens online hate speech rules to make platforms send reports straight to the feds

While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.

The move is part of a wider push by the German government to tackle a rise in right wing extremism and hate crime — which it links to the spread of hate speech online.

Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in the country in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines as tight as 24 hours for easy cases — with fines of up to €50M should they fail to comply.

Yesterday the parliament passed a reform that extends NetzDG by placing a reporting obligation on platforms, requiring them to report certain types of “criminal content” to the Federal Criminal Police Office.

A wider reform of the NetzDG law remains ongoing in parallel. It is intended to bolster user rights and transparency, including by simplifying user notifications and making it easier for people to object to content removals and to have content restored after a successful appeal, among other tweaks. Broader transparency reporting requirements are also looming for platforms.

The NetzDG law has always been controversial, with critics warning from the get go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine. (Aka, the risk of ‘overblocking’.) In 2018 Human Rights Watch dubbed it a flawed law — critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.

The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.

A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.

The political party is concerned about the risk of the new reporting obligation being abused — resulting in data on citizens who have not in fact posted any criminal content ending up with the police.

It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.

The party had proposed that only the post’s content would be transmitted directly to police who would have been able to request associated personal data from the platform should there be a genuine need to investigate a particular piece of content.

The German government’s reform of hate speech law follows the 2019 murder of pro-refugee politician Walter Lübcke by neo-Nazis — which it said was preceded by targeted threats and hate speech online.

Earlier this month police staged raids on 40 hate speech suspects across a number of states who are accused of posting “criminally relevant comments” about Lübcke, per national media.

The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets — meaning they’re unable to freely express themselves or participate without fear in society.

At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to a voluntary EU Code of Conduct on hate speech.

It is also now consulting on wider changes to platform rules and governance — under a forthcoming Digital Services Act which will consider how much liability tech giants should face for content they’re fencing.

Apple and Google update joint coronavirus tracing tech to improve user privacy and developer flexibility

Apple and Google have provided a number of updates about the technical details of their joint contact tracing system, which they’re now exclusively referring to as an “exposure notification” technology, since the companies say this is a better way to describe what they’re offering. The system is just one part of a contact tracing system, they note, not the entire thing. Changes include modifications made to the API that the companies say provide stronger privacy protections for individual users, and changes to how the API works that they claim will enable health authorities building apps that make use of it to develop more effective software.

The additional measures being implemented to protect privacy include changing the cryptography mechanism for generating the keys used to trace potential contacts. They’re no longer specifically bound to a 24-hour period, and they’re now randomly generated instead of derived from a so-called “tracing key” that was permanently attached to a device. In theory, with the old system, an advanced enough attack with direct access to the device could potentially be used to figure out how individual rotating keys were generated from the tracing key, though that would be very, very difficult. Apple and Google clarified that it was included for the sake of efficiency originally, but they later realized they didn’t actually need this to ensure the system worked as intended, so they eliminated it altogether.

The new method makes it even more difficult for a would-be bad actor to determine how the keys are derived, and then to use that information to track specific individuals. Apple and Google’s goal is to ensure this system does not link contact tracing information to any individual’s identity (except for the individual’s own use), and this should help further ensure that’s the case.
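As a rough illustration of the revised scheme, here is a minimal Python sketch in which the daily key is purely random and per-interval identifiers are derived from it. It follows the general shape of the published Exposure Notification cryptography specification, but the constants and function names here are simplified assumptions, not the exact spec.

```python
# Simplified sketch of the revised key scheme: the daily key is freshly
# random (no permanent per-device "tracing key"), and rolling identifiers
# are derived from it. Approximates the published spec; not exact.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_daily_exposure_key() -> bytes:
    # Freshly random each key period: there is no root secret on the device
    # from which past or future keys could be recomputed if it were extracted.
    return os.urandom(16)

def rolling_identifier(daily_key: bytes, interval_number: int) -> bytes:
    # Derive an identifier key from the daily key, then encrypt the current
    # 10-minute interval number with AES-128 (hardware-accelerated on phones).
    rpik = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"EN-RPIK").derive(daily_key)
    padded = b"EN-RPI" + bytes(6) + interval_number.to_bytes(4, "little")
    enc = Cipher(algorithms.AES(rpik), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()
```

Because each day’s key is an independent random value, compromising a device yields no master secret that would let an attacker reconstruct identifiers from other days.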

The companies will now also be encrypting any metadata associated with specific Bluetooth signals, including the strength of signal and other info. This metadata can theoretically be used in sophisticated reverse identification attempts, by comparing the metadata associated with a specific Bluetooth signal with known profiles of Bluetooth radio signal types as broken down by device and device generation. Taken alone, it’s not much of a risk in terms of exposure, but this additional step means it’s even harder to use that as one of a number of vectors for potential identification for malicious use.
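The metadata step can be sketched in the same simplified style: the associated metadata travels encrypted under a key derived from the daily key, with the broadcast identifier serving as the cipher’s counter block, so the radio “fingerprint” can’t simply be read off the air. Constants and names are again illustrative assumptions rather than the exact specification.

```python
# Simplified sketch: encrypt per-broadcast metadata (e.g. transmit power)
# so it can't be matched against known Bluetooth radio signal profiles.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def encrypt_metadata(daily_key: bytes, rolling_id: bytes, metadata: bytes) -> bytes:
    # Metadata key derived from the daily key; the 16-byte rolling identifier
    # acts as the CTR counter block, so ciphertexts vary with every broadcast.
    aemk = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"EN-AEMK").derive(daily_key)
    enc = Cipher(algorithms.AES(aemk), modes.CTR(rolling_id)).encryptor()
    return enc.update(metadata) + enc.finalize()
```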

It’s worth noting that Google and Apple say this is intended as a fixed-length service, and so it has a built-in way to disable the feature at a time to be determined by regional authorities, on a case-by-case basis.

Finally on the privacy front, any apps built using the API will now be provided exposure time in five-minute intervals, with a maximum total exposure time reported of 30 minutes. Rounding these to specific five-minute duration blocks and capping the overall limit across the board helps ensure this info, too, is harder to link to any specific individual when paired with other metadata.
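The quantization itself is simple enough to state in code. A minimal sketch, with a function name of our own invention; whether the API rounds up or down within a block isn’t detailed, so this version rounds down:

```python
def reported_exposure_minutes(raw_seconds: float) -> int:
    # Quantize the measured exposure to five-minute blocks and cap the total
    # at 30 minutes, so reported durations are coarser and harder to link
    # back to one specific encounter.
    blocks = int(raw_seconds // 60 // 5)
    return min(blocks * 5, 30)
```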

On the developer and health authority side, Apple and Google will now be providing signal strength information in the form of Bluetooth radio power output data, which will provide a more accurate measure of distance between two devices in the case of contact, particularly when used with existing received signal strength info from the corresponding device that the API already provides access to.

Individual developers can also set their own parameters for how strong a signal, and what duration, will trigger an exposure event. This is better for public health authorities because it allows them to be specific about what level of contact actually defines a potential contact, since official guidance from health agencies varies by geography. Similarly, developers can now determine how many days have passed since an individual contact event, which might alter their guidance to a user (i.e. if it’s already been 14 days, measures would be very different from if it’s been two).
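A hedged sketch of what such health-authority-supplied parameters might look like in practice; the structure and field names below are our assumptions, not the actual API surface:

```python
from dataclasses import dataclass

@dataclass
class ExposureConfig:
    # Set by each health authority's app, since official guidance on what
    # counts as a risky contact differs from country to country.
    max_attenuation_db: int       # how strong a signal counts as "close"
    min_duration_minutes: int     # how long before an exposure event fires
    max_days_since_exposure: int  # e.g. beyond 14 days, guidance changes

def is_exposure_event(attenuation_db: float, duration_minutes: int,
                      days_ago: int, cfg: ExposureConfig) -> bool:
    # Lower attenuation means the devices were closer together.
    return (attenuation_db <= cfg.max_attenuation_db
            and duration_minutes >= cfg.min_duration_minutes
            and days_ago <= cfg.max_days_since_exposure)
```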

Apple and Google are also changing the encryption algorithm used to AES, from the HMAC system they were previously using. The reason for this switch is that the companies have found that by using AES encryption, which can be accelerated locally using on-board hardware in many mobile devices, the API will be more energy efficient and have less of a performance impact on smartphones.

As we reported Thursday, Apple and Google also confirmed that they’re aiming to distribute the beta seed version of the OS update that will support these features next week. On Apple’s side, the update will support any iOS hardware released over the course of the past four years running iOS 13. On the Android side, it would cover around 2 billion devices globally, Google said.

Coronavirus tracing: Platforms versus governments

One key outstanding question is what will happen in the case of governments that choose to use centralized protocols for COVID-19 contact tracing apps, with proximity data uploaded to a central server — rather than opting for a decentralized approach, which Apple and Google are supporting with an API.

In Europe, the two major EU economies, France and Germany, are both developing contact tracing apps based on centralized protocols — the latter planning deep links to labs to support digital notification of COVID-19 test results. The U.K. is also building a tracing app that will reportedly centralize data with the local health authority.

This week Bloomberg reported that the French government is pressuring Apple to remove technical restrictions on Bluetooth access in iOS, with the digital minister, Cedric O, saying in an interview Monday: “We’re asking Apple to lift the technical hurdle to allow us to develop a sovereign European health solution that will be tied to our health system.”

Meanwhile a German-led standardization push around COVID-19 contact tracing apps, called PEPP-PT — which has so far only given public backing to a centralized protocol, despite claiming it will support both approaches — said last week that it wants to see changes made to the Google-Apple API to accommodate centralized protocols.

Asked about this issue, an Apple spokesman told us the company is not commenting on the apps or plans of specific countries. But the spokesman pointed back to a position on Bluetooth it set out in an earlier statement with Google — in which the companies write that user privacy and security are “central” to their design.

Judging by the updates to Apple and Google’s technical specifications and API framework, as detailed above, the answer to whether the tech giants will bow to government pressure to support state centralization of proximity social graph data looks to be a strong “no.”

The latest tweaks look intended to reinforce individual privacy and further shrink the ability of outside entities to repurpose the system to track people and/or harvest a map of all their contacts.

The sharpening of Apple and Google’s nomenclature is also interesting in this regard — with the pair now talking about “exposure notification” rather than “contact tracing” as preferred terminology for the digital intervention. This shift of emphasis suggests they’re keen to avoid any risk of their role being (mis)interpreted as supporting broader state surveillance of citizens’ social graphs, under the guise of a coronavirus response.

Backers of decentralized protocols for COVID-19 contact tracing — such as DP-3T, a key influence for the Apple-Google joint effort that’s being developed by a coalition of European academics — have warned consistently of the risk of surveillance creep if proximity data is pooled on a central server.

Apple and Google’s change of terminology doesn’t bode well for governments with ambitions to build what they’re counter-branding as “sovereign” fixes — aka data grabs that do involve centralizing exposure data. Although whether this means we’re headed for a big standoff between certain governments and Apple over iOS security restrictions — à la Apple vs the FBI — remains to be seen.

Earlier today, Apple and Google’s EU privacy chiefs also took part in a panel discussion organized by a group of European parliamentarians, which specifically considered the question of centralized versus decentralized models for contact tracing.

Asked about supporting centralized models for contact tracing, the tech giants offered a dodge, rather than a clear “no.”

“Our goal is to really provide an API to accelerate applications. We’re not obliging anyone to use it as a solution. It’s a component to help make it easier to build applications,” said Google’s Dave Burke, VP of Android engineering.

“When we build something we have to pick an architecture that works,” he went on. “And it has to work globally, for all countries around the world. And when we did the analysis and looked at different approaches we were very heavily inspired by the DP-3T group and their approach — and that’s what we have adopted as a solution. We think that gives the best privacy preserving aspects of the contacts tracing service. We think it’s also quite rich in epidemiological data that we think can be derived from it. And we also think it’s very flexible in what it could do. [The choice of approach is] really up to every member state — that’s not the part that we’re doing. We’re just operating system providers and we’re trying to provide a thin layer of an API that we think can help accelerate these apps but keep the phone in a secure, private mode of operation.”

“That’s really important for the expectations of users,” Burke added. “They expect the devices to keep their data private and safe. And then they expect their devices to also work well.”

DP-3T’s Michael Veale was also on the panel — busting what he described as some of the “myths” about decentralized contact tracing versus centralized approaches.

“The [decentralized] system is designed to provide data to epidemiologists to help them refine and improve the risk score — even daily,” he said. “This is totally possible. We can do this using advanced methods. People can even choose to provide additional data if they want to epidemiologists — which is not really required for improving the risk score but might help.”

“Some people think a decentralized model means you can’t have a health authority do that first call [to a person exposed to a risk of infection]. That’s not true. What we don’t do is we don’t tag phone numbers and identities like a centralized model can to the social network. Because that allows misuse,” he added. “All we allow is that at the end of the day the health authority receives a list separate from the network of whose phone number they can call.”

MEP Sophie in ‘t Veld, who organized the online event, noted at the top of the discussion that they had also invited PEPP-PT to join the call, but said no one from the coalition had been able to attend the video conference.

Cookie consent still a compliance trash-fire in latest watchdog peek

The latest confirmation of the online tracking industry’s continued flouting of EU privacy laws — which, at least on paper, are supposed to protect citizens from consent-less digital surveillance — comes via Ireland’s Data Protection Commission (DPC).

The watchdog did a sweep survey of around 40 popular websites last year — covering sectors including media and publishing; retail; restaurants and food ordering services; insurance; sport and leisure; and the public sector — and in a new report, published yesterday, it found almost all failing on a number of cookie and tracking compliance issues, with breaches ranging from minor to serious.

Twenty were graded ‘amber’ by the regulator, which signals a good response and approach to compliance but with at least one serious concern identified; twelve were graded ‘red’, based on very poor quality responses and a plethora of bad practices around cookie banners, setting multiple cookies without consent, badly designed cookie policies or privacy policies, and a lack of clarity about whether they understood the purposes of the ePrivacy legislation; while a further three got a borderline ‘amber to red’ grade.

Just two of the 38 controllers got a ‘green’ rating (substantially compliant, with any concerns straightforward and easily remedied); and one more got a borderline ‘green to amber’ grade.

EU law means that if a data controller is relying on consent as the legal basis for tracking a user the consent must be specific, informed and freely given. Additional court rulings last year have further finessed guidance around online tracking — clarifying pre-checked consent boxes aren’t valid, for example.

Yet the DPC still found examples of cookie banners that offer no actual choice at all — such as those which serve a dummy banner with a cookie notice that users can only meaninglessly click ‘Got it!’. (‘Gotcha data’, more like…)

In fact the watchdog writes that it found ‘implied’ consent being relied upon by around two-thirds of the controllers, based on the wording of their cookie banners (e.g. notices such as: “by continuing to browse this site you consent to the use of cookies”) — despite this no longer meeting the required legal standard.

“Some appeared to be drawing on older, but no longer extant, guidance published by the DPC that indicated consent could be obtained ‘by implication’, where such informational notices were put in place,” it writes, noting that current guidance on its website “does not make any reference to implied consent, but it also focuses more on user controls for cookies rather than on controller obligations”.

Another finding was that all but one website set cookies immediately on landing — with “many” of these cookies found to have no legal justification for being set without asking first, as the DPC determined they fall outside the available consent exemptions in the relevant regulations.

It also identified widespread abuse of the concept of ‘strictly necessary’ where the use of trackers is concerned. “Many controllers categorised the cookies deployed on their websites as having a ‘necessary’ or ‘strictly necessary’ function, where the stated function of the cookie appeared to meet neither of the two consent exemption criteria set down in the ePrivacy Regulations/ePrivacy Directive,” it writes in the report. “These included cookies used to establish chatbot sessions that were set prior to any request by the user to initiate a chatbot function. In some cases, it was noted that the chatbot function on the websites concerned did not work at all.

“It was clear that some controllers may either misunderstand the ‘strictly necessary’ criteria, or that their definitions of what is strictly necessary are rather more expansive than the definitions provided in Regulation 5(5),” it adds.
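As a concrete illustration of the line the DPC is drawing, a chatbot session cookie can plausibly claim the ‘strictly necessary’ exemption only once a user has actually asked for the chat. A hypothetical sketch of that deferral pattern (element IDs and cookie names are invented):

```typescript
// Hypothetical sketch: defer the chatbot session cookie until the user
// explicitly requests the chat, rather than setting it on page load.

function setSessionCookie(name: string, value: string): void {
  // A session cookie (no Expires/Max-Age), scoped to this site.
  document.cookie = `${name}=${value}; path=/; SameSite=Lax; Secure`;
}

// What the DPC found: the cookie set on landing, before any request by
// the user to initiate a chatbot function.
// setSessionCookie("chat_session", crypto.randomUUID());

// The defensible version: the cookie now supports a service the user
// has explicitly asked for.
document.querySelector("#open-chat")?.addEventListener("click", () => {
  setSessionCookie("chat_session", crypto.randomUUID());
  // ...then initialize the chat widget.
});
```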

Another problem the report highlights is a lack of tools for users to vary or withdraw their consent choices, despite some of the reviewed sites using so-called ‘consent management platforms’ (CMPs) sold by third-party vendors.

This chimes with a recent independent study of CMPs — which earlier this year found illegal practices to be widespread, with “dark patterns and implied consent… ubiquitous”, as the researchers put it.

“Badly designed — or potentially even deliberately deceptive — cookie banners and consent-management tools were also a feature on some sites,” the DPC writes in its report, detailing some examples of Quantcast’s CMP which had been implemented in such a way as to make the interface “confusing and potentially deceptive” (such as unlabelled toggles and a ‘reject all’ button that had no effect).

Pre-checked boxes/sliders were also found to be common, with the DPC finding ten of the 38 controllers used them — despite ‘consent’ collected like that not actually being valid consent.

“In the case of most of the controllers, consent was also ‘bundled’ — in other words, it was not possible for users to control consent to the different purposes for which cookies were being used,” the DPC also writes. “This is not permitted, as has been clarified in the Planet49 judgment. Consent does not need to be given for each cookie, but rather for each purpose. Where a cookie has more than one purpose requiring consent, it must be obtained for all of those purposes separately.”
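Translated into implementation terms, the Planet49 rule implies a consent record keyed by purpose rather than a single yes/no. A minimal sketch, assuming three illustrative purposes (the purpose names, storage and API here are all invented):

```typescript
// Illustrative per-purpose consent record — purposes, storage and API
// are all invented for this sketch.
type Purpose = "analytics" | "advertising" | "personalisation";

// Nothing is pre-checked: every purpose defaults to false (Planet49).
const consent: Record<Purpose, boolean> = {
  analytics: false,
  advertising: false,
  personalisation: false,
};

// Gate every non-essential cookie on consent for *its* purpose.
function setCookieFor(purpose: Purpose, name: string, value: string): void {
  if (!consent[purpose]) return; // no consent for this purpose, no cookie
  document.cookie = `${name}=${value}; path=/; SameSite=Lax; Secure`;
}

// Withdrawing must be as easy as consenting: one call flips a purpose off.
// (Expiring cookies already set for that purpose would also be needed.)
function withdrawConsent(purpose: Purpose): void {
  consent[purpose] = false;
}
```

One toggle per purpose, nothing bundled and nothing pre-ticked — which is exactly the granularity the judgment requires.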

In another finding, the regulator came across instances of websites that had embedded tracking technologies, such as Facebook pixels, yet their operators did not list these in responses to the survey, listing only HTTP browser cookies instead. The DPC suggests this indicates some controllers aren’t even aware of trackers baked into their own sites.

“It was not clear, therefore, whether some controllers were aware of some of the tracking elements deployed on their websites — this was particularly the case where small controllers had outsourced their website management and development to a third party,” it writes.
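That sort of blind spot is cheap to check for. A rough, heuristic audit — by no means a complete tracker detector — can at least enumerate the third-party origins a page actually pulls resources from, using only the standard Resource Timing API:

```typescript
// Rough, heuristic audit — not a complete tracker detector. Lists the
// third-party hostnames a page actually loads resources from, using the
// standard Resource Timing API. Run it in the browser console.
function listThirdPartyHosts(): string[] {
  const firstParty = location.hostname;
  const hosts = performance
    .getEntriesByType("resource")
    .map((entry) => new URL(entry.name).hostname)
    .filter((h) => h !== firstParty && !h.endsWith(`.${firstParty}`));
  return [...new Set(hosts)]; // de-duplicate
}

console.log(listThirdPartyHosts());
// A hostname like "www.facebook.com" appearing here, with no mention in
// your cookie records, points at an embedded tracker nobody documented.
```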

The worst sector of its targeted sweep — in terms of “poor practices and, in particular, poor understanding of the ePrivacy Regulations and their purpose” — was the restaurants and food-ordering sector, per the report. (Though the finding is clearly based on a small sampling across multiple sectors.)

Despite encountering near blanket failure to actually comply with the law, the DPC, which also happens to be the lead regulator for much of big tech in Europe, has responded by issuing, er, further guidance.

This includes specifics such as: pre-checked consent boxes must be removed; cookie banners can’t be designed to ‘nudge’ users to accept, and a reject option must have equal prominence; and no non-necessary cookies may be set on landing. It also stipulates there must always be a way for users to withdraw consent — and doing so should be as easy as consenting.

All of which has been clear, and increasingly so, at least since the GDPR came into application in May 2018. Nonetheless the regulator is giving the website operators in question a further six months’ grace to get their houses in order — after which it has raised the prospect of actually enforcing the EU’s ePrivacy Directive and the General Data Protection Regulation.

“Where controllers fail to voluntarily make changes to their user interfaces and/or their processing, the DPC has enforcement options available under both the ePrivacy Regulations and the GDPR and will, where necessary, examine the most appropriate enforcement options in order to bring controllers into compliance with the law,” it warns.

The report is just the latest shot across the bows of the online tracking industry in Europe.

The UK’s Information Commissioner’s Office (ICO) has been issuing sternly worded blog posts for months. Its own report last summer found illegal profiling of Internet users by the programmatic ad industry to be rampant — also giving the industry six months to reform.

However the ICO still hasn’t done anything about the adtech industry’s legal black hole — leading privacy experts to denounce the lack of any “substantive action to end the largest data breach ever recorded in the UK”, as one put it at the start of this year.

Ireland’s DPC, meanwhile, has yet to pull the trigger on decisions in multiple cross-border investigations into the data-mining business practices of tech giants including Facebook and Google, following scores of GDPR complaints — including several targeting their legal base to process people’s data.

A two-year review of the pan-EU regulation, set for May 2020, provides one hard deadline that might concentrate minds.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.

Google says the update to its T&Cs is the first major revision since 2012, and that it wanted to ensure the policy reflects its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weaselly way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future… 😬

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings exists an opt-out.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.