Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users in a wider update to its terms and conditions, announced today, which it says is intended to make its conditions of use clearer for all users.

Google says the update is the first major revision of its T&Cs since 2012, and that it wanted to ensure the terms reflect its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However, Google disputes that there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data, no change to their privacy settings and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weaselly way of saying it will do exactly what the law requires.

Google confirmed that the GDPR will continue to apply for UK users during the Brexit transition period. After that, it said, UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future… 😬

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which raises the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeting its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change bundled with the new T&Cs, Google has added a description of how its business works to the About Google page — explaining its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings an opt-out exists.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.

The law, California’s Consumer Privacy Act — or CCPA — went into effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies have on them, delete the data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.

But to say things are going well is a stretch.

Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some companies have made it downright difficult and — ironically — in some cases more invasive for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.

Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”

“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.

“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.

Despite his frustration, Davis got further than others. While some companies have made it easy for users to opt out of having their data sold by adding the legally required “Do not sell my info” links to their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but only until July, when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around the obstacles — collating and sharing links to data portals to help others access their data.

“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.

PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.

But not all data portals are created equal. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each request to access or delete a user’s data without inadvertently giving it away to the wrong person.

Last year, security researcher James Pavur impersonated his fiancée and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandates that companies operating in Europe allow users access to their data.

(Image: Twitter/@jeanqasaur)

The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some, that’s just an email address to send the data to.

Others require sending in even more sensitive information just to prove it’s them.

Indeed, i360, a little-known advertising and data company, until recently asked California residents for their full Social Security number. This recently changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step of asking for a selfie before it will turn over any of a customer’s data.

Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.

As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.

Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.

The service asks users to grant it access to their inbox, scanning email subject lines for company names and using that data to determine which companies a user can ask to hand over or delete their data. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month, during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.

(Screenshot: Zack Whittaker/TechCrunch)

TechCrunch alerted Mine — and the two requesters — to the security lapse.

“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”
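Mine hasn’t published how its matching engine works, but the Crunch/TechCrunch mix-up hints at how fragile matching company names against inbox metadata can be. Purely as illustration — the company list and helper below are hypothetical assumptions, not Mine’s code — here is a minimal Python sketch showing a naive substring check that produces exactly this kind of collision, and a whole-word match that avoids it:

```python
# Hypothetical sketch -- not Mine's engine. Illustrates why matching company
# names against email subject lines is error-prone.
import re

KNOWN_COMPANIES = ["Crunch", "TechCrunch", "Spotify"]  # illustrative list

def companies_in_subject(subject: str) -> set[str]:
    """Return known company names appearing as whole words in a subject line."""
    return {
        company
        for company in KNOWN_COMPANIES
        if re.search(rf"\b{re.escape(company)}\b", subject, re.IGNORECASE)
    }

subject = "Your TechCrunch newsletter is here"
print("crunch" in subject.lower())    # True -- naive check matches the wrong company
print(companies_in_subject(subject))  # {'TechCrunch'} -- word boundaries avoid the collision
```

Even the stricter match only resolves a name, of course; finding the right data protection office address for that name is a separate lookup — which is where Mine says its engine went wrong.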

For now, many startups have caught a break.

The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store personal data on more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.

“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.

CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.

But with the looming threat of hefty fines just months away, time is running out for the non-compliant.

Ancestry lays off 6% of staff as consumer genetic testing market continues to decline

The consumer genetic testing market continues to show signs of slowing down.

In the past two weeks layoffs have hit two of the biggest consumer genetic testing services — 23andMe and Ancestry — with the latter announcing earlier today, in a blog post, that it would slash its staff by 6%.

CNBC first reported the news.

In her blog post announcing the layoffs, Ancestry chief executive Margo Georgiadis wrote:

… over the last 18 months, we have seen a slowdown in consumer demand across the entire DNA category. The DNA market is at an inflection point now that most early adopters have entered the category. Future growth will require a continued focus on building consumer trust and innovative new offerings that deliver even greater value to people. Ancestry is well positioned to lead that innovation to inspire additional discoveries in both Family History and Health.

Today we made targeted changes to better position our business to these marketplace realities. These are difficult decisions and impact 6 percent of our workforce. Any changes that affect our people are made with the utmost care. We’ve done so in service to sharpening our focus and investment on our core Family History business and the long-term opportunity with AncestryHealth.

The move from Ancestry follows job cuts at 23andMe in late January, which saw 100 staffers lose their jobs (or roughly 14% of its workforce).

The genetic testing company Illumina has been warning of softness in the direct-to-consumer genetic testing market, as Business Insider reported last August.

“We have previously based our DTC expectations on customer forecasts, but given unanticipated market softness, we are taking an even more cautious view of the opportunity in the near-term,” the company’s chief executive Francis deSouza said in a second quarter earnings call.

Consumers seem to be waking up to the privacy concerns over how genetic tests can be used.

“You can cancel your credit card. You can’t change your DNA,” Matt Mitchell, the director of digital safety and privacy for the advocacy organization Tactical Tech, told Business Insider earlier in the year.

And privacy laws in the U.S. have not caught up with the reality of how DNA testing is being used (and could potentially be abused), according to privacy experts and legal scholars.

“In the US we have taken to protecting genetic information separately rather than using more general privacy laws, and most of the people who’ve looked at it have concluded that’s a really bad idea,” Mark Rothstein, a law professor at the University of Louisville’s Brandeis School of Law and director of its Institute for Bioethics, Health Policy and Law, told Wired in May.

The investigation into the “Golden State Killer” and the eventual arrest of Joseph James DeAngelo — thanks to DNA evidence collected from an open source genealogy site called GEDmatch — likely helped focus consumers’ thinking on the issue.

In that case a relative of DeAngelo’s had uploaded their information onto the site and investigators found a close match with DNA at the crime scene. That information was then correlated with other details to eventually center on DeAngelo as a suspect in the crimes.

While consumer genetic testing services may be struggling, investors still see increasing promise in clinical genetic testing, with companies like the publicly traded Invitae seeing its share price rally, and the privately held Color raising roughly $75 million in new capital from investors led by T. Rowe Price.

 

Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.

Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.

A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.

At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.

Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.

The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.

Instead the recommendation is for “limited and discriminate retention” — along with “limited access to that data”.

“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.

The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.

Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).

The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.

There’s a slight carve-out for national legislation that’s incompatible with EU law being, in Sánchez-Bordona’s view, permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.

If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.

Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.

Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.

In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”

The CJEU will issue its ruling at a later date — typically three to six months after an AG opinion.

The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.

“It is worth noting that, under Article 4(2) of the Treaty on the European Union, ‘national security remains the sole responsibility of each Member State’. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out.

“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes. This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”

Cookie consent tools are being used to undermine EU privacy rules, study suggests

Most cookie consent pop-ups served to Internet users in the European Union — ostensibly seeking permission to track people’s web activity — are likely to be flouting regional privacy laws, a new study by researchers at MIT, UCL and Aarhus University suggests.

“The results of our empirical survey of CMPs [consent management platforms] today illustrates the extent to which illegal practices prevail, with vendors of CMPs turning a blind eye to — or worse, incentivising — clearly illegal configurations of their systems,” the researchers argue, adding that: “Enforcement in this area is sorely lacking.”

Their findings, published in a paper entitled Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence, chime with another piece of research we covered back in August — which also concluded a majority of the current implementations of cookie notices offer no meaningful choice to Europe’s Internet users — even though EU law requires one.

When consent is being relied upon as the legal basis for processing web users’ personal data, the bar for valid (i.e. legal) consent that’s set by the EU’s General Data Protection Regulation (GDPR) is clear: It must be informed, specific and freely given.

Recent jurisprudence by the Court of Justice of the European Union has further crystallized the law around cookies, making it clear that consent must be actively signalled — meaning a digital service cannot infer consent to tracking from indirect actions (such as the pop-up being closed by the user without a response, or ignored in favor of interacting with the service).

Many websites use a so-called CMP to solicit consent to tracking cookies. But if it’s configured to contain pre-ticked boxes that opt users into sharing data by default — requiring an affirmative user action to opt out — any gathered ‘consent’ also isn’t legal.

Consent to tracking must also be obtained prior to a digital service dropping or accessing a cookie; only service-essential cookies can be deployed without asking first.

All of which means — per EU law — it should be equally easy for website visitors to choose not to be tracked as to agree to their personal data being processed.

However, the Dark Patterns after the GDPR study found that’s very far from the case right now.

“We found that dark patterns and implied consent are ubiquitous,” the researchers write in summary, saying that only slightly more than one in ten (11.8%) of the CMPs they looked at “meet the minimal requirements that we set based on European law” — which they define as being “if it has no optional boxes pre-ticked, if rejection is as easy as acceptance, and if consent is explicit”.

For the study, the researchers scraped the top 10,000 UK websites, as ranked by Alexa, to gather data on the most prevalent CMPs in the market — which are made by five companies: QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak — and analyzed how the design and configurations of these tools affected Internet users’ choices. (They obtained a data set of 680 CMP instances via their method — a sample they calculate is representative of at least 57% of the total population of the top 10k sites that run a CMP, given prior research found only around a fifth do so.)
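The paper details its own scraping pipeline; purely as a rough illustration of the approach, a survey like this might fetch each homepage and look for script URLs associated with each CMP vendor. The signatures below are illustrative assumptions, not the study’s actual detection rules — and a static fetch would miss CMPs injected dynamically, one reason such studies typically drive a real browser:

```python
# Rough sketch of CMP detection for a web survey. The script-URL signatures
# are illustrative assumptions, not the paper's detection rules.
import re
import requests

CMP_SIGNATURES = {
    "QuantCast": r"quantcast\.mgr\.consensu\.org",
    "OneTrust": r"cdn\.cookielaw\.org",
    "TrustArc": r"consent\.trustarc\.com",
    "Cookiebot": r"consent\.cookiebot\.com",
    "Crownpeak": r"evidon",
}

def detect_cmp(url: str) -> str | None:
    """Fetch a page and return the first vendor whose signature appears in its HTML."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return None
    for vendor, pattern in CMP_SIGNATURES.items():
        if re.search(pattern, html, re.IGNORECASE):
            return vendor
    return None

# Tally CMP prevalence across a crawl list (stand-in URLs here):
counts: dict[str, int] = {}
for site in ["https://example.com", "https://example.org"]:
    if (vendor := detect_cmp(site)):
        counts[vendor] = counts.get(vendor, 0) + 1
print(counts)
```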

Implicit consent — aka (illegally) inferring consent via non-affirmative user actions (such as the user visiting or scrolling on the website or a failure to respond to a consent pop-up or closing it without a response) — was found to be common (32.5%) among the studied sites.

“Popular CMP implementation wizards still allow their clients to choose implied consent, even when they have already indicated the CMP should check whether the visitor’s IP is within the geographical scope of the EU, which should be mutually exclusive,” they note, arguing that: “This raises significant questions over adherence with the concept of data protection by design in the GDPR.”

They also found that the vast majority of CMPs make rejecting all tracking “substantially more difficult than accepting it” — with a majority (50.1%) of studied sites not having a ‘reject all’ button, and only a tiny minority (12.6%) of sites offering a ‘reject all’ button accessible in the same number of clicks or fewer as an ‘accept all’ button.

Or, to put it another way, ‘Ohhai dark pattern design’…

“An ‘accept all’ button was never buried in a second layer,” the researchers go on to point out, also finding that “74.3% of reject all buttons were one layer deep, requiring two clicks to press; 0.9% of them were two layers away, requiring at minimum three.”

Pre-ticked boxes were found to be widely deployed in the studied CMPs as well — despite such a setting not being legally valid. (On this they found: “56.2% of sites pre-ticked optional vendors or purposes/categories, with 54.1% of sites pre-ticking optional purposes, 32.3% pre-ticking optional categories, and 30.3% pre-ticking both”.)

They also point out that the high number of third-party trackers routinely being used by sites poses a major problem for the EU consent model — given it requires a “prohibitively long time” for users to become clearly informed enough to be able to legally consent.

The exact number of third party trackers they found being packed like sardines into CMPs varied — with between tens and several hundreds in play depending on the site.

Fifty-eight was the lowest number they encountered, while the highest was 542 vendors — on an implementation of QuantCast’s CMP. (And, well, just imagine the ‘friction’ involved in manually unticking all those, assuming that was one of the sites that also lacked a ‘reject all’ button… )

Sites relied on a large number of third party trackers, which would take a prohibitively long time for users to inform themselves about clearly. Out of the 85.4% of sites that did list vendors (e.g. third party trackers) within the CMP, there was a median number of 315 vendors (low. quartile 58, upp. quartile 542). Different CMP vendors have different average numbers of vendors, with the highest being QuantCast at 542… 75% of sites had over 58 vendors. 76.47% of sites provide some descriptions of their vendors. The mean total length of these descriptions per site is 7,985 words: roughly 31.9 minutes of reading for the average 250 words-per-minute reader, not counting interaction time to e.g. unfold collapsed boxes or navigating to and reading specific privacy policies of a vendor.
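The reading-time arithmetic quoted there is easy to verify:

```python
# Sanity-checking the paper's reading-time figure quoted above.
words_per_site = 7985      # mean total length of vendor descriptions per site
words_per_minute = 250     # the paper's assumed reading speed
print(words_per_site / words_per_minute)  # 31.94 -- i.e. roughly 31.9 minutes
```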

A second part of the research involved a field experiment involving 40 participants to investigate how the eight most common CMP designs affect Internet users’ consent choices.

“We found that notification style (banner or barrier) has no effect [on consent choice]; removing the opt-out button from the first page increases consent by 22–23 percentage points; and providing more granular controls on the first page decreases consent by 8–20 percentage points,” they write in summary on that.

They argue this portion of the study supports the notion that two of the most common consent interface designs – “not showing a ‘reject all’ button on the first page; and showing bulk options before showing granular control” – make it more likely for users to provide consent, thereby “violating the [GDPR] principle of “freely given””.

They also make reference to “qualitative reflections” of the participants in the paper — obtained via a survey after individuals’ consent choices had been registered during the field study — suggesting these responses “put into question the entire notice-and-consent model not because of specific design decisions but merely because an action is required before the user can accomplish their main task and because they appear too frequently if they are shown on a website-by-website basis”.

So, in other words, just the fact of interrupting a web user to ask them to make a choice may itself apply substantial enough pressure that it might render any resulting ‘consent’ invalid.

The study’s finding of the prevalence of manipulative designs and configurations intended to nudge or even force consent suggests Internet users in Europe are not actually benefiting from a legal framework that’s supposed to protect their digital data from unwanted exploitation — and are instead being subjected to a lot of noisy, distracting and disingenuous ‘consent theatre’.

Cookie notices not only generate friction and frustration for the average Internet user, as they try to go about their daily business online, but the current situation is creating a faux veneer of compliance — atop what is actually a massive trampling of rights via what amounts to digital daylight robbery of people’s data at scale.

The problem here is that EU regulators have for years looked the other way where online tracking is concerned, failing entirely to enforce the on-paper standard.

Enforcement is indeed sorely lacking, as the researchers note. (Industry lobbying/political pressure, limited resources, risk aversion and regulatory capture, and a legacy of inaction around digital rights are all likely to blame.)

And while the GDPR only started being applied in May 2018, Europe has had regulations on data-gathering mechanisms like cookies for nearly two decades — with the paper pointing out that an amendment to the ePrivacy Directive all the way back in 2002 made it a requirement that “storing or accessing information on a user’s device not ‘strictly necessary’ for providing an explicitly requested service requires both clear and comprehensive information and opt-in consent”.

Asked about the research findings, lead author Midas Nouwens questioned why CMP vendors are selling so-called ‘compliance’ tools that allow for non-compliant configurations in the first place.

“It’s sad, but I don’t think anyone is surprised anymore by how few pop-ups comply with the GDPR,” he told TechCrunch. “What is shocking is how non-compliant interface designs are allowed by the companies that provide consent pop-ups. Why do they let their clients count scrolling as consent or bury the decline button somewhere on the third page?”

“Enforcement is really the next big challenge if we don’t want the GDPR to go down the same path as the ePrivacy directive,” he added. “Since enforcement agencies have limited resources, focusing on the popular consent pop-up providers could be a much more effective strategy than targeting individual websites.

“Unfortunately, while we wait for enforcement, the dark patterns in these pop-ups are still manipulating people into being tracked.”

Another of the researchers behind the paper, Michael Veale, a lecturer in digital rights and regulation at UCL, also expressed shock that CMP vendors are allowing their tools to be configured in ways which are clearly intended to manipulate Internet users — thereby flouting the law.

In the paper the researchers urge regulators to take a smarter approach to tackling such widespread violation, such as by making use of automated tools “to expedite discovery and enforcement” of non-compliant cookie notices, and suggest they work “further upstream” — such as by placing requirements on the vendors of CMPs “to only allow compliant designs to be placed on the market”.

“It’s shocking to see how many of the large providers of consent pop-ups allow their systems to be misconfigured, such as through implicit consent, in ways that clearly infringe data protection law,” Veale told us, adding: “I suspect data protection authorities see this widespread illegality and are not sure exactly where to start. Yet if they do not start enforcing these guidelines, it’s unclear when this widespread illegality will start to stop.”

“This study even overestimates compliance, as we don’t focus on what actually happens to the tracking when you click on these buttons, which other recent studies have emphasised in many cases mislead individuals and do nothing at all,” he also pointed out.

We reached out to the UK’s data protection watchdog, the ICO, for a response to the research — and a spokeswoman pointed us to this cookie advice blog post it published last year, saying the advice it contains “still stands”.

In the blog Ali Shah, the ICO’s head of technology policy, suggests there could be some (albeit limited) action from the regulator this year to clean up cookie consent, with Shah writing that: “Cookie compliance will be an increasing regulatory priority for the ICO in the future. However, as is the case with all our powers, any future action would be proportionate and risk-based.”

While European citizens wait for data protection regulators to take meaningful action over systematic breaches of the GDPR — including those attached to consent-less tracking of web users — there is one step European web users can take to shrink the pain of cookie consent pop-ups: The researchers behind the study have built an open source browser extension that can automatically answer pop-ups based on user-customizable preferences.

It’s called Consent-o-Matic — and there are versions available for Firefox and Chrome.

At release the tool can automatically respond to cookie banners built by the five big CMP suppliers (QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak).
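Consent-o-Matic itself is a browser extension written in JavaScript, with hand-maintained rules describing each CMP’s markup. Just to illustrate the auto-answer idea — the CSS selectors here are made-up stand-ins for those per-CMP rules, not the extension’s actual ones — a browser-automation sketch in Python might look like this:

```python
# Illustrative only -- not Consent-o-Matic's code. Loads a page and clicks
# the first element matching a (hypothetical) 'reject all' rule.
from selenium import webdriver
from selenium.webdriver.common.by import By

REJECT_SELECTORS = [
    "#reject-all",                       # hypothetical element id
    "button[aria-label='Reject all']",   # hypothetical accessible label
    "button.cmp-deny-all",               # hypothetical class name
]

def decline_tracking(url: str) -> bool:
    """Visit a URL and click the first recognisable 'reject all' control."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        for selector in REJECT_SELECTORS:
            matches = driver.find_elements(By.CSS_SELECTOR, selector)
            if matches:
                matches[0].click()
                return True
        return False  # no reject control found -- the dark-pattern case
    finally:
        driver.quit()

if __name__ == "__main__":
    print(decline_tracking("https://example.com"))
```

A real extension runs inside the page rather than driving a separate browser, which is what lets it answer banners as they appear rather than after the fact.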

But since it’s open source, the hope is others will build on it to expand the types of pop-ups it’s able to auto-respond to. In the absence of a legally enforced ‘Do Not Track’ browser standard, this is about as good as it gets for Internet users desperately seeking easier agency over the online tracking industry.

In a Twitter thread last month announcing the tool, Nouwens described the project as making use of “adversarial interoperability” as a pro-privacy tactic.

“Automating consent and privacy preferences is not new (DNT and P3P), but this project uses adversarial interoperability, rather than rely on industry self-regulation or buy-in from fundamentally opposed stakeholders (browsers, advertisers, publishers),” he observed.

However he added one caveat, reminding users to be on their guard for further non-compliance from the data suckers — pointing to the earlier research paper also flagged by Veale which found a small portion of sites (~7%) entirely ignore responses to cookie pop-ups and track users regardless of response.

So sometimes even a seamlessly automated ‘no’ to tracking might still sum to being tracked…

Adtech told to keep calm and fix its ‘lawfulness’ problem

Six months after warning that the real-time bidding (RTB) component of programmatic online advertising is wildly out of control — i.e. in a breaking the law sense — the UK’s data protection watchdog has marked half a year’s regulatory inaction with a blog post that entreats the adtech industry to come up with a solution to an “industry problem”. 

Casual readers of the ICO’s pre-Christmas message for European law-flouting adtech might be forgiven for thinking it looks a lot like the regulator telling the industry to ‘keep calm and carry on regulating yourselves’.

More informed readers, who understand that RTB is a process which (currently) entails systematic, privacy-eviscerating high velocity trading of people’s personal data for the purpose of targeting them with ads, might feel moved to point out that self-regulation is a core part of why adtech is in the abject mess it’s in.

Ergo, a data protection regulator calling for more of the same systemic failure does look rather, uh, uninspiring.

In the mildly worded blog post, Simon McDougall, the ICO’s executive director for technology and innovation — who does not appear to work anywhere near an enforcement department — includes such grand suggestions for adtech law-breakers as: “keep engaging with your trade associations”.

You’ll have to forgive us for not being overly convinced such a step will lead to any paradigm tilts to privacy — or “solutions that combine innovation and privacy”, as McDougall puts it — given episodes like this.

Another of the big ideas he has for the industry to get with the legal program is to suggest people working in adtech “challenge” senior management to “review their approach”.

Now we know employee activism is rather in vogue right now — at least at certain monopolistic tech giants who’ve scaled so big, and employ such large armies of lawyers, they’re essentially immune to moral and societal operational norms — but we’re not sure it’s the greatest look for the UK’s data watchdog to be encouraging adtech professionals to put their own jobs on the line instead of, y’know, doing its job and enforcing the law.

It’s possible that McDougall, a relatively recent recruit to the regulator, may not yet know it from his perch in the “technology and innovation” unit, but the ICO does have a powerful toolbox at its disposal these days. Including the ability, under the pan-EU General Data Protection Regulation framework, to levy fines of up to 4% of global turnover on entities it finds seriously violating the law.

It can also order a stop to law-violating data processing. And what better way to end the mass-scale privacy violations attached to programmatic advertising than by ordering personal data be stripped out of RTB requests, you might wonder?

It wouldn’t mean an end to being able to target ads online. Contextual targeting doesn’t require personal data — and has been used successfully by the likes of non-tracking search engine DuckDuckGo for years (and profitably so). It would just mean an end to the really creepy, stalkerish stuff. The stuff consumers hate — which also serves up horribly damaging societal effects, given that the mass profiling of Internet users enables push-button discrimination and exploitation of the vulnerable at vast scale.

Microtargeted ads are also, as we now know all too well, a pre-greased electronic conduit for attacks on democracy and society — enabling the spread of malicious disinformation.

The societal stakes couldn’t be higher. Yet the ICO appears content to keep calm and let the adtech industry carry on — no enforcement, just biannual reminders of “concerns” about “lawfulness”.

To wit: “We have significant concerns about the lawfulness of the processing of special category data which we’ve seen in the industry, and the lack of explicit consent for that processing,” as McDougall admits in the post.

“We also have concerns about whether reliance on contractual clauses to justify onward data sharing is sufficient to comply with the law. We have not seen case studies that appear to adequately justify this.”

Set tone to: ‘Oopsy’.

The title of the ICO’s blog post — Adtech and the data protection debate – where next? — also incorporates contradictory framing as if to imply there is “debate” as to whether the industry needs to comply with data protection law. (Given the ICO’s own findings of “concern” that framing is itself concerning.)

So what can the adtech industry expect the ICO to actually do if it continues to fail to embed a “privacy by design approach in its use of RTB” (another of the blog post’s big suggestions) — and therefore keeps on, er, breaking the law?

Well, the ICO plans to make like a sponge over the “coming weeks”, per McDougall, who says it will spend time “absorbing all the information gathered and the rich conversations we’ve had throughout the year” and then shift into first gear — where it will be “evaluating all of the options available to us”.

No rush, eh.

A “further update” will then be put out in “early 2020” which will set out the ICO’s position — third time lucky perhaps?!

This update, we are informed, will also include “any action we’re taking”. So possibly still nothing, then.

“The future of RTB is both in the balance and in the hands of all the organisations involved,” McDougall writes — as if regulatory enforcement requires industry buy in.

UK taxpayers should be forgiven for wondering what exactly their data protection regulator is for at this point. Hopefully they’ll find out in a few months’ time.

European parliament’s NationBuilder contract under investigation by data regulator

Europe’s lead data regulator has issued its first ever sanction of an EU institution — taking enforcement action against the European parliament over its use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections.

NationBuilder is a veteran of the digital campaign space — indeed, we first covered the company back in 2011 — and has become nearly ubiquitous for digital campaigns in some markets.

But in recent years European privacy regulators have raised questions over whether all its data processing activities comply with regional data protection rules, responding to growing concern around election integrity and data-fuelled online manipulation of voters.

The European parliament had used NationBuilder as a data processor for a public engagement campaign to promote voting in the spring election, which was run via a website called thistimeimvoting.eu.

The website collected personal data from more than 329,000 people interested in the EU election campaign — data that was processed on behalf of the parliament by NationBuilder.

The European Data Protection Supervisor (EDPS), which started an investigation in February 2019, acting on its own initiative — and “taking into account previous controversy surrounding this company,” as its press release puts it — found the parliament had contravened regulations governing how EU institutions can use personal data, in relation to the selection and approval of sub-processors used by NationBuilder.

The sub-processors in question are not named. (We’ve asked for more details.)

The parliament received a second reprimand from the EDPS after it failed to publish a compliant privacy policy for the thistimeimvoting website within the deadline set by the EDPS — although the regulator says the parliament acted in line with its recommendations in the case of both sanctions.

The EDPS also has an ongoing investigation into whether the Parliament’s use of the voter mobilization website, and related processing operations of personal data, were in accordance with rules applicable to EU institutions (as set out in Regulation (EU) 2018/1725).

The enforcement actions had not been made public until a hearing earlier this week — when assistant data protection supervisor, Wojciech Wiewiórowski, mentioned the matter during a Q&A session in front of MEPs.

He referred to the investigation as “one of the most important cases we did this year”, without naming the data processor. “Parliament was not able to create the real auditing actions at the processor,” he told MEPs. “Neither control the way the contract has been done.”

“Fortunately nothing bad happened with the data but we had to make this contract terminated the data being erased,” he added.

When TechCrunch asked the EDPS for more details about this case on Tuesday a spokesperson told us the matter is “still ongoing” and “being finalized” and that it would communicate about it soon.

Today’s press release looks to be the upshot.

Providing canned commentary in the release, Wiewiórowski writes:

The EU parliamentary elections came in the wake of a series of electoral controversies, both within the EU Member States and abroad, which centred on the threat posed by online manipulation. Strong data protection rules are essential for democracy, especially in the digital age. They help to foster trust in our institutions and the democratic process, through promoting the responsible use of personal data and respect for individual rights. With this in mind, starting in February 2019, the EDPS acted proactively and decisively in the interest of all individuals in the EU to ensure that the European Parliament upholds the highest of standards when collecting and using personal data. It has been encouraging to see a good level of cooperation developing between the EDPS and the European Parliament over the course of this investigation.

One question that arises is why no firmer sanction has been issued to the European parliament — beyond a (now public) reprimand, some nine months after the investigation began.

Another question is why the matter was not more transparently communicated to EU citizens.

The EDPS’ PR emphasizes that its actions “are not limited to reprimands”, without explaining why the two enforcements thus far didn’t merit tougher action. (At the time of writing the EDPS had not responded to questions about why no fines have so far been issued.)

There may be more to come, though.

The regulator says it will “continue to check the parliament’s data protection processes” — revealing that the European Parliament has finished informing individuals of a revised intention to retain personal data collected by the thistimeimvoting website until 2024.

“The outcome of these checks could lead to additional findings,” it warns, adding that it intends to finalise the investigation by the end of this year.

Asked about the case, a spokeswoman for the European parliament told us that the thistimeimvoting campaign had been intended to motivate EU citizens to participate in the democratic process, and that it used a mix of digital tools and traditional campaigning techniques in order to try to reach as many potential voters as possible. 

She said NationBuilder had been used as a customer relations management platform to support staying in touch with potential voters — via an offer to interested citizens to sign up to receive information from the parliament about the elections (including events and general info).

Subscribers were also asked about their interests — which allowed the parliament to send personalized information to people who had signed up.

Some of the regulatory concerns around NationBuilder have centered on how it allows campaigns to match data held in their databases (from people who have signed up) with social media data that’s publicly available, such as an unlocked Twitter account or public Facebook profile.

TechCrunch understands the European parliament was not using this feature.

In 2017 in France, after an intervention by the national data watchdog, NationBuilder suspended the data matching tool in that market.

The same feature has attracted attention from the UK’s Information Commissioner — which warned last year that political parties should be providing a privacy notice to individuals whose data is collected from public sources such as social media and matched. Yet they aren’t.

“The ICO is concerned about political parties using this functionality without adequate information being provided to the people affected,” the ICO said in the report, while stopping short of ordering a ban on the use of the matching feature.

Its investigation confirmed that up to 200 political parties or campaign groups used NationBuilder during the 2017 UK general election.

Another US court says police cannot force suspects to turn over their passwords

The highest court in Pennsylvania has ruled that the state’s law enforcement cannot force suspects to turn over their passwords that would unlock their devices.

The state’s Supreme Court said compelling a password from a suspect is a violation of the Fifth Amendment, the constitutional protection against self-incrimination.

It’s not a surprising ruling, given other state and federal courts have almost always come to the same conclusion. The Fifth Amendment grants anyone in the U.S. the right to remain silent, which includes the right not to turn over information that could incriminate them in a crime. These days, those protections extend to the passcodes that only a device owner knows.

But the ruling is not expected to affect the ability of police to force suspects to use their biometrics — like their face or fingerprints — to unlock their phone or computer.

Because your passcode is stored in your head and your biometrics are not, prosecutors have long argued that police can compel a suspect into unlocking a device with their biometrics, which they say are not constitutionally protected. The court also did not address biometrics. In a footnote of the ruling, the court said it “need not address” the issue, blaming the U.S. Supreme Court for creating “the dichotomy between physical and mental communication.”

Peter Goldberger, president of the ACLU of Pennsylvania, who presented the arguments before the court, said it was “fundamental” that suspects have the right “to avoid self-incrimination.”

Despite the spate of rulings in recent years, law enforcement have still tried to find their way around compelling passwords from suspects. The now-infamous Apple-FBI case saw the federal agency try to force the tech giant to rewrite its iPhone software in an effort to beat the password on the handset of the terrorist Syed Rizwan Farook, who with his wife killed 14 people in his San Bernardino workplace in 2015. Apple said the FBI’s use of the 200-year-old All Writs Act would be “unduly burdensome” by putting potentially every other iPhone at risk if the rewritten software leaked or was stolen.

The FBI eventually dropped the case without Apple’s help after the agency paid hackers to break into the phone.

Brett Max Kaufman, a senior staff attorney at the ACLU’s Center for Democracy, said the Pennsylvania case ruling sends a message to other courts to follow in its footsteps.

“The court rightly rejects the government’s effort to create a giant, digital-age loophole undermining our time-tested Fifth Amendment right against self-incrimination,” he said. “The government has never been permitted to force a person to assist in their own prosecution, and the courts should not start permitting it to do so now simply because encrypted passwords have replaced the combination lock.”

“We applaud the court’s decision and look forward to more courts to follow in the many pending cases to be decided next,” he added.

A US federal court finds suspicionless searches of phones at the border are illegal

A federal court in Boston has ruled that the government is not allowed to search travelers’ phones and devices at the U.S. border without first having reasonable suspicion of a crime.

That’s a significant victory for civil liberties advocates who have said that the government’s own rules that allow its border agents to search electronic devices at the border are unconstitutional.

The court said that the government’s policies allowing warrantless device searches without reasonable suspicion “violate the Fourth Amendment,” which provides constitutional protections against unreasonable searches and seizures.

The case was brought by 11 travelers — ten of whom are U.S. citizens — with support from the American Civil Liberties Union and the Electronic Frontier Foundation. The travelers said border agents searched their smartphones and laptops without a warrant or any suspicion of wrongdoing or criminal activity, and argued that the government was overreaching its powers.

The border remains a bizarre legal space, where the government asserts powers that it cannot claim against citizens or residents within the United States. The government has long said it doesn’t need a warrant to search devices at the border.

Any data collected by Customs & Border Protection without a warrant can still be shared with federal, state, local and foreign law enforcement.

Esha Bhandari, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said the ruling “significantly advances” protections under the Fourth Amendment.

“This is a great day for travelers who now can cross the international border without fear that the government will, in the absence of any suspicion, ransack the extraordinarily sensitive information we all carry in our electronic devices,” said Sophia Cope, a senior staff attorney at the EFF.

Millions of travelers arrive in the U.S. every day. Last year, border officials searched 33,000 travelers’ devices — a fourfold increase since 2015 — without any need for reasonable suspicion. In recent months, travelers have been told to inform the government of any social media handles they have, all of which are subject to inspection. Some have even been denied entry to the U.S. over content on their phones shared by other people.

Earlier this year, a federal appeals court found that the practice of traffic enforcement officers using chalk to mark car tires was unconstitutional.

A spokesperson for Customs & Border Protection did not immediately comment.

EU-US Privacy Shield passes third Commission ‘health check’ — but litigation looms

The third annual review of the EU-US Privacy Shield data transfer mechanism has once again been nodded through by Europe’s executive.

This despite the EU parliament calling last year for the mechanism to be suspended.

The European Commission also issued US counterparts with a compliance deadline last December — saying the US must appoint a permanent ombudsperson to handle EU citizens’ complaints, as required by the arrangement, and do so by February.

This summer the US Senate finally confirmed Keith Krach — under secretary of state for economic growth, energy, and the environment — in the ombudsperson role.

The Privacy Shield arrangement was struck between EU and US negotiators back in 2016 — as a rushed replacement for the prior Safe Harbor data transfer pact which in fall 2015 was struck down by Europe’s top court following a legal challenge after NSA whistleblower Edward Snowden revealed US government agencies were liberally helping themselves to digital data from Internet companies.

At heart is a fundamental legal clash between EU privacy rights and US national security priorities.

The intent of the Privacy Shield framework is to paper over those cracks by devising enough checks and balances that the Commission can claim it offers adequate protection for EU citizens’ personal data when taken to the US for processing, despite the lack of a commensurate, comprehensive data protection regime there. But critics have argued from the start that the mechanism is flawed.

Even so, around 5,000 companies are now signed up to use Privacy Shield to certify transfers of personal data. So there would be major disruption to businesses were it to go the way of its predecessor — as has looked likely in recent years, since Donald Trump took office as US president.

The Commission remains a staunch defender of Privacy Shield, warts and all, preferring to support data-sharing business as usual rather than offer a pro-active defence of EU citizens’ privacy rights.

To date it has offered little in the way of objection about how the US has implemented Privacy Shield in these annual reviews, despite some glaring flaws and failures (for example the disgraced political data firm, Cambridge Analytica, was a signatory of the framework, even after the data misuse scandal blew up).

The Commission did lay down one deadline late last year, regarding the ongoing lack of a permanent ombudsperson. So it can now check that box.

It also notes approvingly today that the final two vacancies on the US’ Privacy and Civil Liberties Oversight Board have been filled, meaning it’s fully staffed for the first time since 2016.

Commenting in a statement, commissioner for justice, consumers and gender equality, Věra Jourová, added: “With around 5,000 participating companies, the Privacy Shield has become a success story. The annual review is an important health check for its functioning. We will continue the digital diplomacy dialogue with our U.S. counterparts to make the Shield stronger, including when it comes to oversight, enforcement and, in a longer-term, to increase convergence of our systems.”

Its press release characterizes US enforcement action related to the Privacy Shield as having “improved” — citing the Federal Trade Commission taking enforcement action in a grand total of seven cases.

It also says vaguely that “an increasing number” of EU individuals are making use of their rights under the Privacy Shield, claiming the relevant redress mechanisms are “functioning well”. (Critics have long suggested the opposite.)

The Commission is also recommending further improvements, including that the US expand compliance checks, such as those concerning false claims of participation in the framework.

So presumably there’s a bunch of entirely fake compliance claims going unchecked, as well as actual compliance going under-checked…

“The Commission also expects the Federal Trade Commission to further step up its investigations into compliance with substantive requirements of the Privacy Shield and provide the Commission and the EU data protection authorities with information on ongoing investigations,” the EC adds.

All these annual Commission reviews are just fiddling around the edges, though. The real substantive test for Privacy Shield, the one that will determine its long-term survival, is looming on the horizon: a judgement expected from Europe’s top court next year.

In July a hearing took place on a key case that’s been dubbed Schrems II. This legal challenge initially targeted Facebook’s use of another EU data transfer mechanism, but it has been broadened to include a series of legal questions over Privacy Shield and is now before the Court of Justice of the European Union.

There is also a separate litigation directly targeting Privacy Shield that was brought by a French digital rights group which argues it’s incompatible with EU law on account of US government mass surveillance practices.

The Commission’s PR notes the pending litigation — writing that this “may also have an impact on the Privacy Shield”. “A hearing took place in July 2019 in case C-311/18 (Schrems II) and, once the Court’s judgement is issued, the Commission will assess its consequences for the Privacy Shield,” it adds.

So, tl;dr, today’s third annual review doesn’t mean Privacy Shield is out of the legal woods.