Big tech has two elephants in the room: privacy and competition

The question of how policymakers should respond to the power of big tech didn’t get a great deal of airtime at TechCrunch Disrupt last week, despite a number of investigations now underway in the United States (hi, Google).

It’s also clear that attention- and data-monopolizing platforms compel many startups to use their comparatively slender resources to find ways to compete with the giants — or hope to be acquired by them.

But there’s a clear nervousness among even well-established tech firms about discussing this topic, given how much their profits rely on frictionless access to the users of the very gatekeepers in question.

Dropbox founder and CEO Drew Houston evinced this dilemma when TechCrunch Editor-in-Chief Matthew Panzarino asked him if Apple’s control of the iOS App Store should be “reexamined” by regulators or whether it’s just legit competition.

“I think it’s an important conversation on a bunch of dimensions,” said Houston, before offering a circular and scrupulously balanced reply in which he mentioned the “ton of opportunity” app stores have unlocked for third-party developers, checking off some of Apple’s preferred talking points like “being able to trust your device” and the distribution the App Store affords startups.

“They also are a huge competitive advantage,” Houston added. “And so I think the question of … how do we make sure that there’s still a level playing field and so that owning an app store isn’t too much of an advantage? I don’t know where it’s all going to end up. I do think it’s an important conversation to be had.”

Rep. Zoe Lofgren (D-CA) said the question of whether large tech companies are too powerful needs to be reframed.

“Big per se is not bad,” she told TC’s Zack Whittaker. “We need to focus on whether competitors and consumers are being harmed. And, if that’s the case, what are the remedies?”

In recent years, U.S. lawmakers have advanced their understanding of digital business models — making great strides since Facebook’s Mark Zuckerberg answered a question two years ago about how his platform makes money: “Senator, we run ads.”

A House antitrust subcommittee hearing in July 2020 saw the CEOs of Google, Facebook, Amazon and Apple answer awkward questions, and achieved a greater level of detail than the big tech hearings of 2018.

Nonetheless, there still seems to be a lack of consensus among lawmakers over how exactly to grapple with big tech, even though the issue elicits bipartisan concern — as was plainly in view during a Senate Judiciary Committee interrogation of Google’s ad business earlier this month.

On stage, Lofgren demonstrated some of this tension by discouraging what she called “bulky” and “lengthy” antitrust investigations, making a general statement in favor of “innovation” and suggesting a harder push for overarching privacy legislation. She also advocated at length for inalienable privacy rights for U.S. citizens, so that platforms can’t circumvent the rules using their big data holdings and dark-pattern design.

Ireland’s data watchdog slammed for letting adtech carry on ‘biggest breach of all time’

A dossier of evidence detailing how the online ad targeting industry profiles Internet users’ intimate characteristics without their knowledge or consent has been published today by the Irish Council for Civil Liberties (ICCL), piling more pressure on the country’s data watchdog to take enforcement action over what complainants contend is the “biggest data breach of all time”.

The publication follows a now two-year-old complaint lodged with Ireland’s Data Protection Commission (DPC) claiming unlawful exploitation of personal data via the programmatic advertising Real-Time Bidding (RTB) process — including dominant RTB systems devised by Google and the Internet Advertising Bureau (IAB).

The Irish DPC opened an investigation into Google’s online Ad Exchange in May 2019, following a complaint filed by Dr Johnny Ryan (then at Brave, now a senior fellow at the ICCL) in September 2018 — but two years on that complaint, like so many major cross-border GDPR cases, remains unresolved.

And, indeed, multiple RTB complaints have been filed with regulators across the EU but none have yet been resolved. It’s a major black mark against the bloc’s flagship data protection framework.

“September 2020 marks two years since my formal complaint to the Irish Data Protection Commission about the ‘Real-Time Bidding’ data breach. This submission demonstrates the consequences of two years of failure to enforce,” writes Ryan in the report.

Among hair-raising highlights in the ICCL dossier are that:

  • Google’s RTB system sends data to 968 companies;
  • that a data broker company which uses RTB data to profile people influenced the 2019 Polish Parliamentary Election by targeting LGBTQ+ people; 
  • that a profile built by a data broker with RTB data allows users of Google’s system to target 1,200 people in Ireland profiled in a “Substance abuse” category, with other health-condition profiles from the same data broker, reported to be available via Google, including “Diabetes”, “Chronic Pain” and “Sleep Disorders”;
  • that the IAB’s RTB system allows users to target 1,300 people in Ireland profiled in an “AIDS & HIV” category, based on a data broker profile built with RTB data, while other categories from the same data broker include “Incest & Abuse Support”, “Brain Tumor”, “Incontinence” and “Depression”;
  • that a data broker that gathers RTB data tracked the movements of people in Italy to see if they observed the Covid-19 lockdown;
  • that a data broker that illicitly profiled Black Lives Matter protesters in the US has also been allowed to gather RTB data about Europeans;
  • that the industry template for profiles includes intimate personal characteristics such as “Infertility”, “STD” and “Conservative” politics.

Under EU data protection law, personal information that relates to highly sensitive and intimate topics — such as health, sexuality and politics — is what’s known as special category personal data. Processing this type of information generally requires explicit consent from users — with only very narrow exceptions, such as for protecting the vital interests of the data subjects (and serving behavioral ads clearly wouldn’t meet such a bar).

So it’s hard to see how the current practices of the targeted ad industry can possibly be compliant with EU law, in spite of the massive scale on which Internet users’ data is being processed.

In the report, the ICCL estimates that just three ad exchanges (OpenX, IndexExchange and PubMatic) have made around 113.9 trillion RTB broadcasts in the past year.

“Google’s RTB system now sends people’s private data to more companies, and from more websites than when the DPC was notified two years ago,” it writes. “A single ad exchange using the IAB RTB system now sends 120 billion RTB broadcasts in a day, an increase of 140% over two years ago when the DPC was notified.”

“Real-Time Bidding operates behind the scenes on websites and apps. It constantly broadcasts the private things we do and watch online, and where we are in the real-world, to countless companies. As a result, we are all an open book to data broker companies, and others, who can build intimate dossiers about each of us,” it adds. 
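
For readers unfamiliar with the mechanism, here is a heavily simplified sketch of the kind of bid-request payload an RTB exchange broadcasts to bidders. It is loosely modeled on the openly published OpenRTB format, but every field and value below is invented for illustration; real payloads are larger and vary by exchange.

```python
# Illustrative sketch of an RTB bid request, loosely modeled on OpenRTB.
# All field names and values here are invented for illustration only.
bid_request = {
    "id": "a1b2c3",                      # unique auction ID
    "device": {
        "ip": "203.0.113.7",             # IP address (can geolocate the user)
        "geo": {"lat": 53.35, "lon": -6.26},
    },
    "user": {
        "id": "exchange-user-4567",      # pseudonymous ID, linkable by brokers
    },
    "site": {
        "page": "https://example.com/depression-support",  # page being read
        "cat": ["IAB7"],                 # illustrative content-category code
    },
}

# The complainants' core point: this payload is broadcast to every company
# bidding in the auction, not only the eventual winner.
recipients = [f"bidder-{n}" for n in range(968)]  # 968 per the ICCL's figure
broadcasts = [(r, bid_request) for r in recipients]
print(len(broadcasts))  # every recipient receives the full request
```

The security complaint hinges on exactly this fan-out: once the request leaves the exchange, none of the hundreds of recipients is meaningfully constrained in what it retains or combines.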

Reached for a response to the report, Google sent us the following statement:

We enforce strict privacy protocols and standards to protect people’s personal information, including industry-leading safeguards on the use of data for real-time bidding. We do not allow advertisers to select ads based on sensitive personal data and we do not share people’s sensitive personal data, browsing histories or profiles with advertisers. We perform audits of ad buyers on Google’s ad exchange and if we find breaches of our policies we take action.

We also reached out to the IAB Europe for comment on the report. A spokeswoman told us it would issue a response tomorrow.

Responding to the ICCL submission, the DPC’s deputy commissioner Graham Doyle sent this statement: “Extensive recent updates and correspondence on this matter, including a meeting, have been provided by the DPC. The investigation has progressed and a full update on the next steps provided to the concerned party.”

However, in a follow-up to Doyle’s remarks, Ryan told TechCrunch he has “no idea” what the DPC is referring to when it mentions a “full update”. On “next steps” he said the regulator informed him it will produce a document setting out what it believes the issues are — within four weeks of its letter, dated September 15.

Ryan expressed particular concern that the DPC’s enquiry does not appear to cover security — which is the crux of the RTB complaints, since GDPR’s security principle puts an obligation on processors to ensure data is handled securely and protected against unauthorized processing or loss. (Whereas RTB broadcasts personal data across the Internet, leaking highly sensitive information in the process, per earlier evidence gathered by the complainants.)

He told TechCrunch the regulator finally sent him a letter, in May 2020, in response to his request to know what the scope of the inquiry is — saying then that it is examining the following issues:

  • Whether Google has a lawful basis for processing of personal data, including special category data, for the purposes of targeted advertising via the Authorised Buyers mechanism and, specifically, for the sourcing, sharing and combining of the personal data collected by Google with other companies / partners;
  • How Google complies with its transparency obligations, particularly with regard to Art. 5(1), 12, 13 and 14 of the GDPR;
  • The legal basis / bases for Google’s retention of personal data processed in the context of the Authorised Buyers mechanism and how it complies with Article 5(1)(c) in respect of its retention of personal data processed through the Authorised Buyers mechanism;

We’ve asked the DPC to confirm whether its investigation of Google’s adtech is also examining compliance with GDPR Article 5(1)(f) and will update this report with any response.

The DPC did not respond to our question about the timing for any draft decision on Ryan’s two-year-old complaint. But Doyle also pointed us to work this year around cookies and other tracking technologies — including guidance on compliant usage — adding that it has set out its intention to begin related enforcement from next month, when a six-month grace period for industry to comply with the rules on tracking elapses.

The regulator also pointed to another related open enquiry — into adtech veteran Quantcast — which also opened in May 2019. (That enquiry followed a submission by privacy rights advocacy group Privacy International.)

The DPC has said the Quantcast enquiry is examining the lawful basis claimed for processing Internet users’ data for ad targeting purposes, as well as considering whether transparency and data retention obligations are being fulfilled. It’s not clear whether the regulator is looking at the security of the data in that case, either. A summary of the scope of the Quantcast enquiry in the DPC’s annual report states:

In particular, the DPC is examining whether Quantcast has discharged its obligations in connection with the processing and aggregating of personal data which it conducts for the purposes of profiling and utilising the profiles generated for targeted advertising. The inquiry is examining how, and to what extent, Quantcast fulfils its obligation to be transparent to individuals in relation to what it does with personal data (including sources of collection, combining and making the data available to its customers) as well as Quantcast’s personal data retention practices. The inquiry will also examine the lawful basis pursuant to which processing occurs.

While Ireland remains under huge pressure over the glacial pace of cross-border GDPR investigations, given it’s the lead regulator for many major tech platforms, it’s not the only EU regulator accused of sitting on its hands where enforcement is concerned.

The UK’s data watchdog has similarly faced anger for failing to act over RTB complaints — despite acknowledging systematic breaches. In its case, after months of regulatory inaction, the ICO announced earlier this year that it had ‘paused’ its investigation into the industry’s processing of Internet users’ personal data — owing to disruption to businesses as a result of the COVID-19 pandemic.

How the NSA is disrupting foreign hackers targeting COVID-19 vaccine research

The headlines aren’t always kind to the National Security Agency, a spy agency that operates almost entirely in the shadows. But a year ago, the NSA launched its new Cybersecurity Directorate, which in the past year has emerged as one of the more visible divisions of the spy agency.

At its core, the directorate focuses on defending and securing the critical national security systems the government uses for its sensitive and classified communications. But the directorate has become best known for publicizing some of the larger emerging cyber threats posed by foreign hackers. In the past year it has warned about attacks targeting the secure boot features in most modern computers, and doxxed a malware operation linked to Russian intelligence. By going public, the NSA aims to make it harder for foreign hackers to reuse their tools and techniques, while helping to defend critical systems at home.

But six months after the directorate started its work, COVID-19 was declared a pandemic and large swathes of the world — and the U.S. — went into lockdown, prompting hackers to shift gears and change tactics.

“The threat landscape has changed,” Anne Neuberger, NSA’s director of cybersecurity, told TechCrunch at Disrupt 2020. “We’ve moved to telework, we move to new infrastructure, and we’ve watched cyber adversaries move to take advantage of that as well,” she said.

Publicly, the NSA advised on which videoconferencing and collaboration software was secure, and warned about the risks associated with virtual private networks, of which usage boomed after lockdowns began.

But behind the scenes, the NSA is working with federal partners to help protect the effort to produce and distribute a COVID-19 vaccine, which the U.S. government has dubbed Operation Warp Speed. News of the NSA’s involvement in the operation was first reported by Cyberscoop. As the world races to develop a working COVID-19 vaccine, which experts say is the only long-term way to end the pandemic, the NSA and its U.K. and Canadian partners went public with another Russian intelligence operation aimed at COVID-19 research.

“We’re part of a partnership across the U.S. government, we each have different roles,” said Neuberger. “The role we play as part of ‘Team America for Cyber’ is working to understand foreign actors, who are they, who are seeking to steal COVID-19 vaccine information — or more importantly, disrupt vaccine information or shake confidence in a given vaccine.”

Neuberger said that protecting the pharma companies developing a vaccine is just one part of the massive supply chain operation that goes into getting a vaccine out to millions of Americans. Ensuring the cybersecurity of the government agencies tasked with approving a vaccine is also a top priority.

Here are more takeaways from the talk, and you can watch the interview in full below:

Why TikTok is a national security threat

TikTok is just days away from an app store ban, after the Trump administration earlier this year accused the Chinese-owned company of posing a threat to national security. But the government has been less than forthcoming about what specific risks the video sharing app poses, only alleging that the app could be compelled to spy for China. Beijing has long been accused of cyberattacks against the U.S., including the massive breach of classified government employee files from the Office of Personnel Management in 2014.

Neuberger said that the “scope and scale” of the TikTok app’s data collection makes it easier for Chinese spies to answer “all kinds of different intelligence questions” on U.S. nationals. Neuberger conceded that U.S. tech companies like Facebook and Google also collect large amounts of user data, but said there are “greater concerns on how [China] in particular could use all that information collected against populations other than its own.”

NSA is privately disclosing security bugs to companies

The NSA is trying to be more open about the vulnerabilities it finds and discloses, Neuberger said. She told TechCrunch that the agency has shared a “number” of vulnerabilities with private companies this year, but “those companies did not want to give attribution.”

One exception was earlier this year, when Microsoft confirmed the NSA had found and privately reported a major cryptographic flaw in Windows 10 which could have allowed hackers to run malware masquerading as a legitimate file. The bug was considered so dangerous that the NSA opted to disclose it rather than exploit it, and Microsoft patched the flaw.

Only two years earlier, the spy agency was criticized for finding and using a Windows vulnerability to conduct surveillance instead of alerting Microsoft to the flaw. The exploit was later leaked and was used to infect thousands of computers with the WannaCry ransomware, causing millions of dollars’ worth of damage.

As a spy agency, the NSA exploits flaws and vulnerabilities in software to gather intelligence on adversaries. Each flaw it finds has to run through the government’s Vulnerabilities Equities Process, which weighs disclosing a bug to the vendor against retaining it for use in spying.

Instagram CEO, ACLU slam TikTok and WeChat app bans for putting US freedoms in the balance

As people begin to process the announcement from the U.S. Department of Commerce detailing how it plans, on grounds of national security, to shut down TikTok and WeChat (starting with app downloads and updates for both, plus all of WeChat’s services, on September 20, with TikTok to follow with a shutdown of servers and services on November 12), the CEO of Instagram and the ACLU are among those speaking out against the move.

The CEO of Instagram, Adam Mosseri, wasted little time in taking to Twitter to criticize the announcement. His particular beef is the implication the move will have for U.S. companies — like his — that also have built their businesses around operating across national boundaries.

In essence, if the U.S. starts to ban international companies from operating in the U.S., then it opens the door for other countries to take the same approach with U.S. companies.

Meanwhile, the ACLU has been outspoken in criticizing the announcement on the grounds of free speech.

“This order violates the First Amendment rights of people in the United States by restricting their ability to communicate and conduct important transactions on the two social media platforms,” said Hina Shamsi, director of the American Civil Liberties Union’s National Security Project, in a statement today.

Shamsi added that ironically, while the U.S. government might be crying foul over national security, blocking app updates poses a security threat in itself.

“The order also harms the privacy and security of millions of existing TikTok and WeChat users in the United States by blocking software updates, which can fix vulnerabilities and make the apps more secure. In implementing President Trump’s abuse of emergency powers, Secretary Ross is undermining our rights and our security. To truly address privacy concerns raised by social media platforms, Congress should enact comprehensive surveillance reform and strong consumer data privacy legislation.”

Vanessa Pappas, TikTok’s acting CEO, also stepped in to endorse Mosseri’s words, publicly asking Facebook to join TikTok’s litigation against the U.S. over its moves.

“We agree that this type of ban would be bad for the industry. We invite Facebook and Instagram to publicly join our challenge and support our litigation,” she said in her own tweet responding to Mosseri, while also retweeting the ACLU. (Interesting how Twitter becomes Switzerland in these stories, huh?) “This is a moment to put aside our competition and focus on core principles like freedom of expression and due process of law.”

The move to shutter these apps has been wrapped in an increasingly complex set of issues, and these two dissenting voices highlight not just some of the conflict between those issues, but the potential consequences and detriment of acting based on one issue over another.

The Trump administration has stated that the main reason it has pinpointed the apps has been to “safeguard the national security of the United States” in the face of nefarious activity out of China, where the owners of WeChat and TikTok, respectively Tencent and ByteDance, are based:

“The Chinese Communist Party (CCP) has demonstrated the means and motives to use these apps to threaten the national security, foreign policy, and the economy of the U.S.,” today’s statement from the U.S. Department of Commerce noted. “Today’s announced prohibitions, when combined, protect users in the U.S. by eliminating access to these applications and significantly reducing their functionality.”

In reality, it’s hard to know where the truth actually lies.

In the case of the ACLU and Mosseri’s comments, they are highlighting issues of principles but not necessarily precedent.

It’s not as if the U.S. would be the first country to take a nationalist approach to how it permits the operation of apps. Facebook and its stable of apps, as of right now, are unable to operate in China without a VPN (and even with a VPN, things can get tricky). And free speech is regularly ignored in a range of countries today.

But the U.S. has always positioned itself as a standard-bearer in both of these areas, so apart from the self-interest Instagram might have in advocating for free-market policies, its criticism points to a wider market and business position that’s being eroded.

The issue, of course, is a little like an onion (a stinking onion, I’d say), with well more than just a couple of layers around it, and with the ramifications bigger than TikTok (with 100 million users in the U.S. and huge in pop culture beyond even that) or WeChat (much smaller in the U.S. but huge elsewhere and valued by those who do use it).

The Trump administration has been carefully selecting issues to tackle to give voters reassurance of Trump’s commitment to “Make America Great Again,” building examples of how it’s helping to promote U.S. interests and demote those that stand in its way. China has been a huge part of that image building, positioned as an adversary in industrial, defense and other arenas. Pinpointing specific apps and how they might pose a security threat by sucking up our data fits neatly into that strategy.

But are they really security threats, or are they just doing the same kind of nefarious data ingesting that every social app does in order to work? Will the U.S. banning them really mean that other countries, up to now more in favor of a free market, will fall in line and take a similar approach? Will people really stop being able to express themselves?

Those are the questions that Trump has forced into the balance with his actions, and even if they were not issues before, they have very much become so now.

A bug in Joe Biden’s campaign app gave anyone access to millions of voter files

SCRANTON, PENNSYLVANIA – SEPTEMBER 11: A political poster favoring U.S. presidential candidate former Vice President Joe Biden and Senator Kamala Harris is placed on a front lawn September 11, 2020 in Scranton, Pennsylvania. (Photo by Robert Nickelsberg/Getty Images)

A privacy bug in Democratic presidential candidate Joe Biden’s official campaign app allowed anyone to look up sensitive voter information on millions of Americans, a security researcher has found.

The campaign app, Vote Joe, allows Biden supporters to encourage friends and family members to vote in the upcoming U.S. presidential election by uploading their phone’s contact lists to see if their friends and family members are registered to vote. The app uploads and matches the user’s contacts with voter data supplied from TargetSmart, a political marketing firm that claims to have files on more than 191 million Americans.

When a match is found, the app displays the voter’s name, age and birthday, and which recent election they voted in. This, the app says, helps users “find people you know and encourage them to get involved.”

While much of this data can already be public, the bug made it easy for anyone to access any voter’s information by using the app.

The App Analyst, a mobile expert who detailed his findings on his eponymous blog, found that he could trick the app into pulling in anyone’s information by creating a contact on his phone with the voter’s name.

Worse, he told TechCrunch, the app pulls in a lot more data than it actually displays. By intercepting the data that flows in and out of the device, he saw far more detailed and private information, including the voter’s home address, date of birth, gender, ethnicity and political party affiliation, such as Republican or Democrat.
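
The over-fetching pattern the researcher describes can be sketched as follows. This is a hypothetical reconstruction, not the Vote Joe app’s actual code: a lookup endpoint keyed only on a client-supplied name returns the full voter record, and only the client decides which fields to display, so anyone inspecting the network traffic sees everything.

```python
# Hypothetical sketch of the over-fetching flaw described above.
# Not the app's actual code; all names, fields and values are invented.

VOTER_DB = {  # stand-in for the vendor's voter file
    "jane doe": {
        "name": "Jane Doe",
        "age": 54,
        "birthday": "1966-03-01",
        "voted_in": ["2016", "2018"],
        # Fields the UI never shows, but the server still sends:
        "home_address": "12 Example St",
        "gender": "F",
        "party": "Democrat",
    },
}

def match_contact(contact_name):
    """Server-side lookup keyed only on a name the client supplies."""
    return VOTER_DB.get(contact_name.lower())

def display_fields(record):
    """The subset of the record the app actually renders on screen."""
    return {k: record[k] for k in ("name", "age", "birthday", "voted_in")}

# An attacker simply creates a phone contact with the target's name...
full_record = match_contact("Jane Doe")
# ...and an intercepting proxy sees the whole record, not just the
# fields the UI renders:
hidden = sorted(set(full_record) - set(display_fields(full_record)))
print(hidden)  # → ['gender', 'home_address', 'party']
```

The usual fix for this class of bug is to filter on the server, returning only the fields the client is entitled to display, rather than trusting the client to hide the rest.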

The Biden campaign fixed the bug and pushed out an app update on Friday.


A screenshot of Joe Biden’s official campaign app, which uploads and matches a user’s contacts with their existing voter file. But a bug allowed anyone to pull in any voter’s information. (Image: TechCrunch)

“We were made aware about how our third-party app developer was providing additional fields of information from commercially available data that was not needed,” Matt Hill, a spokesperson for the Biden campaign, told TechCrunch. “We worked with our vendor quickly to fix the issue and remove the information. We are committed to protecting the privacy of our staff, volunteers and supporters and will always work with our vendors to do so.”

After publication, Hill disputed the researcher’s findings, denying that the app returned genders, ethnicities or home addresses.

A spokesperson for TargetSmart said a “limited amount of publicly or commercially available data” was accessible to other users.

It’s not uncommon for political campaigns to trade and share large amounts of voter information, known as voter files, which include basic information like a voter’s name, often their home address and contact information, and which political parties they are registered with. Voter files can differ wildly from state to state.

Though a lot of this data can be publicly available, political firms also try to enrich their databases with additional data from other sources to help political campaigns identify and target key swing voters.

But several security lapses involving these vast banks of data have called into question whether political firms can keep this data safe.

It’s not the first time TargetSmart has been embroiled in a data leak. In 2017, a voter file compiled by TargetSmart on close to 600,000 voters in Alaska was left on an exposed server without a password. And in 2018, TechCrunch reported that close to 15 million records on Texas voters were found on an exposed and unsecured server, just months ahead of the U.S. midterm elections.

Last week Microsoft warned that hackers backed by Russia, China and Iran are targeting not only the 2020 presidential campaigns but also their political advisers. Reuters reported that one of those advisory firms, Washington, DC-based SKDKnickerbocker, a political consultant to the Biden campaign, was targeted by Russian intelligence, but that there was “no breach.”

Updated with Hill’s remarks.

YouTube hit with UK class action style suit seeking $3BN+ for ‘unlawful’ use of kids’ data

Another class action style lawsuit has been lodged against a tech giant in the UK alleging violations of privacy and seeking major damages. The latest representative action, filed against Google-owned YouTube, accuses the platform of routinely breaking UK and European data protection laws by unlawfully targeting up to five million children under 13 with addictive programming and harvesting their data for advertisers.

UK and EU law contain specific protections for children’s data, limiting the age at which minors can legally consent to their data being processed — in the case of the UK’s Data Protection Act, to age 13.

The suit is being brought by international law firm Hausfeld and Foxglove, a tech justice non-profit, which say they’re seeking damages from YouTube of more than £2.5BN (~$3.2BN).

Per the firms, it’s the first such representative litigation brought against a tech giant on behalf of children and among the largest such cases to date. (Last month a similar class style action was filed against Oracle in the UK alleging breaches of Europe’s General Data Protection Regulation (GDPR) related to cookie tracking.)

If the case succeeds, they say millions of British households whose kids watch YouTube may be owed “hundreds of pounds” in damages.

Duncan McCann, a researcher on the digital economy and the father of three children under 13 who watch YouTube (and so have their data collected and ads targeted at them by Google), is serving as the representative claimant in the case.

Commenting in a statement, McCann said: “My kids love YouTube, and I want them to be able to use it. But it isn’t ‘free’ — we’re paying for it with our private lives and our kids’ mental health. I try to be relatively conscious of what’s happening with my kids’ data online but even so it’s just impossible to combat Google’s lure and influence, which comes from its surveillance power. There’s a massive power imbalance between us and them, and it needs to be fixed.”

“The [YouTube] website has no practical user age requirements and makes no adequate attempt to limit usage by youngsters,” notes Hausfeld in a press release about the lawsuit.

A Foxglove release about the suit points to YouTube pitch materials intended for toy makers Mattel and Hasbro (made public via an earlier FTC suit against Google), in which it says the platform described itself as “the new Saturday morning cartoons”, “the number one website visited regularly by kids”, “today’s leader in reaching children age 6-11 against top TV channels”, and “unanimously voted as the favorite website of kids 2-12”.

Reached for comment, a YouTube spokesperson sent us this statement: “We don’t comment on pending litigation. YouTube is not for children under the age of 13. We launched the YouTube Kids app as a dedicated destination for kids and are always working to better protect kids and families on YouTube.”

The tech giant maintains that YouTube is not for under 13s — pointing to the existence of YouTube Kids, a dedicated kids’ app it launched in 2015 to offer what it called a “safer and easier” space for children to discover “family-focused content”, to back up the claim.

The company has never claimed, though, that no children under 13 use YouTube. And last year the FTC agreed to a $170M settlement with Google to end an investigation by the regulator and the New York Attorney General into alleged collection of children’s personal information by YouTube without the consent of their parents.

The rise in class action style lawsuits being filed in the UK seeking damages for breaches of data protection law follows a notable appeals court decision, just under a year ago, also against Google.

In that case the appeals court unblocked a class-action style lawsuit against the tech giant related to bypassing iOS privacy settings to track iPhone users.

In the US, Google paid $22.5M to the FTC back in 2012 to settle the same charge, and later paid a smaller sum to settle a number of US class action lawsuits. The UK case, meanwhile, continues.

While Europe has historically strong data protection laws, there has been — and still is — a lack of robust regulatory enforcement, leaving a gap that litigation funders are increasingly willing to plug.

In the UK the challenge for those seeking damages for large scale violations is there’s no direct equivalent to a US class action. But last year’s appeals court ruling in the Safari bypass case has opened the door to representative actions.

The court also said damages could be sought for a breach of the law without needing to prove pecuniary loss or distress, establishing a route to redress for consumers that’s now being tested by several cases.

Facebook seeks fresh legal delay to block order to suspend its transatlantic data transfers

Facebook is firing up its lawyers to try to block EU regulators from forcing it to suspend transatlantic data transfers in the wake of a landmark ruling by Europe’s top court this summer.

The tech giant has applied to judges in Ireland to seek a judicial review of a preliminary suspension order, it has emerged.

Earlier this week Facebook confirmed it had received a preliminary order from its lead EU data regulator — Ireland’s Data Protection Commission (DPC) — ordering it to suspend transfers.

That’s the logical conclusion after the so-called Schrems II ruling which struck down a flagship EU-US data transfer arrangement on the grounds of US surveillance overreach — simultaneously casting doubt on the legality of alternative mechanisms for EU to US data transfers in cases where the data controller is subject to FISA 702 (as Facebook is).

Today The Currency reported that Dublin commercial law firm, Mason Hayes + Curran, filed papers with the Irish High Court yesterday, naming Ireland’s data protection commissioners as defendant in the judicial review action.

Facebook confirmed the application — sending us this statement: “A lack of safe, secure and legal international data transfers would have damaging consequences for the European economy. We urge regulators to adopt a pragmatic and proportionate approach until a sustainable long-term solution can be reached.”

In further remarks that the company did not want directly quoted, it told us it believes the preliminary order is premature, saying it expects further regulatory guidance in the wake of the Schrems II ruling.

It’s not clear what further guidance Facebook is hankering for, nor what grounds it is claiming for seeking a judicial review of the DPC’s process. We asked it about this but it declined to offer any details. However, the tech giant’s intent to (further) delay regulatory action which threatens its business interests is crystal clear.

The original complaint against Facebook’s transatlantic data transfers dates all the way back to 2013.


Ireland’s legal system allows for ex parte applications for judicial review. So all Facebook needed to file its application to the High Court challenging the DPC’s preliminary order was a statement of grounds, a verifying affidavit and an ex parte docket (plus any relevant court fee). Oh, and it had to be sure the paperwork was submitted on A4.

The DPC’s deputy commissioner, Graham Doyle, declined to comment on the latest twist in the neverending saga.

England’s long delayed COVID-19 contacts tracing app to launch on September 24

The UK’s long delayed coronavirus contact tracing app finally has a release date: The Department of Health and Social Care (DHSC) announced today that the app will launch in England and Wales on September 24.

The UK’s other regions, Scotland and Northern Ireland, already have their own COVID-19 contacts tracing apps: Northern Ireland launched its app this summer, while the Protect Scotland app was released yesterday and went on to clock up more than 600,000 downloads in a matter of hours.

England and Wales have had a far lengthier-than-expected wait for an app after a false start back in May, when government ministers had suggested in daily coronavirus briefings that an app would be landing shortly.

Instead the launch was delayed, and DHSC took over development of the NHS COVID-19 app from the National Health Service’s digital division, NHSX, after it ran into problems related to the choice of a centralized app architecture — which triggered privacy concerns and saw the test app plagued by technical issues around iPhone device detection.

The government pivoted the app to a decentralized architecture which means it’s able to make use of exposure notification APIs offered by Apple and Google for official COVID-19 contacts tracing apps, avoiding the technical issues associated with iOS background Bluetooth detection.
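The decentralized approach described above can be sketched roughly as follows — devices broadcast short-lived random identifiers, remember the identifiers they hear nearby, and later compare them locally against identifiers published for users who tested positive. This is a hedged, illustrative sketch of the general idea only; the class and method names are invented here and are not the real Apple/Google exposure notification API.

```python
import secrets


class Device:
    """Illustrative model of a phone in a decentralized contact-tracing scheme.

    No contact graph is ever uploaded to a server: each device keeps the IDs
    it has broadcast and the IDs it has observed, and matching happens locally.
    """

    def __init__(self):
        self.own_ids = []          # ephemeral IDs this device has broadcast
        self.observed_ids = set()  # IDs heard from nearby devices over Bluetooth

    def new_ephemeral_id(self) -> bytes:
        # A rotating random identifier, so devices can't be tracked long-term.
        eid = secrets.token_bytes(16)
        self.own_ids.append(eid)
        return eid

    def observe(self, eid: bytes) -> None:
        # Called when another device's broadcast is picked up in proximity.
        self.observed_ids.add(eid)

    def check_exposure(self, published_positive_ids) -> bool:
        # Compare the published IDs of confirmed-positive users against
        # locally observed IDs — entirely on-device.
        return any(eid in self.observed_ids for eid in published_positive_ids)
```

In this sketch, if Alice later tests positive her own broadcast IDs are published, and Bob’s device can detect the match without either party’s movements or contacts ever leaving their phones.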

Another element that’s been added to the NHS COVID-19 app is a check-in feature for venues via scannable QR codes. The government is encouraging businesses and locations where people may congregate, such as pubs, restaurants, hairdressers, libraries and so on, to print out and display a QR code that app users can scan to check into the venue.

This check-in data will be held locally on the device, taking the same privacy-preserving approach as for contacts data generated when devices come into proximity and swap ephemeral IDs.

Venue check-in data will be retained on device for 21 days, per the DHSC. If an outbreak is identified at a location its venue ID will be broadcast to all devices running the app — and those that contain recent check-ins will generate an on-device match.

The DHSC says such a match may generate an alert and advice to the app user on what to do (e.g. whether to quarantine) — “based on the level of risk”.
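The venue check-in flow described above — local storage of scans, a 21-day retention window, and on-device matching against broadcast outbreak venue IDs — can be sketched like this. The class and method names are illustrative assumptions, not taken from the real NHS COVID-19 app:

```python
import time
from dataclasses import dataclass

RETENTION_SECONDS = 21 * 24 * 60 * 60  # DHSC's stated 21-day retention window


@dataclass
class CheckIn:
    venue_id: str    # hypothetical opaque ID encoded in a venue's QR poster
    timestamp: float  # when the user scanned the code


class LocalCheckInStore:
    """Illustrative on-device store: check-ins never leave the phone,
    and matching against broadcast outbreak venue IDs happens locally."""

    def __init__(self):
        self._checkins = []

    def record_scan(self, venue_id, now=None):
        # Record a QR scan locally; no data is sent to a server.
        self._checkins.append(CheckIn(venue_id, time.time() if now is None else now))

    def prune(self, now=None):
        # Drop check-ins older than the 21-day retention period.
        now = time.time() if now is None else now
        self._checkins = [c for c in self._checkins
                          if now - c.timestamp <= RETENTION_SECONDS]

    def match_outbreak_broadcast(self, broadcast_venue_ids, now=None) -> bool:
        # Venue IDs for identified outbreaks are broadcast to all devices;
        # each device checks its own recent check-ins for a match.
        self.prune(now)
        return any(c.venue_id in broadcast_venue_ids for c in self._checkins)
```

A match found this way would then drive the risk-based alert and advice the DHSC describes, without the user’s location history ever being uploaded.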

The government says trials of the reformulated app on the Isle of Wight and with NHS Volunteer Responders have shown it to be “highly effective” when used in conjunction with traditional contact tracing to identify contacts of those who have tested positive for the novel coronavirus.

It had previously suggested there were issues with limitations in Apple’s and Google’s APIs that made it difficult to effectively estimate the distance between devices — something it said was needed to generate exposure notifications.

Talking up the impending launch of the app, health and social care secretary Matt Hancock suggested that the scannable venue codes will provide “an easy and simple way to collect contact details to support the NHS Test and Trace system”. Although businesses will need a fall-back system to collect data from patrons who do not have the app.

“We need to use every tool at our disposal to control the spread of the virus including cutting-edge technology. The launch of the app later this month across England and Wales is a defining moment and will aid our ability to contain the virus at a critical time,” Hancock added.

UK businesses are being invited to download a QR code to display at their premises via gov.uk/create-coronavirus-qr-poster.

Reports last month in UK national press that suggested the app would abandon automatic contact tracing altogether appear to have been wide of the mark.

Facebook told it may have to suspend EU data transfers after Schrems II ruling

Ireland’s data protection watchdog, the DPC, has sent Facebook a preliminary order to suspend data transfers from the EU to the US, the Wall Street Journal reports, citing people familiar with the matter and including a confirmation from Facebook’s VP of global affairs, Nick Clegg.

The preliminary suspension order follows a landmark ruling by Europe’s top court this summer (aka Schrems II) which both struck down a flagship data transfer arrangement between the EU and the US and cast doubt on the legality of an alternative transfer mechanism — certainly in cases where data is flowing to a non-EU entity that falls under US surveillance law. 

Facebook’s use of these Standard Contractual Clauses (SCCs) to claim a legal basis for EU data transfers therefore looks to be fast running out of borrowed time.

European privacy campaigner Max Schrems, whose surname is attached to the CJEU ruling — and to an earlier ruling which invalidated the prior EU-US data transfer deal, Safe Harbor, on the same grounds of US surveillance overreach — filed his original complaint about Facebook’s use of SCCs all the way back in 2013. So the tech giant has had more than half a decade to get its European data ducks in order.

Reached for comment on the WSJ report, Facebook pointed us to a freshly published blog post, also penned by Clegg — who acknowledges “significant uncertainty” for businesses operating online services that rely on transatlantic data flows in the wake of the Schrems II ruling.

In the blog post the former deputy prime minister of the United Kingdom goes on to advocate for “global rules that can ensure consistent treatment of data around the world”.

“The Irish Data Protection Commission has commenced an inquiry into Facebook controlled EU-US data transfers, and has suggested that SCCs cannot in practice be used for EU-US data transfers,” Clegg writes. “While this approach is subject to further process, if followed, it could have a far reaching effect on businesses that rely on SCCs and on the online services many people and businesses rely on.”

Facebook’s blog post lobbying for global rules to ensure “stability” for cross-border data transfers paints a picture of how the Schrems II ruling might negatively affect European startups — claiming it could result in local businesses being unable to use US-based cloud providers or run operations across multiple time zones.

The blog post doesn’t have anything much to say on how Facebook itself having to stop using SCCs might affect Facebook’s own business — but we’ve discussed that before here. (The short version is Facebook may need to split its infrastructure in two, and offer a federated version of its service to EU users — which would clearly be expensive and time consuming for Facebook.)

“Businesses need clear, global rules, underpinned by the strong rule of law, to protect transatlantic data flows over the long term,” Clegg goes on, before lobbying for regulatory leniency in the meanwhile, as Facebook continues to transfer EU data to the US in what he claims is “good faith” — despite the acknowledged legal uncertainty and the complaint in question dating back well over half a decade at this point.

Here he is pleading for data transfer mercy on behalf of other businesses who are not involved in this specific complaint: “While policymakers are working towards a sustainable, long-term solution, we urge regulators to adopt a proportionate and pragmatic approach to minimise disruption to the many thousands of businesses who, like Facebook, have been relying on these mechanisms in good faith to transfer data in a safe and secure way.”

EU lawmakers warned recently that there would be no quick fix for US data transfers, despite some parallel Commission noises about working with the US on an enhanced replacement mechanism for the now defunct ‘Privacy Shield’. (Although for businesses that aren’t, as Facebook is, subject to FISA 702 there may be ways to use SCCs for US transfers that are legal, or at least law firms willing to suggest measures you could take… )

Speaking to the EU Parliament last week, justice commissioner Didier Reynders suggested changes to US surveillance law will be needed to bridge the legal schism between US surveillance law and EU privacy rights.

And of course legislative changes require both time and political will. Although it’s interesting to see Facebook’s global VP feeling moved to wade in and call for global solutions for cross-border data transfers. Perhaps the tech giant will funnel some of its multi-million dollar domestic lobbying budget into making the case for reforming US surveillance law in future.

Ireland’s data protection regulator declined to comment on the WSJ report when we got in touch.

Schrems, meanwhile, is not sitting on his hands. In a statement following the newspaper’s report he said his digital rights not-for-profit, noyb, was not informed about the preliminary order by the DPC — speculating the information was leaked to the newspaper by Facebook to draw political attention to its cause.

He also revealed noyb’s intent to start a legal procedure against the DPC, saying it informed Ireland’s regulator this week that it plans to file an interlocutory injunction over the opening of a ‘second’ procedure into the matter — arguing the move breaches a 2015 court order and essentially lets Facebook carry on a multi-year game of legal whack-a-mole in which it never actually faces enforcement for breaking each specific law.

“Facebook is knowingly in violation of the law since 2013. So far the DPC has covered them and for seven years refused to enforce the law. It seems after the second judgement by the Court of Justice not even the DPC can deny that Facebook’s international data transfers are built on sand,” Schrems told TechCrunch.

“At the same time, Facebook has in internal communication indicated that it has again shifted its legal basis from the SCCs to [the GDPR] Article 49 and the contract they allegedly sign with users. We are therefore very concerned that the DPC is again only investigating one of two legal basis that Facebook uses. This approach could lead to another frustrated case, like the ‘Safe Harbor’ case in 2015.”

What’s new since 2015 is Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018 and has led EU lawmakers to claim standard-setting geopolitical glory, as the issue of data privacy has risen up the agenda around the world, propelled by the deforming effects of platform power on societies and democracies.

However the two-year-old framework has so far failed to deliver anything much at all on major cross-border complaints which pertain to platform giants like Facebook (or indeed to the adtech industry). This summer a Commission review of the regulation highlighted what it described as a lack of uniformly vigorous enforcement.

Ireland’s DPC is fully in the spotlight on this front too, as the lead regulator for a large number of US tech firms.

It finally submitted the first draft decision on a cross-border complaint earlier this summer — but a final decision on that case (relating to a Twitter security breach) has been delayed, as the draft failed to gain the backing of all the region’s data supervisors, triggering further procedures related to joint working under the GDPR’s one-stop-shop mechanism.

Any order from the DPC to Facebook to suspend SCCs would similarly need to gain the backing of the bloc’s other regulators (or at least a majority of them). Per the WSJ’s report, Ireland’s regulator has given Facebook until mid-September to respond to the order — after which a new draft would be sent to the other supervisors for joint approval.

So there’s further delay built into the GDPR process before any final suspension order could be issued against Facebook in this seven-year-plus case. Move fast and break things this most certainly is not.

The WSJ also speculates that Facebook could try to challenge such an order in court. “Internally, Facebook considers the preliminary order and its future implications a big deal,” it adds, citing one of its unnamed sources.