Zoom misses its own deadline to publish its first transparency report

How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.

The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.

It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.

In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has “made significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records, or content.”

“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.

Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but are important to understand the scale and scope of government surveillance.

Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the Zoom accounts of two U.S.-based activists and one Hong Kong-based activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.

The company said at the time it “must comply with applicable laws in the jurisdictions where we operate,” but later said that it would change its policies to disallow requests from the Chinese government to impact users outside of mainland China.

A spokesperson for Zoom did not immediately comment.

Decrypted: Police leaks, iOS 14 kills ad-tracking, anti-encryption bill

What would the world look like if encryption were outlawed? If three Republican senators get their way, it might just happen.

Under the guise of national security, the Senate Judiciary Committee pushed through a draft bill that would end “warrant-proof” encryption — that is, strong, near-impossible-to-break encryption that lets only the device owner, and nobody else, unlock their data. Silicon Valley quickly embraced this approach, not least because it cuts even the tech giants out of the loop, so the feds can’t demand they hand over their users’ data.

Law enforcement never accepted being cut out of the loop. The FBI cried foul, as did the Justice Department, claiming strong encryption makes it harder to solve crimes — while conveniently neglecting to mention the government’s vast array of hacking tools that make it easier than ever to get the data prosecutors seek.

Now comes a legislative fix for the government’s near-nonexistent problem. The bill, if passed, would create a “backdoor mandate” forcing tech companies to build in “backdoors” that let police, with a warrant, access an encrypted device’s photos, messages, files and more. The same would apply to data “in motion” as it traverses the internet, undermining the security that keeps our emails safe and our online banking secure, and effectively banning end-to-end encrypted messaging apps like Signal, WhatsApp and Facebook Messenger.

Experts decried the bill, as expected, and as they have done with every other attempt to undermine the security of the internet. Their argument is simple, and mathematically irrefutable: If police can get a backdoor, so can hackers. There’s no secure way to give one access and not the other.
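
To make the experts’ point concrete, here is a minimal sketch in Python (using the pyca/cryptography library) of why a backdoor is mathematically indistinguishable from a compromise. It illustrates symmetric AES-256-GCM in general, not any particular app’s protocol: decryption depends only on possession of the key, so an escrowed “lawful access” key works just as well for a hacker who steals it.

```python
# Sketch: AES-256-GCM cannot tell a device owner, a police officer and a
# thief apart — whoever holds the key can decrypt. (pip install cryptography)
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held by the device owner
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"private message", None)

# A "backdoor" is, mathematically, just another copy of the key.
escrowed_key = key  # the copy a backdoor mandate would require
print(AESGCM(escrowed_key).decrypt(nonce, ciphertext, None))  # b'private message'
```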

Lawmakers seem set on changing the law of the land, but they can’t change the laws of mathematics.

More on that in this week’s Decrypted.


THE BIG PICTURE

‘BlueLeaks’ dumps data on decades of police files

Hacking collective Anonymous crashed onto the internet a decade ago by publishing reams of secret files and stolen data from governments and corporations. Last week the collective emerged after a long hiatus, returning with a massive trove of data obtained from hundreds of U.S. police departments in an operation dubbed BlueLeaks.

The data was published by Distributed Denial of Secrets, an alternative to WikiLeaks that’s dedicated to publishing files in the public interest. The data contains a decade’s worth of police training materials and other internal law enforcement data, like protest containment strategies, which have come under fire over the tactics used against protesters in the wake of George Floyd’s death.

Privacy not a blocker for “meaningful” research access to platform data, says report

European lawmakers are eyeing binding transparency requirements for Internet platforms in a Digital Services Act (DSA) due to be drafted by the end of the year. But the question of how to create governance structures that provide regulators and researchers with meaningful access to data so platforms can be held accountable for the content they’re amplifying is a complex one.

Platforms’ own efforts to open up their data troves to outside eyes have been chequered to say the least. Back in 2018, Facebook announced the Social Science One initiative, saying it would provide a select group of academics with access to about a petabyte’s worth of sharing data and metadata. But it took almost two years before researchers got access to any data.

“This was the most frustrating thing I’ve been involved in, in my life,” one of the involved researchers told Protocol earlier this year, after spending some 20 months negotiating with Facebook over exactly what it would release.

Facebook’s political Ad Archive API has similarly frustrated researchers. “Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing),” said Mozilla last year, accusing the tech giant of transparency-washing.

Facebook, meanwhile, points to European data protection regulations — and privacy requirements attached to its business following interventions by the U.S. FTC — to justify its painstakingly slow progress around data access. But critics argue this is just a cynical shield against transparency and accountability. Plus, of course, none of these regulations stopped Facebook grabbing people’s data in the first place.

In January, Europe’s lead data protection regulator penned a preliminary opinion on data protection and research which warned against such shielding.

“Data protection obligations should not be misappropriated as a means for powerful players to escape transparency and accountability,” wrote EDPS Wojciech Wiewiorówski. “Researchers operating within ethical governance frameworks should therefore be able to access necessary API and other data, with a valid legal basis and subject to the principle of proportionality and appropriate safeguards.”

Nor is Facebook the sole offender here, of course. Google brands itself a ‘privacy champion’ on account of how tight a grip it keeps on access to user data, heavily mediating the data it releases in areas where it claims ‘transparency’. Twitter, meanwhile, spent years disparaging third-party studies that sought to understand how content flows across its platform — saying its API didn’t provide full access to all platform data and metadata, so the research couldn’t show the full picture. Another convenient shield against accountability.

More recently the company has made some encouraging noises to researchers, updating its developer policy to clarify rules and offering up a COVID-related dataset — though the included tweets remain self-selected. So Twitter’s mediating hand remains on the research tiller.

A new report by AlgorithmWatch seeks to grapple with the knotty problem of platforms evading accountability by mediating data access — suggesting some concrete steps to deliver transparency and bolster research, including by taking inspiration from how access to medical data is mediated, among other discussed governance structures.

The goal: “Meaningful” research access to platform data. (Or, as the report title puts it: Operationalizing Research Access in Platform Governance: What to Learn from Other Industries?)

“We have strict transparency rules to enable accountability and the public good in so many other sectors (food, transportation, consumer goods, finance, etc). We definitely need it for online platforms — especially in COVID-19 times, where we’re even more dependent on them for work, education, social interaction, news and media consumption,” co-author Jef Ausloos tells TechCrunch.

The report, which the authors are aiming at European Commission lawmakers as they ponder how to shape an effective platform governance framework, proposes mandatory data sharing frameworks with an independent EU-institution acting as an intermediary between disclosing corporations and data recipients.

“Such an institution would maintain relevant access infrastructures including virtual secure operating environments, public databases, websites and forums. It would also play an important role in verifying and pre-processing corporate data in order to ensure it is suitable for disclosure,” they write in a report summary.

Discussing the approach further, Ausloos argues it’s important to move away from “binary thinking” to break the current ‘data access’ trust deadlock. “Rather than this binary thinking of disclosure vs opaqueness/obfuscation, we need a more nuanced and layered approach with varying degrees of data access/transparency,” he says. “Such a layered approach can hinge on types of actors requesting data, and their purposes.”

A market research purpose might only get access to very high level data, he suggests. Whereas medical research by academic institutions could be given more granular access — subject, of course, to strict requirements (such as a research plan, ethical board review approval and so on).

“An independent institution intermediating might be vital in order to facilitate this and generate the necessary trust. We think it is vital that that regulator’s mandate is detached from specific policy agendas,” says Ausloos. “It should be focused on being a transparency/disclosure facilitator — creating the necessary technical and legal environment for data exchange. This can then be used by media/competition/data protection/etc authorities for their potential enforcement actions.”

Ausloos says many discussions on setting up an independent regulator for online platforms have proposed too many mandates or competencies — making it impossible to achieve political consensus. A leaner entity with a narrow transparency/disclosure remit, the theory goes, should be able to cut through noisy objections.

The infamous example of Cambridge Analytica does certainly loom large over the ‘data for research’ space — aka, the disgraced data company which paid a Cambridge University academic to use an app to harvest and process Facebook user data for political ad targeting. And Facebook has thought nothing of turning this massive platform data misuse scandal into a stick to beat back regulatory proposals aiming to crack open its data troves.

But Cambridge Analytica was a direct consequence of a lack of transparency, accountability and platform oversight. It was also, of course, a massive ethical failure — given that consent for political targeting was not sought from people whose data was acquired. So it doesn’t seem a good argument against regulating access to platform data. On the contrary.

With such ‘blunt instrument’ tech talking points being lobbed into the governance debate by self-interested platform giants, the AlgorithmWatch report brings both welcome nuance and solid suggestions on how to create effective governance structures for modern data giants.

On the layered access point, the report suggests the most granular access to platform data would be the most highly controlled, along the lines of a medical data model. “Granular access can also only be enabled within a closed virtual environment, controlled by an independent body — as is currently done by Findata [Finland’s medical data institution],” notes Ausloos.
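
As a thought experiment, the tiered model Ausloos describes could be reduced to a small access-policy function. The sketch below is purely illustrative — the tier names, requester types and rules are hypothetical, not drawn from the report — but it shows the basic shape of layered access: who is asking, and for what purpose, determines how granular the disclosed data gets.

```python
# Hypothetical sketch of a layered data-access policy: requester type and
# safeguards determine the granularity of the data released.
ACCESS_TIERS = {
    "market_research": "aggregate",          # high-level statistics only
    "accredited_academic": "pseudonymized",  # granular, via a secure environment
}

def grant_access(requester_type: str, has_ethics_approval: bool) -> str:
    """Return the data tier a requester may receive, or deny outright."""
    tier = ACCESS_TIERS.get(requester_type)
    if tier is None:
        return "denied"
    if tier == "pseudonymized" and not has_ethics_approval:
        return "denied"  # granular access requires ethical board review
    return tier

print(grant_access("market_research", False))     # aggregate
print(grant_access("accredited_academic", True))  # pseudonymized
```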

Another governance structure discussed in the report — as a case study from which to draw learnings on how to incentivize transparency and thereby enable accountability — is the European Pollutant Release and Transfer Register (E-PRTR). This regulates pollutant emissions reporting across the EU, and results in emissions data being freely available to the public via a dedicated web-platform and as a standalone dataset.

“Credibility is achieved by assuring that the reported data is authentic, transparent and reliable and comparable, because of consistent reporting. Operators are advised to use the best available reporting techniques to achieve these standards of completeness, consistency and credibility,” the report says on the E-PRTR.

“Through this form of transparency, the E-PRTR aims to impose accountability on operators of industrial facilities in Europe towards the public, NGOs, scientists, politicians, governments and supervisory authorities.”

While EU lawmakers have signalled an intent to place legally binding transparency requirements on platforms — at least in some less contentious areas, such as illegal hate speech, as a means of obtaining accountability on some specific content problems — they have simultaneously set out a sweeping plan to fire up Europe’s digital economy by boosting the reuse of (non-personal) data.

Leveraging industrial data to support R&D and innovation is a key plank of the Commission’s tech-fuelled policy priorities for the next five+ years, as part of an ambitious digital transformation agenda.

This suggests that any regional move to open up platform data is likely to go beyond accountability — given EU lawmakers are pushing for the broader goal of creating a foundational digital support structure to enable research through data reuse. So if privacy-respecting data sharing frameworks can be baked in, a platform governance structure that’s designed to enable regulated data exchange almost by default starts to look very possible within the European context.

“Enabling accountability is important, which we tackle in the pollution case study; but enabling research is at least as important,” argues Ausloos, who does postdoc research at the University of Amsterdam’s Institute for Information Law. “Especially considering these platforms constitute the infrastructure of modern society, we need data disclosure to understand society.”

“When we think about what transparency measures should look like for the DSA we don’t need to reinvent the wheel,” adds Mackenzie Nelson, project lead for AlgorithmWatch’s Governing Platforms Project, in a statement. “The report provides concrete recommendations for how the Commission can design frameworks that safeguard user privacy while still enabling critical research access to dominant platforms’ data.”

GDPR’s two-year review flags lack of “vigorous” enforcement

It’s more than two years since a flagship update to the European Union’s data protection regime moved into the application phase. Yet the General Data Protection Regulation (GDPR) has been dogged by criticism of a failure of enforcement related to major cross-border complaints — lending weight to critics who claim the legislation has created a moat for dominant multinationals, at the expense of smaller entities.

Today the European Commission responded to that criticism as it gave a long-scheduled assessment of how the regulation is functioning, in its first review two years in.

While EU lawmakers’ top-line message is the clear claim: ‘GDPR is working’ — with commissioners lauding what they couched as the many positives of this “modern and horizontal piece of legislation”; which they also said has become a “global reference point” — they conceded there is a “very serious to-do list”, calling for uniformly “vigorous” enforcement of the regulation across the bloc.

So, in other words, GDPR decisions need to flow more smoothly than they have so far.

Speaking at a Commission briefing today, Věra Jourová, Commission VP for values and transparency, said: “The European Data Protection Board and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement.

“We have to work together, as the Board and the Member States, to address concerns — in particular those of the small and medium enterprises.”

Justice commissioner, Didier Reynders, also speaking at the briefing, added: “We have to ensure that [GDPR] is applied harmoniously — or at least with the same vigour across the European territory. There may be some nuanced differences but it has to be applied with the same vigour.

“In order for that to happen data protection authorities have to be sufficiently equipped — they have to have the relevant number of staff, the relevant budgets, and there is a clear will to move in that direction.”

Front and center for GDPR enforcement is the issue of resourcing for national data protection authorities (DPAs), who are tasked with providing oversight and issuing enforcement decisions.

Jourová noted today that EU DPAs — taken as a whole — have increased headcount by 42% and budget by 49% between 2016 and 2019.

However that’s an aggregate which conceals major differences in resourcing. A recent report by pro-privacy browser Brave found that half of all national DPAs receive just €5M or less in annual budget from their governments, for example. Brave also found that budget increases peaked around the GDPR’s application date — two years in, it said, governments are now slowing the rate of increase.

It’s also true that DPA case load isn’t uniform across the bloc, with certain Member States (notably Ireland and Luxembourg) handling many more and/or more complex complaints than others as a result of how many multinationals locate their regional HQs there.

One key issue for GDPR thus relates to how the regulation handles cross border cases.

A one-stop-shop mechanism was supposed to simplify this process — by having a single regulator (typically in the country where the business has its main establishment) take the lead on complaints that affect users in multiple Member States, while other interested DPAs step back from dealing directly with the data processor. But those DPAs do remain involved — and, once there’s a draft decision, play an important role, as they can raise objections to whatever the lead regulator has decided.

However a lot of friction seems to be creeping into current processes, via technical issues related to sharing data between DPAs — and via the opportunity for additional legal delays.

In the case of big tech, GDPR’s one-stop-shop has resulted in a major backlog around enforcement, with multiple complaints being re-routed via Ireland’s Data Protection Commission (DPC) — which has yet to issue a single decision on a cross-border case, and has more than 20 such investigations ongoing.

Last month Ireland’s DPC trailed looming decisions on Twitter and Facebook — saying it had submitted a draft decision on the Twitter case to fellow DPAs and expressed hope that case could be finalized in July.

Its data protection commissioner, Helen Dixon, had previously suggested the first cross-border decisions would be coming in “early” 2020. In the event, we’re past the halfway mark of the year with still no enforcement on show.

This looks especially problematic as there is a counter example elsewhere in the EU: France’s CNIL managed to issue a decision in a major GDPR case against Google all the way back in January 2019. Last week the country’s top court for administrative law cemented the regulator’s findings — dismissing Google’s appeal. Its $57M fine against Google remains the largest yet levied against big tech under GDPR.

Asked directly whether the Commission believes Ireland’s DPC is sufficiently resourced — with the questioner noting it has multiple ongoing investigations into Facebook, in particular, with still no decisions taken on the company — Jourová emphasized DPAs are “fully independent”, before adding: “The Commission has no tools to push them to speed up but the cases you mention, especially the cases that relate to big tech, are always complex and they require thorough investigation — and it simply requires more time.”

However CNIL’s example shows effective enforcement against major tech platforms is possible — at least, where there’s a will to take on corporate power. Though France’s relative agility may also have something to do with not having to deal simultaneously with such a massive load of complex cross-border cases.

At the same time, critics point to Ireland’s cosy political relationship with the corporate giants it attracts via low tax rates — which in turn raises plenty of questions when set against the oversized role its DPA has in overseeing most of big tech. The stench of forum shopping is unmistakable.

Criticism of national regulators extends beyond Ireland, too. In the UK, privacy experts have slammed the ICO’s repeated failure to enforce the law against the adtech industry — despite its own assessments finding systemic flouting of the law. The country remains an EU Member State until the end of the year — and the ICO is the best-resourced DPA in the bloc in terms of budget and headcount (and likely tech expertise too). Which hardly reflects well on the functional state of the regulation.

Despite all this, the Commission continues to present GDPR as a major geopolitical success, claiming — as it did again today — that it’s ahead of the digital regulatory curve globally at a time when lawmakers almost everywhere are considering putting harder limits on Internet players.

But there’s only so long it can sell a success on paper. Without consistently “vigorous” enforcement, the whole framework crumbles — so the EU’s executive has serious skin in the game when it comes to GDPR actually doing what it says on the tin.

Pressure is coming from commercial quarters too — not only privacy and consumer rights groups.

Earlier this year, Brave lodged a complaint with the Commission against 27 EU Member States — accusing them of under resourcing their national data protection watchdogs. It called on the EU executive to launch an infringement procedure against national governments, and refer them to the bloc’s top court if necessary. So startups are banging the drum for enforcement too.

If decision wheels don’t turn on their own, courts may eventually be needed to force Europe’s DPAs to get a move on — albeit, the Commission is still hoping it won’t have to come to that.

“We saw a considerable increase of capacities both in Ireland and Luxembourg,” said Jourová, discussing the DPA resourcing issue. “We saw a sufficient increase in at least half of other Member States DPAs so we have to let them do very responsible and good work — and of course wait for the results.”

Reynders suggested that while there has been an increase in resource for DPAs the Commission may need to conduct a “deeper” analysis — to see if more resource is needed in some Member States, “due to the size of the companies at work in the jurisdiction of such a national authority”.

“We have huge differences between the Member States about the need to react to the requests from the companies. And of course we need to reinforce the cooperation and the co-ordination on cross border issues. We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” he said.

“So it’s not only a question to have the same kind of approach in all the Member States. It’s to be fit to all the demands coming in your jurisdiction and it’s true that in some jurisdictions we have more multinationals and more members of high tech companies than in others.”

“The best answer will be a decision from the Irish data protection authority about important cases,” he added.

We’ve reached out to the Irish DPC and the EDPB for comment on the Commission’s GDPR assessment.

Asked whether the Commission has a list of Member States that it might instigate infringement proceedings against related to the terms of GDPR — which, for example, require governments to provide adequate resourcing to their national DPA in order that they can properly oversee the regulation — Reynders said it doesn’t currently have such a list.

“We have a list of countries where we try to see if it’s possible to reinforce the possibilities for the national authorities to have enough resources — human resources, financial resources, to organize better cross border activities — if at the end we see there’s a real problem about the enforcement of the GDPR in one Member State we will propose to go maybe to the court with an infringement proceeding — but we don’t have, for the moment, a list of countries to organize such a kind of process,” he said.

The commissioners were a lot more comfortable talking up the positives of GDPR, with Jourová noting, with a sphinx-like smile, how three years ago there was “literal panic” and an army of lobbyists warning of a “doomsday” for business and innovation should the legislation pass. “I have good news today — no dooms day was here,” she said.

“Our approach to the GDPR was the right one,” she went on. “It created the more harmonized rules across the Single Market and more and more companies are using GDPR concepts, such as privacy by design and by default, as a competitive differentiation.

“I can say that the philosophy of one continent, one law is very advantageous for European small and medium enterprises who want to operate on the European Single Market.

“In general GDPR has become a truly European trade mark,” she added. “It puts people and their rights at the center. It does not leave everything to the market like in the US. And it does not see data as a means for state supervision, as in China. Our truly European approach to data is the first answer to difficult questions we face as a society.”

She also noted that the regulation served as inspiration for the current Commission’s tech-focused policy priorities — including a planned “human centric approach to AI“.

“It makes us pause before facial recognition technology, for instance, will be fully developed or implemented. And I dare to say that it makes Europe fit for the digital age. On the international side the GDPR has become a reference point — with a truly global convergence movement. In this context we are happy to support trade and safe digital data flows and work against digital protectionism.”

Another success the commissioners credited to the GDPR framework is the region’s relatively swift digital response to the coronavirus — with the regulation helping DPAs to more quickly assess the privacy implications of COVID-19 contacts tracing apps and tools.

Reynders lauded “a certain degree of flexibility in the GDPR” which he said had been able to come into play during the crisis, feeding into discussions around tracing apps — on “how to ensure protection of personal data in the context of such tracing apps linked to public and individual health”.

Other items on the Commission’s to-do list include ensuring DPAs provide more such support around the application of the regulation, by coming out with guidelines on other new technologies. “In various new areas we will have to be able to provide guidance quickly, just as we did on the tracing apps recently,” noted Reynders.

Further increasing public awareness of GDPR and the rights it affords is another Commission focus — though it said more than two-thirds of EU citizens above the age of 16 have at least heard of the GDPR. But it wants citizens to be able to make what Reynders called “best use” of their rights, perhaps via new applications.

“So the GDPR provides support to innovation in this respect,” he said. “And there’s a lot of work that still needs to be done in order to strengthen innovation.”

“We also have to convince those who may still be reticent about the GDPR. Certain companies, for instance, who have complained about how difficult it is to implement it. I think we need to explain to them what the requirements of the GDPR [are] and how they can implement these,” he added.

Apple’s iOS 14 will give users option to decline ad tracking

A new version of iOS wouldn’t be the same without a bunch of security and privacy updates. Apple on Monday announced a ton of new features it’ll bake into iOS 14, expected out later this year with the release of new iPhones and iPads.

Apple said it will allow users to share their approximate location with apps, instead of their precise location — letting apps take a rough fix on a person’s whereabouts without identifying precisely where they are. It’s another option users have when they hand over their location. Last year, Apple let users share their location just once, so apps can’t track a person as they go about their day.

iPhones with iOS 14 will also get a camera recording indicator in the status bar. It’s a similar feature to the camera light that comes with Macs and MacBooks. The recording indicator will sit in the top bar of your iPhone’s display when your front or rear camera is in use.

But the biggest changes are for app developers themselves, Apple said. In iOS 14, users will be asked if they want to be tracked by the app. That’s a major change that will likely have a ripple effect: by allowing users to reject tracking, it’ll reduce the amount of data that’s collected, preserving user privacy.

Apple also said it will require app developers to self-report the kinds of permissions their apps ask for. This will improve transparency, allowing users to know what kind of data they may have to give over in order to use the app. Android users have been able to see app permissions on the Google Play store for years.

The move is Apple’s latest assault against the ad industry as part of the tech giant’s privacy-conscious mantra.

The ad industry has frequently been the target of Apple’s barbs, amid a string of controversies that have embroiled both advertisers and data-hungry tech giants, like Facebook and Google, which make the bulk of their profits from targeted advertising. As far back as 2015, Apple CEO Tim Cook said its Silicon Valley rivals are “gobbling up everything they can learn about you and trying to monetize it.” Apple, which makes its money selling hardware, “elected not to do that,” said Cook.

As targeted advertising became more invasive, Apple countered by baking new privacy features into its software, like its Intelligent Tracking Prevention technology, and by allowing Safari users to install content blockers that prevent ads and trackers from loading.

Just last year Apple told developers to stop using third-party trackers in apps for children or face rejection from the App Store.

French court slaps down Google’s appeal against $57M GDPR fine

France’s top court for administrative law has dismissed Google’s appeal against a $57M fine issued by the data watchdog last year for not making it clear enough to Android users how it processes their personal information.

The State Council issued the decision today, affirming the data watchdog CNIL’s earlier finding that Google did not provide “sufficiently clear” information to Android users — which in turn meant it had not legally obtained their consent to use their data for targeted ads.

“Google’s request has been rejected,” a spokesperson for the Conseil D’Etat confirmed to TechCrunch via email.

“The Council of State confirms the CNIL’s assessment that information relating to targeting advertising is not presented in a sufficiently clear and distinct manner for the consent of the user to be validly collected,” the court also writes in a press release [translated with Google Translate] on its website.

It found the size of the fine to be proportionate — given the severity and ongoing nature of the violations.

Importantly, the court also affirmed the jurisdiction of France’s national watchdog to regulate Google — at least on the date when this penalty was issued (January 2019).

The CNIL’s multimillion dollar fine against Google remains the largest to date against a tech giant under Europe’s flagship General Data Protection Regulation (GDPR) — lending the case a certain symbolic value, for those concerned about whether the regulation is functioning as intended vs platform power.

While the size of the fine is still relative peanuts vs Google’s parent entity Alphabet’s global revenue, changes the tech giant may have to make to how it harvests user data could be far more impactful to its ad-targeting bottom line. 
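
For a back-of-the-envelope sense of that gap: GDPR caps fines at 4% of global annual turnover, and Alphabet reported roughly $162 billion in revenue for 2019 — so the theoretical maximum penalty would be in the region of $6.5 billion, more than a hundred times the CNIL’s $57 million sanction.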

Under European law, for consent to be a valid legal basis for processing personal data it must be informed, specific and freely given. Or, to put it another way, consent cannot be strained.

In this case French judges concluded Google had not provided clear enough information for consent to be lawfully obtained — including objecting to a pre-ticked checkbox which the court affirmed does not meet the requirements of the GDPR.

So, tl;dr, the CNIL’s decision has been entirely vindicated.

Reached for comment on the court’s dismissal of its appeal, a Google spokeswoman sent us this statement:

People expect to understand and control how their data is used, and we’ve invested in industry-leading tools that help them do both. This case was not about whether consent is needed for personalised advertising, but about how exactly it should be obtained. In light of this decision, we will now review what changes we need to make.

GDPR came into force in 2018, updating long standing European data protection rules and opening up the possibility of supersized fines of up to 4% of global annual turnover.

However actions against big tech have largely stalled, with scores of complaints being funnelled through Ireland’s Data Protection Commission — on account of a one-stop-shop mechanism in the regulation — causing a major backlog of cases. The Irish DPC has yet to issue decisions on any cross border complaints, though it has said its first ones are imminent — on complaints involving Twitter and Facebook.

Ireland’s data watchdog is also continuing to investigate a number of complaints against Google, following a change Google announced to the legal jurisdiction of where it processes European users’ data — moving them to Google Ireland Limited, based in Dublin, which it said applied from January 22, 2019. Ongoing Irish DPC investigations include a long-running complaint related to how Google handles location data and another major probe of its adtech, to name two.

On the GDPR one-stop shop mechanism — and, indirectly, the wider problematic issue of ‘forum shopping’ and European data protection regulation — the French State Council writes: “Google believed that the Irish data protection authority was solely competent to control its activities in the European Union, the control of data processing being the responsibility of the authority of the country where the main establishment of the data controller is located, according to a ‘one-stop-shop’ principle instituted by the GDPR. The Council of State notes however that at the date of the sanction, the Irish subsidiary of Google had no power of control over the other European subsidiaries nor any decision-making power over the data processing, the company Google LLC located in the United States with this power alone.”

In its own statement responding to the court’s decision, the CNIL notes the court’s view that GDPR’s one-stop-shop mechanism was not applicable in this case — writing: “It did so by applying the new European framework as interpreted by all the European authorities in the guidelines of the European Data Protection Committee.”

Privacy NGO noyb — one of the privacy campaign groups which lodged the original ‘forced consent’ complaint against Google, all the way back in May 2018 — welcomed the court’s decision on all fronts, including the jurisdiction point.

Commenting in a statement, noyb’s honorary chairman, Max Schrems, said: “It is very important that companies like Google cannot simply declare themselves to be ‘Irish’ to escape the oversight by the privacy regulators.”

A key question is whether CNIL — or another (non-Irish) EU DPA — will be found to be competent to sanction Google in future, following its shift to naming its Google Ireland subsidiary as the regional data processor. (Other tech giants use the same or a similar playbook, seeking out the EU’s more ‘business-friendly’ regulators.)

On the wider ruling, Schrems also said: “This decision requires substantial improvements by Google. Their privacy policy now really needs to make it crystal clear what they do with users’ data. Users must also get an option to agree to only some parts of what Google does with their data and refuse other things.”

French digital rights group, La Quadrature du Net — which had filed a related complaint against Google, feeding the CNIL’s investigation — also declared victory today, noting it’s the first sanction in a number of GDPR complaints it has lodged against tech giants on behalf of 12,000 citizens.

“The rest of the complaints against Google, Facebook, Apple and Microsoft are still under investigation in Ireland. In any case, this is what this authority promises us,” it added in another tweet.

Oracle’s BlueKai tracks you across the web. That data spilled online

Have you ever wondered why online ads appear for things that you were just thinking about?

There’s no big conspiracy. Ad tech can be creepily accurate.

Tech giant Oracle is one of a few companies in Silicon Valley that has near-perfected the art of tracking people across the internet. The company has spent a decade and billions of dollars buying startups to build its very own panopticon of users’ web browsing data.

One of those startups, BlueKai, which Oracle bought for a little over $400 million in 2014, is barely known outside marketing circles, but it amassed one of the largest banks of web tracking data outside of the federal government.

BlueKai uses website cookies and other tracking tech to follow you around the web. By knowing which websites you visit and which emails you open, marketers can use this vast amount of tracking data to infer as much about you as possible — your income, education, political views, and interests to name a few — in order to target you with ads that should match your apparent tastes. If you click, the advertisers make money.

But for a time, that web tracking data was spilling out onto the open internet because a server was left unsecured and without a password, exposing billions of records for anyone to find.

Security researcher Anurag Sen found the database and reported his finding to Oracle through an intermediary — Roi Carthy, chief executive at cybersecurity firm Hudson Rock and former TechCrunch reporter.

TechCrunch reviewed the data shared by Sen and found names, home addresses, email addresses and other identifiable data in the database. The data also revealed users’ sensitive web browsing activity — from purchases to newsletter unsubscribes.

“There’s really no telling how revealing some of this data can be,” Bennett Cyphers, a staff technologist at the Electronic Frontier Foundation, told TechCrunch.

“Oracle is aware of the report made by Roi Carthy of Hudson Rock related to certain BlueKai records potentially exposed on the Internet,” said Oracle spokesperson Deborah Hellinger. “While the initial information provided by the researcher did not contain enough information to identify an affected system, Oracle’s investigation has subsequently determined that two companies did not properly configure their services. Oracle has taken additional measures to avoid a reoccurrence of this issue.”

Oracle did not name the companies or say what those additional measures were, and declined to answer our questions or comment further.

But the sheer size of the exposed database makes this one of the largest security lapses this year.

The more it knows

BlueKai relies on vacuuming up a never-ending supply of data from a variety of sources to understand trends and deliver ads precisely matched to a person’s interests.

Marketers can either tap into Oracle’s enormous bank of data, which it pulls in from credit agencies, analytics firms, and other sources of consumer data including billions of daily location data points, in order to target their ads. Or marketers can upload their own data obtained directly from consumers, such as the information you hand over when you register an account on a website or when you sign up for a company’s newsletter.

But BlueKai also uses more covert tactics like allowing websites to embed invisible pixel-sized images to collect information about you as soon as you open the page — hardware, operating system, browser and any information about the network connection.

This data — known as a web browser’s “user agent” — may not seem sensitive, but when fused together it can create a unique “fingerprint” of a person’s device, which can be used to track that person as they browse the internet.
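
For illustration only — this is not Oracle’s code — here is a minimal sketch of how a tracking pixel of the kind described above typically works. A page embeds an invisible 1x1 image served by the tracker; the image request automatically carries the browser’s user agent, language and network address, which the server can hash into a rough device fingerprint. The endpoint and field choices are hypothetical.

```python
# Hypothetical tracking-pixel endpoint: serve a 1x1 GIF, log the request
# attributes that arrive with it, and hash them into a rough fingerprint.
# (pip install flask)
import hashlib
from flask import Flask, Response, request

app = Flask(__name__)

# A standard 43-byte transparent 1x1 GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    # Everything below arrives automatically with the image request.
    attributes = (
        request.headers.get("User-Agent", ""),
        request.headers.get("Accept-Language", ""),
        request.remote_addr or "",
    )
    # Fusing the attributes and hashing them yields a quasi-stable identifier.
    fingerprint = hashlib.sha256("|".join(attributes).encode()).hexdigest()[:16]
    print(f"page view: fingerprint={fingerprint} page={request.referrer}")
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run()
```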

BlueKai can also tie your mobile web browsing habits to your desktop activity, allowing it to follow you across the internet no matter which device you use.

Say a marketer wants to run a campaign trying to sell a new car model. In BlueKai’s case, it already has a category of “car enthusiasts” — and many other, more specific categories — that the marketer can target with ads. Anyone who’s visited a car maker’s website or a blog that includes a BlueKai tracking pixel might be categorized as a “car enthusiast.” Over time, that person is siloed into more and more categories under a profile that learns as much as possible about them, the better to target them with those ads.


The technology is far from perfect. Harvard Business Review found earlier this year that the information collected by data brokers, such as Oracle, can vary wildly in quality.

But some of these platforms have proven alarmingly accurate.

In 2012, Target mailed maternity coupons to a high school student after an in-house analytics system figured out she was pregnant — before she had even told her parents — because of the data it collected from her web browsing.

Some might argue that’s precisely what these systems are designed to do.

Jonathan Mayer, a computer science professor at Princeton University, told TechCrunch that BlueKai is one of the leading systems for linking data.

“If you have the browser send an email address and a tracking cookie at the same time, that’s what you need to build that link,” he said.
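
Mayer’s point can be sketched in a few lines. The code below is a hypothetical toy, not BlueKai’s system: it simply shows that once a single request carries both a tracking-cookie ID and an email address, every past and future visit under that cookie becomes attributable to a named person.

```python
# Toy illustration of cookie-to-email linking: one co-occurrence of a
# cookie ID and an email address ties a whole browsing history to a person.
profiles: dict[str, dict] = {}  # keyed by tracking-cookie ID

def record_event(cookie_id: str, url: str, email: str | None = None) -> None:
    profile = profiles.setdefault(cookie_id, {"email": None, "visits": []})
    profile["visits"].append(url)
    if email:  # the one-time join Mayer describes
        profile["email"] = email

record_event("abc123", "https://carmaker.example/suv")
record_event("abc123", "https://shop.example/newsletter-signup",
             email="jane@example.com")
# Every visit under cookie "abc123" — past and future — is now Jane's.
print(profiles["abc123"])
```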

The end goal: the more BlueKai collects, the more it can infer about you, making it easier to target you with ads that might entice you to that magic money-making click.

But marketers can’t just log in to BlueKai and download reams of personal information from its servers, one marketing professional told TechCrunch. The data is sanitized and masked so that marketers never see names, addresses or any other personal data.

As Mayer explained: BlueKai collects personal data; it doesn’t share it with marketers.

‘No telling how revealing’

Behind the scenes, BlueKai continuously ingests and matches as much raw personal data as it can against each person’s profile, constantly enriching that profile data to make sure it’s up to date and relevant.

But it was that raw data that was spilling out of the exposed database.

TechCrunch found records containing details of private purchases. One record detailed how a German man, whose name we’re withholding, used a prepaid debit card to place a €10 bet on an esports betting site on April 19. The record also contained the man’s address, phone number and email address.

Another record revealed how one of the largest investment holding companies in Turkey used BlueKai to track users on its website. The record detailed how one person, who lives in Istanbul, ordered $899 worth of furniture online from a homeware store. We know because the record contained all of these details, including the buyer’s name, email address and the direct web address for the buyer’s order, no login needed.

We also reviewed a record detailing how one person unsubscribed from an email newsletter run by a consumer electronics company, sent to his iCloud address. The record showed that the person may have been interested in a specific model of car dash-cam. We can even tell based on his user agent that his iPhone was out of date and needed a software update.


The data went back for months, according to Sen, who discovered the database. Some logs dated back to August 2019, he said.

“Fine-grained records of people’s web-browsing habits can reveal hobbies, political affiliation, income bracket, health conditions, sexual preferences, and — as evident here — gambling habits,” said the EFF’s Cyphers. “As we live more of our lives online, this kind of data accounts for a larger and larger portion of how we spend our time.”

Oracle declined to say if it informed those whose data was exposed about the security lapse. The company also declined to say if it had warned U.S. or international regulators of the incident.

Under California state law, companies like Oracle are required to publicly disclose data security incidents, but Oracle has not to date declared the lapse. When reached, a spokesperson for California’s attorney general’s office declined to say if Oracle had informed the office of the incident.

Under Europe’s General Data Protection Regulation, companies can face fines of up to 4% of their global annual turnover for flouting data protection and disclosure rules.

Trackers, trackers everywhere

BlueKai is everywhere — even when you can’t see it.

One estimate says BlueKai tracks over 1% of all web traffic — an unfathomable amount of daily data collection — and tracks some of the world’s biggest websites: Amazon, ESPN, Forbes, Glassdoor, Healthline, Levi’s, MSN.com, Rotten Tomatoes, and The New York Times. Even this very article has a BlueKai tracker because our parent company, Verizon Media, is a BlueKai partner.

But BlueKai is not alone. Nearly every website you visit contains some form of invisible tracking code that watches you as you traverse the internet.

As invasive as it is that invisible trackers are feeding your web browsing data to a gigantic database in the cloud, it’s that very same data that has kept the internet largely free for so long.

To stay free, websites use advertising to generate revenue. The more targeted the advertising, the better the revenue is supposed to be.

While the majority of web users are not naive enough to think that internet tracking does not exist, few outside marketing circles understand how much data is collected and what is done with it.

Take Equifax, which drew scathing criticism from lawmakers after its 2017 data breach, having collected millions of consumers’ data without their explicit consent. Equifax, like BlueKai, relies on consumers skipping over the lengthy privacy policies that govern how websites track them.

In any case, consumers have little choice but to accept the terms. Be tracked or leave the site. That’s the trade-off with a free internet.

But there are dangers with collecting web-tracking data on millions of people.

“Whenever databases like this exist, there’s always a risk the data will end up in the wrong hands and in a position to hurt someone,” said Cyphers.

Cyphers said the data, if in the hands of someone malicious, could contribute to identity theft, phishing or stalking.

“It also makes a valuable target for law enforcement and government agencies who want to piggyback on the data gathering that Oracle already does,” he said.

Even when the data stays where it’s intended, these vast databases enable “manipulative advertising for things like political issues or exploitative services, and it allows marketers to tailor their messages to specific vulnerable populations,” Cyphers said.

“Everyone has different things they want to keep private, and different people they want to keep them private from,” said Cyphers. “When companies collect raw web browsing or purchase data, thousands of little details about real people’s lives get scooped up along the way.”

“Each one of those little details has the potential to put somebody at risk,” he said.


Send tips securely over Signal and WhatsApp to +1 646-755-8849.

Zoom U-turns on no e2e encryption for free users

In a major security U-turn, videoconferencing platform Zoom has said it will, after all, offer end-to-end encryption to all users — including those who do not pay to use its service.

The caveat is that free users must provide certain “additional” pieces of information for verification purposes (such as a phone number where they can respond to a verification link) before being allowed to use e2e encryption — which Zoom says is a necessary check so it can “prevent and fight abuse” on its platform. However it’s a major step up from the prior offer of ‘no e2e unless you pay us’.

“We are grateful to those who have provided their input on our E2EE design, both technical and philosophical,” Zoom writes in a blog update today. “We encourage everyone to continue to share their views throughout this complex, ongoing process.”

The company faced a storm of criticism earlier this month after Bloomberg reported comments by CEO Eric Yuan, who said it did not intend to provide e2e encryption for non-paying users because it wanted to be able to work with law enforcement.

Security and privacy experts waded in to blast the stance. One notable critic of the position was cryptography expert Matthew Green — whose name you’ll find listed on Zoom’s e2e encryption design white paper.

“Once the precedent is set that E2E encryption is too ‘dangerous’ to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it’s going to be hard to put it back,” Green warned in a nuanced Twitter thread.

Since the e2e encryption storm, Zoom has faced another scandal — this time related to privacy and censorship, after it admitted shutting down a number of Chinese activists’ accounts at the request of the Chinese government. So the company may have stumbled upon another good reason for reversing its stance — given it’s a lot more difficult to censor content you can’t see.

Explaining the shift in its blog post, Zoom says only that it follows a period of engagement “with civil liberties organizations, our CISO council, child safety advocates, encryption experts, government representatives, our own users, and others”.

“We have also explored new technologies to enable us to offer E2EE to all tiers of users,” it adds.

Its blog briefly discusses how non-paying users will be able to gain access to e2e encryption, with Zoom writing: “Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message.”

“Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our Report a User function — we can continue to prevent and fight abuse,” it adds.

Certain countries require an ID check to purchase a SIM card, so Zoom’s verification provision may make it impossible for some users to access e2e encryption without leaving an identity trail which state agencies could unpick.

Per Zoom’s blog post, a beta of the e2e encryption implementation will kick off in July. The platform’s default encryption remains AES 256 GCM in the meantime.

The forthcoming e2e encryption will not be switched on by default — but rather offered as an option. Zoom says this is because it limits some meeting functionality (“such as the ability to include traditional PSTN phone lines or SIP/H.323 hardware conference room systems”).

“Hosts will toggle E2EE on or off on a per-meeting basis,” it further notes, adding that account administrators will also have the ability to enable and disable E2EE at the account and group level.
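
Taken together, those two paragraphs describe a small settings hierarchy: account- and group-level switches set by admins, with a per-meeting toggle for hosts. The sketch below is a hypothetical model of that logic — the names and structure are illustrative, not Zoom’s actual API.

```python
# Hypothetical model of the E2EE enablement hierarchy Zoom describes:
# a meeting is end-to-end encrypted only if every level above it allows it.
from dataclasses import dataclass

@dataclass
class E2EEPolicy:
    account_enabled: bool        # admin switch for the whole account
    group_enabled: bool | None   # optional group-level override
    host_toggle: bool            # the host's per-meeting choice

def meeting_uses_e2ee(policy: E2EEPolicy) -> bool:
    if not policy.account_enabled:
        return False
    if policy.group_enabled is False:  # explicit group-level disable
        return False
    return policy.host_toggle

print(meeting_uses_e2ee(E2EEPolicy(True, None, True)))   # True
print(meeting_uses_e2ee(E2EEPolicy(True, False, True)))  # False
```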

Today the company also released a v2 update of its e2e encryption design — posting the spec to GitHub.
