Rapid Huawei rip-out could cause outages and security risks, warns UK telco

The chief executive of UK incumbent telco BT has warned any government move to require a rapid rip-out of Huawei kit from existing mobile infrastructure could cause network outages for mobile users and generate its own set of security risks.

Huawei has been the focus of concern for Western governments including the US and its allies because of the scale of its role in supplying international networks and next-gen 5G, and its close ties to the Chinese government — leading to fears that relying on its equipment could expose nations to cybersecurity threats and weaken national security.

The UK government is widely expected to announce a policy shift tomorrow, following reports earlier this year that it would reverse course on so-called “high risk” vendors and mandate a phase-out of such kit in 5G networks by 2023.

Speaking to BBC Radio 4’s Today program this morning, BT CEO Philip Jansen said he was not aware of the detail of any new government policy but warned too rapid a removal of Huawei equipment would carry its own risks.

“Security and safety in the short term could be put at risk. This is really critical — because if you’re not able to buy or transact with Huawei that would mean you wouldn’t be able to get software upgrades if you take it to that specificity,” he said.

“Over the next five years we’d expect 15-20 big software upgrades. If you don’t have those you’re running gaps in critical software that could have security implications far bigger than anything we’re talking about in terms of managing to a 35% cap in the access network of a mobile operator.”

“If we get a situation where things need to go very, very fast then you’re in a situation where potentially service for 24M BT Group mobile customers is put into question,” he added, warning that “outages would be possible”.

Back in January the government issued a much-delayed policy announcement setting out an approach to what it dubbed “high risk” 5G vendors — detailing a package of restrictions it said were intended to mitigate any risk, including capping their involvement at 35% of the access network. Such vendors would also be entirely barred from the sensitive “core” of 5G networks. However the UK has faced continued international and domestic opposition to the compromise policy, including from within the governing party.

Wider geopolitical developments — such as additional US sanctions on Huawei and China’s approach to Hong Kong, a former British colony — appear to have worked to shift the political weather in Number 10 Downing Street against allowing even a limited role for Huawei.

Asked about the feasibility of BT removing all Huawei kit, not just equipment used for 5G, Jansen suggested the company would need at least a decade to do so.

“It’s all about timing and balance,” he told the BBC. “If you wanted to have no Huawei in the whole telecoms infrastructure across the whole of the UK I think that’s impossible to do in under ten years.”

If the government policy is limited to only removing such kit from 5G networks, Jansen said “ideally” BT would want seven years to carry out the work — though he conceded it “could probably do it in five”.

“The current policy announced in January was to cap the use of Huawei or any high risk vendor to 35% in the access network. We’re working towards that 35% cap by 2023 — which I think we can make although it has implications in terms of roll out costs,” he went on. “If the government makes a policy decision which effectively heralds a change from that announced in January then we just need to understand the potential implications and consequences of that.

“Again we always — at BT and in discussions with GCHQ — we always take the approach that security is absolutely paramount. It’s the number one priority. But we need to make sure that any change of direction doesn’t lead to more risk in the short term. That’s where the detail really matters.”

Jansen fired a further warning shot at Johnson’s government, which has made a major push to accelerate the rollout of fixed fiber broadband across the country as part of a pledge to “upgrade” the UK, saying too tight a timeline to remove Huawei kit would jeopardize this “build out for the future”. Instead, he urged that “common sense” prevail.

“There is huge opportunity for the economy, for the country and for all of us from 5G and from full fiber to the home and if you accelerate the rip out obviously you’re not building either so we’ve got to understand all those implications and try and steer a course and find the right balance to managing this complicated issue.

“It’s really important that we very carefully weigh up all the different considerations and find the right way through this — depending on what the policy is and what’s driving the policy. BT will obviously and is talking directly with all parts of government, [the National] Cyber Security Centre, GCHQ, to make sure that everybody understands all the information and a sensible decision is made. I’m confident that in the end common sense will prevail and we will head down the right direction.”

Asked whether it agrees there are security risks attached to an accelerated removal of Huawei kit, the UK’s National Cyber Security Centre declined to comment. But a spokesperson for the NCSC pointed us to an earlier statement in which it said: “The security and resilience of our networks is of paramount importance. Following the US announcement of additional sanctions against Huawei, the NCSC is looking carefully at any impact they could have to the U.K.’s networks.”

We’ve also reached out to DCMS for comment.

Amsterdam ejects Airbnb et al from three central districts in latest p2p platform limits

Another brick in the wall for vacation rental platforms: Amsterdam is booting Airbnb and other such platforms from three districts in the city’s old center from July 1, further tightening its rules for such services.

In other districts in the famous city of canals, vacation rentals will only be allowed with a permit from next Wednesday, and still only for a maximum of 30 nights per year.

The latest tightening of the city’s rules on Airbnb and similar platforms comes after a period of consultation with residents and organizations which city authorities say drew 780 responses — a full 75% of which supported banning the platforms from operating in the three central districts.

The three districts where vacation rentals on platforms such as Airbnb are prohibited from next Wednesday are: Burgwallen-Oude Zijde, Burgwallen-Nieuwe Zijde and the Grachtengordel-Zuid.

“This [consultation] indicates that the subject is very much alive among Amsterdammers. What is striking is that no less than 75% are in favor of a ban on holiday rentals in the three districts,” said deputy mayor Laurens Ivens in a press release [translated from Dutch using DeepL].

Furthermore, Ivens said the consultation exercise showed some support for a citywide ban on such platforms. However current pan-EU rules — notably the European Services Directive — limit how cities can respond to public sentiment against such services. Hence Amsterdam is applying the ban to specific districts where it has been able to confirm tourism leads to major disruption.

The legal cover afforded to vacation platforms operating in the region by the European Services Directive has shown itself to be robust to challenge, after Europe’s top court ruled in December that Airbnb is an online intermediation service. A French tourism association had sought to argue the platform should instead be required to comply with real estate regulations.

Ivens said Amsterdam will conduct another tourism review in two years — and may add more districts to the ban list if it finds similar problems have migrated there.

These are by no means the first restrictions the city has put on vacation rental platforms. Back in 2018 it tightened a cap on the number of nights properties can be rented, squeezing it from 60 nights to 30 per year.

Yet despite such restrictions, city authorities note tourist rental of homes has experienced “strong growth” in recent years, with 1 in 15 homes in Amsterdam being offered online. They also said the supply of homes on the various platforms has increased fivefold — amounting to around 25,000 advertisements per month.

Due to this increase, tourist rental has an increasingly negative impact on the quality of life in various Amsterdam neighborhoods, the council writes in a press release.

The permit system which is also being brought in is intended to aid enforcement of the tighter rules — with stipulations that a house must be inhabited, and that rentals, capped at 30 nights per year, can host a maximum of four people at a time. The council has also made it mandatory for those renting homes on vacation rental platforms to report to the municipality every time the house is rented — so it will be building up its own dataset on how these platforms are being used.

Additional changes to Amsterdam’s housing regulations also include higher fines for repeat offender landlords, such as if they rent a property without a permit or violate the maximum number of nights for holiday rentals.

The city has also put limits on conversions, stipulating that only properties larger than 100m² may be converted into two or more smaller homes — a provision that seems aimed at landlords who try to maximize holiday rental income by turning a single larger property into two or more smaller flats, thereby reducing suitable housing stock for larger families.

After early skirmishes between cities and vacation rental platforms related to the collection of tourist taxes, access to data remains an ongoing bone of contention — with cities pressing platforms to share data in order that they can enforce tighter regulations. Platforms, meanwhile, have a clear commercial incentive to avoid such transparency.

In 2018, for example, city officials in Amsterdam called for Airbnb to share “specific rental data with authorities — who is renting out for how long, and to how many people”.

We’ve asked Airbnb to confirm what data it shares with the city now.

The European Commission has sought to play a mediating role here, announcing earlier this year it had secured agreement with p2p rental platforms Airbnb, Booking.com, Expedia Group and Tripadvisor to share limited pan-EU data — and saying it wanted to encourage “balanced” development of the sector while noting concerns that such platforms put unsustainable pressure on local communities.

The initial pan-EU data points the platforms agreed to share are number of nights booked and number of guests, aggregated at the level of “municipalities.” A second phase of the arrangement will see platforms share data on the number of properties rented and the proportion that are full property rentals vs rooms in occupied properties.
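
To make the shape of that first-phase data concrete, here is a minimal Python sketch of municipality-level aggregation. It is purely illustrative — the field names and records are invented, not the platforms’ actual reporting schema:

```python
# Hypothetical sketch of the first-phase data points described above.
# Field names ("municipality", "nights", "guests") are invented for
# illustration; they are not the platforms' actual reporting schema.
from collections import defaultdict

bookings = [
    {"municipality": "Amsterdam", "nights": 3, "guests": 2},
    {"municipality": "Amsterdam", "nights": 5, "guests": 4},
    {"municipality": "Utrecht", "nights": 2, "guests": 1},
]

# Aggregate per-booking records up to municipality level, so no
# individual listing or guest is identifiable in the shared data.
totals = defaultdict(lambda: {"nights_booked": 0, "guests": 0})
for b in bookings:
    totals[b["municipality"]]["nights_booked"] += b["nights"]
    totals[b["municipality"]]["guests"] += b["guests"]

print(dict(totals))
# {'Amsterdam': {'nights_booked': 8, 'guests': 6},
#  'Utrecht': {'nights_booked': 2, 'guests': 1}}
```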

However the Commission is also in the process of updating the rules around digital services, via the forthcoming Digital Services Act. So it’s possible it could propose specific data access obligations on vacation rental platforms.

We reached out to the Commission to ask if it’s considering updates in this area and will update this report with any response.

Ten EU cities — including Amsterdam — penned an open letter last year, calling on the Commission to introduce “strong legal obligations for platforms to cooperate with us in registration-schemes and in supplying rental-data per house that is advertised on their platforms”. So the regional pressure for better platform governance is loud and clear.

Privacy not a blocker for “meaningful” research access to platform data, says report

European lawmakers are eyeing binding transparency requirements for Internet platforms in a Digital Services Act (DSA) due to be drafted by the end of the year. But the question of how to create governance structures that provide regulators and researchers with meaningful access to data so platforms can be held accountable for the content they’re amplifying is a complex one.

Platforms’ own efforts to open up their data troves to outside eyes have been chequered to say the least. Back in 2018, Facebook announced the Social Science One initiative, saying it would provide a select group of academics with access to about a petabyte’s worth of sharing data and metadata. But it took almost two years before researchers got access to any data.

“This was the most frustrating thing I’ve been involved in, in my life,” one of the involved researchers told Protocol earlier this year, after spending some 20 months negotiating with Facebook over exactly what it would release.

Facebook’s political Ad Archive API has similarly frustrated researchers. “Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing),” said Mozilla last year, accusing the tech giant of transparency-washing.

Facebook, meanwhile, points to European data protection regulations — and privacy requirements attached to its business following interventions by the US’ FTC — to justify its painstaking progress around data access. But critics argue this is just a cynical shield against transparency and accountability. Plus, of course, none of these regulations stopped Facebook grabbing people’s data in the first place.

In January, Europe’s lead data protection regulator penned a preliminary opinion on data protection and research which warned against such shielding.

“Data protection obligations should not be misappropriated as a means for powerful players to escape transparency and accountability,” wrote EDPS Wojciech Wiewiórowski. “Researchers operating within ethical governance frameworks should therefore be able to access necessary API and other data, with a valid legal basis and subject to the principle of proportionality and appropriate safeguards.”

Nor is Facebook the sole offender here, of course. Google brands itself a ‘privacy champion’ on account of how tight a grip it keeps on access to user data, heavily mediating the data it releases in areas where it claims ‘transparency’. And, for years, Twitter routinely disparaged third-party studies which sought to understand how content flows across its platform — saying its API didn’t provide full access to all platform data and metadata, so the research couldn’t show the full picture. Another convenient shield to eschew accountability.

More recently the company has made some encouraging noises to researchers, updating its dev policy to clarify rules, and offering up a COVID-related dataset — though the included tweets remain self-selected. So Twitter’s mediating hand remains on the research tiller.

A new report by AlgorithmWatch seeks to grapple with the knotty problem of platforms evading accountability by mediating data access — suggesting some concrete steps to deliver transparency and bolster research, including by taking inspiration from how access to medical data is mediated, among other discussed governance structures.

The goal: “Meaningful” research access to platform data. (Or, as the report title puts it: Operationalizing Research Access in Platform Governance: What to Learn from Other Industries?)

“We have strict transparency rules to enable accountability and the public good in so many other sectors (food, transportation, consumer goods, finance, etc). We definitely need it for online platforms — especially in COVID-19 times, where we’re even more dependent on them for work, education, social interaction, news and media consumption,” co-author Jef Ausloos tells TechCrunch.

The report, which the authors are aiming at European Commission lawmakers as they ponder how to shape an effective platform governance framework, proposes mandatory data sharing frameworks with an independent EU institution acting as an intermediary between disclosing corporations and data recipients.

“Such an institution would maintain relevant access infrastructures including virtual secure operating environments, public databases, websites and forums. It would also play an important role in verifying and pre-processing corporate data in order to ensure it is suitable for disclosure,” they write in a report summary.

Discussing the approach further, Ausloos argues it’s important to move away from “binary thinking” to break the current ‘data access’ trust deadlock. “Rather than this binary thinking of disclosure vs opaqueness/obfuscation, we need a more nuanced and layered approach with varying degrees of data access/transparency,” he says. “Such a layered approach can hinge on types of actors requesting data, and their purposes.”

A market research purpose might only get access to very high level data, he suggests. Whereas medical research by academic institutions could be given more granular access — subject, of course, to strict requirements (such as a research plan, ethical board review approval and so on).

“An independent institution intermediating might be vital in order to facilitate this and generate the necessary trust. We think it is vital that that regulator’s mandate is detached from specific policy agendas,” says Ausloos. “It should be focused on being a transparency/disclosure facilitator — creating the necessary technical and legal environment for data exchange. This can then be used by media/competition/data protection/etc authorities for their potential enforcement actions.”

Ausloos says many discussions on setting up an independent regulator for online platforms have proposed too many mandates or competencies — making it impossible to achieve political consensus. Whereas a leaner entity with a narrow transparency/disclosure remit should be able to cut through noisy objections, is the theory.

The infamous example of Cambridge Analytica does certainly loom large over the ‘data for research’ space — aka, the disgraced data company which paid a Cambridge University academic to use an app to harvest and process Facebook user data for political ad targeting. And Facebook has thought nothing of turning this massive platform data misuse scandal into a stick to beat back regulatory proposals aiming to crack open its data troves.

But Cambridge Analytica was a direct consequence of a lack of transparency, accountability and platform oversight. It was also, of course, a massive ethical failure — given that consent for political targeting was not sought from people whose data was acquired. So it doesn’t seem a good argument against regulating access to platform data. On the contrary.

With such ‘blunt instrument’ tech talking points being lobbed into the governance debate by self-interested platform giants, the AlgorithmWatch report brings both welcome nuance and solid suggestions on how to create effective governance structures for modern data giants.

On the layered access point, the report suggests the most granular access to platform data would be the most highly controlled, along the lines of a medical data model. “Granular access can also only be enabled within a closed virtual environment, controlled by an independent body — as is currently done by Findata [Finland’s medical data institution],” notes Ausloos.

Another governance structure discussed in the report — as a case study from which to draw learnings on how to incentivize transparency and thereby enable accountability — is the European Pollutant Release and Transfer Register (E-PRTR). This regulates pollutant emissions reporting across the EU, and results in emissions data being freely available to the public via a dedicated web-platform and as a standalone dataset.

“Credibility is achieved by assuring that the reported data is authentic, transparent and reliable and comparable, because of consistent reporting. Operators are advised to use the best available reporting techniques to achieve these standards of completeness, consistency and credibility,” the report says on the E-PRTR.

“Through this form of transparency, the E-PRTR aims to impose accountability on operators of industrial facilities in Europe towards the public, NGOs, scientists, politicians, governments and supervisory authorities.”

While EU lawmakers have signalled an intent to place legally binding transparency requirements on platforms — at least in some less contentious areas, such as illegal hate speech, as a means of obtaining accountability on some specific content problems — they have simultaneously set out a sweeping plan to fire up Europe’s digital economy by boosting the reuse of (non-personal) data.

Leveraging industrial data to support R&D and innovation is a key plank of the Commission’s tech-fuelled policy priorities for the next five+ years, as part of an ambitious digital transformation agenda.

This suggests that any regional move to open up platform data is likely to go beyond accountability — given EU lawmakers are pushing for the broader goal of creating a foundational digital support structure to enable research through data reuse. So if privacy-respecting data sharing frameworks can be baked in, a platform governance structure that’s designed to enable regulated data exchange almost by default starts to look very possible within the European context.

“Enabling accountability is important, which we tackle in the pollution case study; but enabling research is at least as important,” argues Ausloos, who does postdoc research at the University of Amsterdam’s Institute for Information Law. “Especially considering these platforms constitute the infrastructure of modern society, we need data disclosure to understand society.”

“When we think about what transparency measures should look like for the DSA we don’t need to reinvent the wheel,” adds Mackenzie Nelson, project lead for AlgorithmWatch’s Governing Platforms Project, in a statement. “The report provides concrete recommendations for how the Commission can design frameworks that safeguard user privacy while still enabling critical research access to dominant platforms’ data.”

GDPR’s two-year review flags lack of “vigorous” enforcement

It’s more than two years since a flagship update to the European Union’s data protection regime moved into the application phase. Yet the General Data Protection Regulation (GDPR) has been dogged by criticism of a failure of enforcement related to major cross-border complaints — lending weight to critics who claim the legislation has created a moat for dominant multinationals, at the expense of smaller entities.

Today the European Commission responded to that criticism as it gave a long-scheduled assessment of how the regulation is functioning, in its first review two years in.

While EU lawmakers’ top-line message is the clear claim: ‘GDPR is working’ — with commissioners lauding what they couched as the many positives of this “modern and horizontal piece of legislation”; which they also said has become a “global reference point” — they conceded there is a “very serious to-do list”, calling for uniformly “vigorous” enforcement of the regulation across the bloc.

So, in other words, GDPR decisions need to flow more smoothly than they have so far.

Speaking at a Commission briefing today, Věra Jourová, Commission VP for values and transparency, said: “The European Data Protection Board and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement.

“We have to work together, as the Board and the Member States, to address concerns — in particular those of the small and medium enterprises.”

Justice commissioner, Didier Reynders, also speaking at the briefing, added: “We have to ensure that [GDPR] is applied harmoniously — or at least with the same vigour across the European territory. There may be some nuanced differences but it has to be applied with the same vigour.

“In order for that to happen data protection authorities have to be sufficiently equipped — they have to have the relevant number of staff, the relevant budgets, and there is a clear will to move in that direction.”

Front and center for GDPR enforcement is the issue of resourcing for national data protection authorities (DPAs), who are tasked with providing oversight and issuing enforcement decisions.

Jourová noted today that EU DPAs — taken as a whole — have increased headcount by 42% and budget by 49% between 2016 and 2019.

However that’s an aggregate which conceals major differences in resourcing. A recent report by pro-privacy browser Brave found that half of all national DPAs receive just €5M or less in annual budget from their governments, for example. Brave also found budget increases peaked around the application of the GDPR — and that, two years in, governments are now slowing the increases.

It’s also true that DPA case load isn’t uniform across the bloc, with certain Member States (notably Ireland and Luxembourg) handling many more and/or more complex complaints than others as a result of how many multinationals locate their regional HQs there.

One key issue for GDPR thus relates to how the regulation handles cross border cases.

A one-stop-shop mechanism was supposed to simplify this process — by having a single regulator (typically in the country where the business has its main establishment) take the lead on complaints that affect users in multiple Member States, with other interested DPAs not dealing directly with the data processor. Those DPAs do remain involved, however — and, once there’s a draft decision, play an important role, as they can raise objections to whatever the lead regulator has decided.

However a lot of friction seems to be creeping into current processes — via technical issues related to sharing data between DPAs, and via the opportunity for additional legal delays.

In the case of big tech, GDPR’s one-stop-shop has resulted in a major backlog around enforcement, with multiple complaints being re-routed via Ireland’s Data Protection Commission (DPC) — which is yet to issue a single decision on a cross-border case, and has more than 20 such investigations ongoing.

Last month Ireland’s DPC trailed looming decisions on Twitter and Facebook — saying it had submitted a draft decision on the Twitter case to fellow DPAs and expressed hope that case could be finalized in July.

Its data protection commissioner, Helen Dixon, had previously suggested the first cross-border decisions would be coming in “early” 2020. In the event, we’re past halfway through the year with still no enforcement on show.

This looks especially problematic as there is a counter example elsewhere in the EU: France’s CNIL managed to issue a decision in a major GDPR case against Google all the way back in January 2019. Last week the country’s top court for administrative law cemented the regulator’s findings — dismissing Google’s appeal. Its $57M fine against Google remains the largest yet levied against big tech under GDPR.

Asked directly whether the Commission believes Ireland’s DPC is sufficiently resourced — with the questioner noting it has multiple ongoing investigations into Facebook, in particular, with still no decisions taken on the company — Jourová emphasized DPAs are “fully independent”, before adding: “The Commission has no tools to push them to speed up but the cases you mention, especially the cases that relate to big tech, are always complex and they require thorough investigation — and it simply requires more time.”

However CNIL’s example shows effective enforcement against major tech platforms is possible — at least, where there’s a will to take on corporate power. Though France’s relative agility may also have something to do with not having to deal simultaneously with such a massive load of complex cross-border cases.

At the same time, critics point to Ireland’s cosy political relationship with the corporate giants it attracts via low tax rates — which in turn raises plenty of questions when set against the oversized role its DPA has in overseeing most of big tech. The stench of forum shopping is unmistakable.

Criticism of national regulators extends beyond Ireland, too, though. In the UK, privacy experts have slammed the ICO’s repeated failure to enforce the law against the adtech industry — despite its own assessments finding systemic flouting of the law. The country remains an EU Member State until the end of the year — and the ICO is the best resourced DPA in the bloc, in terms of budget and headcount (and likely tech expertise too). Which hardly reflects well on the functional state of the regulation.

Despite all this, the Commission continues to present GDPR as a major geopolitical success, claiming — as it did again today — that it’s ahead of the digital regulatory curve globally at a time when lawmakers almost everywhere are considering putting harder limits on Internet players.

But there’s only so long it can sell a success on paper. Without consistently “vigorous” enforcement, the whole framework crumbles — so the EU’s executive has serious skin in the game when it comes to GDPR actually doing what it says on the tin.

Pressure is coming from commercial quarters too — not only privacy and consumer rights groups.

Earlier this year, Brave lodged a complaint with the Commission against 27 EU Member States — accusing them of under-resourcing their national data protection watchdogs. It called on the EU executive to launch an infringement procedure against national governments, and refer them to the bloc’s top court if necessary. So startups are banging the drum for enforcement too.

If decision wheels don’t turn on their own, courts may eventually be needed to force Europe’s DPAs to get a move on — albeit, the Commission is still hoping it won’t have to come to that.

“We saw a considerable increase of capacities both in Ireland and Luxembourg,” said Jourová, discussing the DPA resourcing issue. “We saw a sufficient increase in at least half of other Member States DPAs so we have to let them do very responsible and good work — and of course wait for the results.”

Reynders suggested that while there has been an increase in resource for DPAs the Commission may need to conduct a “deeper” analysis — to see if more resource is needed in some Member States, “due to the size of the companies at work in the jurisdiction of such a national authority”.

“We have huge differences between the Member States about the need to react to the requests from the companies. And of course we need to reinforce the cooperation and the co-ordination on cross border issues. We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” he said.

“So it’s not only a question to have the same kind of approach in all the Member States. It’s to be fit to all the demands coming in your jurisdiction and it’s true that in some jurisdictions we have more multinationals and more members of high tech companies than in others.”

“The best answer will be a decision from the Irish data protection authority about important cases,” he added.

We’ve reached out to the Irish DPC and the EDPB for comment on the Commission’s GDPR assessment.

Asked whether the Commission has a list of Member States that it might instigate infringement proceedings against related to the terms of GDPR — which, for example, require governments to provide adequate resourcing to their national DPA in order that they can properly oversee the regulation — Reynders said it doesn’t currently have such a list.

“We have a list of countries where we try to see if it’s possible to reinforce the possibilities for the national authorities to have enough resources — human resources, financial resources, to organize better cross border activities — if at the end we see there’s a real problem about the enforcement of the GDPR in one Member State we will propose to go maybe to the court with an infringement proceeding — but we don’t have, for the moment, a list of countries to organize such a kind of process,” he said.

The commissioners were a lot more comfortable talking up the positives of GDPR, with Jourová noting, with a sphinx-like smile, how three years ago there was “literal panic” and an army of lobbyists warning of a “doomsday” for business and innovation should the legislation pass. “I have good news today — no doomsday was here,” she said.

“Our approach to the GDPR was the right one,” she went on. “It created the more harmonized rules across the Single Market and more and more companies are using GDPR concepts, such as privacy by design and by default, as a competitive differentiation.

“I can say that the philosophy of one continent, one law is very advantageous for European small and medium enterprises who want to operate on the European Single Market.

“In general GDPR has become a truly European trade mark,” she added. “It puts people and their rights at the center. It does not leave everything to the market like in the US. And it does not see data as a means for state supervision, as in China. Our truly European approach to data is the first answer to difficult questions we face as a society.”

She also noted that the regulation served as inspiration for the current Commission’s tech-focused policy priorities — including a planned “human centric approach to AI“.

“It makes us pause before facial recognition technology, for instance, will be fully developed or implemented. And I dare to say that it makes Europe fit for the digital age. On the international side the GDPR has become a reference point — with a truly global convergence movement. In this context we are happy to support trade and safe digital data flows and work against digital protectionism.”

Another success the commissioners credited to the GDPR framework is the region’s relatively swift digital response to the coronavirus — with the regulation helping DPAs to more quickly assess the privacy implications of COVID-19 contacts tracing apps and tools.

Reynders lauded “a certain degree of flexibility in the GDPR” which he said had been able to come into play during the crisis, feeding into discussions around tracing apps — on “how to ensure protection of personal data in the context of such tracing apps linked to public and individual health”.

Also on the Commission’s to-do list: ensuring DPAs provide more such support around the application of the regulation, by coming out with guidelines on other new technologies. “In various new areas we will have to be able to provide guidance quickly, just as we did on the tracing apps recently,” noted Reynders.

Further increasing public awareness of GDPR and the rights it affords is another Commission focus — though it said more than two-thirds of EU citizens above the age of 16 have at least heard of the GDPR. But it wants citizens to be able to make what Reynders called “best use” of their rights, perhaps via new applications.

“So the GDPR provides support to innovation in this respect,” he said. “And there’s a lot of work that still needs to be done in order to strengthen innovation.”

“We also have to convince those who may still be reticent about the GDPR. Certain companies, for instance, who have complained about how difficult it is to implement it. I think we need to explain to them what the requirements of the GDPR [are] and how they can implement these,” he added.

On illegal hate speech, EU lawmakers eye binding transparency for platforms

It’s more than four years since major tech platforms signed up to a voluntary pan-EU Code of Conduct on illegal hate speech removals. Yesterday the European Commission’s latest assessment of the non-legally binding agreement lauded “overall positive” results — with 90% of flagged content assessed within 24 hours and 71% of the content deemed to be illegal hate speech removed. The latter is up from just 28% in 2016.

However the report card finds platforms are still lacking in transparency. Nor are they providing users with adequate feedback on hate speech removals, in the Commission’s view.

Platforms responded and gave feedback to 67.1% of the notifications received, per the report card — up from 65.4% in the previous monitoring exercise. Only Facebook informs users systematically — with the Commission noting: “All the other platforms have to make improvements.”

In another criticism, its assessment of platforms’ performance in dealing with hate speech reports found inconsistencies in their evaluation processes — with “separate and comparable” assessments of flagged content that were carried out over different time periods showing “divergences” in how they were handled.

Signatories to the EU online hate speech code are: Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, Twitter and YouTube.

This is now the fifth biannual evaluation of the code. It may not yet be the final assessment but EU lawmakers’ eyes are firmly turned toward a wider legislative process — with commissioners now busy consulting on and drafting a package of measures to update the laws wrapping digital services.

A draft of this Digital Services Act is slated to land by the end of the year, with commissioners signalling they will update the rules around online liability and seek to define platform responsibilities vis-a-vis content.

Unsurprisingly, then, the hate speech code is now being talked about as feeding that wider legislative process — while the self-regulatory effort looks to be reaching the end of the road. 

The code’s signatories are also clearly no longer a comprehensive representation of the swathe of platforms in play these days. There’s no WhatsApp, for example, nor TikTok (which did just sign up to a separate EU Code of Practice targeted at disinformation). But that hardly matters if legal limits on illegal content online are being drafted — and likely to apply across the board. 

Commenting in a statement, Věra Jourová, Commission VP for values and transparency, said: “The Code of conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”

In another supporting statement, Didier Reynders, commissioner for Justice, added: “The forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”

Earlier this month, at a briefing discussing Commission efforts to tackle online disinformation, Jourová suggested lawmakers are ready to set down some hard legal limits online where illegal content is concerned, telling journalists: “In the Digital Services Act you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction.” Disinformation would not likely get the same treatment, she suggested.

The Commission has now further signalled it will consider ways to prompt all platforms that deal with illegal hate speech to set up “effective notice-and-action systems”.

In addition, it says it will continue — this year and next — to work on facilitating the dialogue between platforms and civil society organisations that are focused on tackling illegal hate speech, saying that it especially wants to foster “engagement with content moderation teams, and mutual understanding on local legal specificities of hate speech”.

In its own report last year assessing the code of conduct, the Commission concluded that it had contributed to achieving “quick progress”, particularly on the “swift review and removal of hate speech content”.

It also suggested the effort had “increased trust and cooperation between IT Companies, civil society organisations and Member States authorities in the form of a structured process of mutual learning and exchange of knowledge” — noting that platforms reported “a considerable extension of their network of ‘trusted flaggers’ in Europe since 2016.”

“Transparency and feedback are also important to ensure that users can appeal a decision taken regarding content they posted as well as being a safeguard to protect their right to free speech,” the Commission report also notes, specifying that Facebook reported having received 1.1 million appeals related to content actioned for hate speech between January 2019 and March 2019, and that 130,000 pieces of content were restored “after a reassessment”.

On volumes of hate speech, the Commission suggested notices on hate speech content account for roughly 17-30% of total content notices, noting for example that Facebook reported having removed 3.3M pieces of content for violating hate speech policies in the last quarter of 2018 and 4M in the first quarter of 2019.

“The ecosystems of hate speech online and magnitude of the phenomenon in Europe remains an area where more research and data are needed,” the report added.

French court slaps down Google’s appeal against $57M GDPR fine

France’s top court for administrative law has dismissed Google’s appeal against a $57M fine issued by the data watchdog last year for not making it clear enough to Android users how it processes their personal information.

The State Council issued the decision today, affirming the data watchdog CNIL’s earlier finding that Google did not provide “sufficiently clear” information to Android users — which in turn meant it had not legally obtained their consent to use their data for targeted ads.

“Google’s request has been rejected,” a spokesperson for the Conseil D’Etat confirmed to TechCrunch via email.

“The Council of State confirms the CNIL’s assessment that information relating to targeting advertising is not presented in a sufficiently clear and distinct manner for the consent of the user to be validly collected,” the court also writes in a press release [translated with Google Translate] on its website.

It found the size of the fine to be proportionate — given the severity and ongoing nature of the violations.

Importantly, the court also affirmed the jurisdiction of France’s national watchdog to regulate Google — at least on the date when this penalty was issued (January 2019).

The CNIL’s multimillion dollar fine against Google remains the largest to date against a tech giant under Europe’s flagship General Data Protection Regulation (GDPR) — lending the case a certain symbolic value, for those concerned about whether the regulation is functioning as intended vs platform power.

While the size of the fine is still relative peanuts vs Google’s parent entity Alphabet’s global revenue, changes the tech giant may have to make to how it harvests user data could be far more impactful to its ad-targeting bottom line. 

Under European law, for consent to be a valid legal basis for processing personal data it must be informed, specific and freely given. Or, to put it another way, consent cannot be strained.

In this case French judges concluded Google had not provided clear enough information for consent to be lawfully obtained — including objecting to a pre-ticked checkbox which the court affirmed does not meet the requirements of the GDPR.

So, tl;dr, the CNIL’s decision has been entirely vindicated.

Reached for comment on the court’s dismissal of its appeal, a Google spokeswoman sent us this statement:

People expect to understand and control how their data is used, and we’ve invested in industry-leading tools that help them do both. This case was not about whether consent is needed for personalised advertising, but about how exactly it should be obtained. In light of this decision, we will now review what changes we need to make.

GDPR came into force in 2018, updating long-standing European data protection rules and opening up the possibility of supersized fines of up to 4% of global annual turnover.

However actions against big tech have largely stalled, with scores of complaints being funnelled through Ireland’s Data Protection Commission — on account of a one-stop-shop mechanism in the regulation — causing a major backlog of cases. The Irish DPC has yet to issue decisions on any cross-border complaints, though it has said its first ones are imminent — on complaints involving Twitter and Facebook.

Ireland’s data watchdog is also continuing to investigate a number of complaints against Google, following the company’s announcement that Google Ireland Limited, based in Dublin, would become the legal entity responsible for processing European users’ data — a change it said applied from January 22, 2019. Ongoing Irish DPC probes include a long-running complaint related to how Google handles location data and another major investigation of its adtech, to name two.

On the GDPR one-stop shop mechanism — and, indirectly, the wider problematic issue of ‘forum shopping’ and European data protection regulation — the French State Council writes: “Google believed that the Irish data protection authority was solely competent to control its activities in the European Union, the control of data processing being the responsibility of the authority of the country where the main establishment of the data controller is located, according to a ‘one-stop-shop’ principle instituted by the GDPR. The Council of State notes however that at the date of the sanction, the Irish subsidiary of Google had no power of control over the other European subsidiaries nor any decision-making power over the data processing, the company Google LLC located in the United States with this power alone.”

In its own statement responding to the court’s decision, the CNIL notes the court’s view that GDPR’s one-stop-shop mechanism was not applicable in this case — writing: “It did so by applying the new European framework as interpreted by all the European authorities in the guidelines of the European Data Protection Committee.”

Privacy NGO noyb — one of the privacy campaign groups which lodged the original ‘forced consent’ complaint against Google, all the way back in May 2018 — welcomed the court’s decision on all fronts, including the jurisdiction point.

Commenting in a statement, noyb’s honorary chairman, Max Schrems, said: “It is very important that companies like Google cannot simply declare themselves to be ‘Irish’ to escape the oversight by the privacy regulators.”

A key question is whether CNIL — or another (non-Irish) EU DPA — will be found to be competent to sanction Google in future, following its shift to naming its Google Ireland subsidiary as the regional data processor. (Other tech giants use the same or a similar playbook, seeking out the EU’s more ‘business-friendly’ regulators.)

On the wider ruling, Schrems also said: “This decision requires substantial improvements by Google. Their privacy policy now really needs to make it crystal clear what they do with users’ data. Users must also get an option to agree to only some parts of what Google does with their data and refuse other things.”

French digital rights group, La Quadrature du Net — which had filed a related complaint against Google, feeding the CNIL’s investigation — also declared victory today, noting it’s the first sanction in a number of GDPR complaints it has lodged against tech giants on behalf of 12,000 citizens.

“The rest of the complaints against Google, Facebook, Apple and Microsoft are still under investigation in Ireland. In any case, this is what this authority promises us,” it added in another tweet.

Germany tightens online hate speech rules to make platforms send reports straight to the feds

While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.

The move is part of a wider push by the German government to tackle a rise in right-wing extremism and hate crime — which it links to the spread of hate speech online.

Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines — as tight as 24 hours for easy cases — with fines of up to €50M should they fail to comply.

Yesterday the parliament passed a reform which extends NetzDG by requiring platforms to report certain types of “criminal content” directly to the Federal Criminal Police Office.

A wider reform of the NetzDG law remains ongoing in parallel. It is intended to bolster user rights and transparency — including by simplifying user notifications, and by making it easier for people to object to content removals and have content restored after a successful appeal, among other tweaks. Broader transparency reporting requirements are also looming for platforms.

The NetzDG law has always been controversial, with critics warning from the get go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine. (Aka, the risk of ‘overblocking’.) In 2018 Human Rights Watch dubbed it a flawed law — critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.

The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.

A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.

The political party is concerned about the risk of the new reporting obligation being abused — resulting in data on citizens who have not in fact posted any criminal content ending up with the police.

It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.

The party had proposed that only the post’s content be transmitted directly to police, who would then have been able to request associated personal data from the platform should there be a genuine need to investigate a particular piece of content.

The German government’s reform of hate speech law follows the 2019 murder of a pro-refugee politician, Walter Lübcke, by neo-Nazis — which it said was preceded by targeted threats and hate speech online.

Earlier this month police staged raids on 40 hate speech suspects across a number of states who are accused of posting “criminally relevant comments” about Lübcke, per national media.

The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets — meaning they’re unable to freely express themselves or participate without fear in society.

At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to a voluntary EU Code of Conduct on hate speech.

It is also now consulting on wider changes to platform rules and governance — under a forthcoming Digital Services Act which will consider how much liability tech giants should face for the content they host.

EU digs in on digital tax plan, after US quits talks

The European Commission has reiterated its commitment to pushing ahead with a regional plan for taxing digital services after the US quit talks aimed at finding agreement on reforming tax rules — ramping up the prospects of a trade war.

Yesterday talks between the EU and the US on a digital services tax broke down after U.S. treasury secretary, Steven Mnuchin, walked out — saying they’d failed to make any progress, per Reuters.

The EU has been eyeing levying a tax of between 2% and 6% on the local revenues of platform giants.

Today the European Commission dug in in response to the US move, with commissioner Paolo Gentiloni reiterating the need for “one digital tax” to adapt to what he dubbed “the reality of the new century” — and calling for “understanding” in the global negotiation.

However he also repeated the Commission’s warning that it will push ahead alone if necessary — saying that if the US’ decision to quit talks makes achieving global consensus impossible, it will put “a new European proposal on the table”.

Following the breakdown of talks, France also warned it will go ahead with a digital tax on tech giants this year — reversing an earlier suspension that had been intended to grease the negotiations.

The New York Times reports French finance minister, Bruno Le Maire, describing the US walk-out as “a provocation”, and complaining about the country “systematically threatening” allies with sanctions.

The issue of ‘fair taxes’ for platforms has been slow burning in Europe for years, with politicians grilling tech execs in public over how little they contribute to national coffers and even urging the public to boycott services like Amazon (with little success).

Updating the tax system to account for digital giants is also front and center for Ursula von der Leyen’s Commission — which is responding to the widespread regional public anger over how little tech giants pay in relation to the local revenue they generate.

European Commission president von der Leyen, who took up her mandate at the back end of last year, has said “urgent” reform of the tax system is needed — warning at the start of 2020 that the European Union would be prepared to go it alone on “a fair digital tax” if no global accord was reached by the end of this year.

At the same time, a number of European countries have been pushing ahead with their own proposals to tax big tech — including the UK, which started levying a 2% digital services tax on local revenue in April; and France, which has set out a plan to tax tech giants 3% of their local revenues.

This gives the Commission another clear reason to act, given its raison d’être is to reduce fragmentation of the EU’s Single Market.

Although it faces internal challenges on achieving agreement across Member States, given some smaller economies have used low national corporate tax rates to attract inward investment, including from tech giants.

The US, meanwhile, has not been sitting on its hands as European governments move ahead to set their own platform taxes. The Trump administration has been throwing its weight around — arguing US companies are being unfairly targeted by the taxes and warning that it could retaliate with up to 100% tariffs on countries that go ahead. Though it has yet to do so.

On the digital tax reform issue the US has said it wants a multilateral agreement, via the OECD, on a global minimum. And a petite entente cordiale was reached between France and the US last summer when president Emmanuel Macron agreed the French tech tax would be scrapped once the OECD came up with a global fix.

However with Trump’s negotiators pulling out of international tax talks with the EU the prospect of a global understanding on a very divisive issue looks further away than ever.

Though the UK said today it remains committed to a global solution, per Reuters, which quotes a treasury spokesman.

Earlier this month the US also launched a formal investigation into new or proposed digital taxes in the EU, including the UK’s levy and the EU’s proposal, and plans set out by a number of other EU countries, claiming they “unfairly target” U.S. tech companies — lining up a pipeline of fresh attacks on reform plans.

Zoom U-turns on no e2e encryption for free users

In a major security U-turn, videoconferencing platform Zoom has said it will, after all, offer end-to-end encryption to all users — including those who do not pay to use its service.

The caveat is that free users must provide certain “additional” pieces of information for verification purposes (such as a phone number where they can respond to a verification link) before being allowed to use e2e encryption — which Zoom says is a necessary check so it can “prevent and fight abuse” on its platform. However, it’s a major step up from the prior offer of ‘no e2e unless you pay us’.

“We are grateful to those who have provided their input on our E2EE design, both technical and philosophical,” Zoom writes in a blog update today. “We encourage everyone to continue to share their views throughout this complex, ongoing process.”

The company faced a storm of criticism earlier this month after Bloomberg reported comments by CEO Eric Yuan, who said it did not intend to provide e2e encryption for non-paying users because it wanted to be able to work with law enforcement.

Security and privacy experts waded in to blast the stance. One notable critic of the position was cryptography expert Matthew Green — whose name you’ll find listed on Zoom’s e2e encryption design white paper.

“Once the precedent is set that E2E encryption is too ‘dangerous’ to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it’s going to be hard to put it back,” Green warned in a nuanced Twitter thread.

Since the e2e encryption storm, Zoom has faced another scandal — this time related to privacy and censorship, after it admitted shutting down a number of Chinese activists’ accounts at the request of the Chinese government. So the company may have stumbled upon another good reason for reversing its stance — given it’s a lot more difficult to censor content you can’t see.

Explaining the shift in its blog post, Zoom says only that it follows a period of engagement “with civil liberties organizations, our CISO council, child safety advocates, encryption experts, government representatives, our own users, and others”.

“We have also explored new technologies to enable us to offer E2EE to all tiers of users,” it adds.

Its blog briefly discusses how non-paying users will be able to gain access to e2e encryption, with Zoom writing: “Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message.”

“Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our Report a User function — we can continue to prevent and fight abuse,” it adds.
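Zoom hasn’t published implementation details for this gate, but a one-time SMS verification step of the kind it describes typically works along these lines. The sketch below is hypothetical — none of these functions are Zoom’s API, and `send_sms` stands in for a real SMS gateway:

```python
# Hypothetical sketch of a one-time SMS verification gate of the kind
# Zoom describes -- not Zoom's actual implementation or API.
import secrets

_pending: dict[str, str] = {}   # phone number -> outstanding code
_verified: set[str] = set()     # account ids that have passed the check

def send_sms(number: str, message: str) -> None:
    print(f"SMS to {number}: {message}")  # stand-in for a real SMS gateway

def start_verification(phone_number: str) -> None:
    code = f"{secrets.randbelow(10**6):06d}"  # unguessable six-digit code
    _pending[phone_number] = code
    send_sms(phone_number, f"Your verification code is {code}")

def confirm(account_id: str, phone_number: str, code: str) -> bool:
    # Constant-time comparison avoids leaking the code via timing
    ok = secrets.compare_digest(_pending.get(phone_number, ""), code)
    if ok:
        _verified.add(account_id)
        del _pending[phone_number]
    return ok

def may_use_e2ee(account_id: str, is_paid: bool) -> bool:
    # Paid tiers get E2EE directly; free tiers only after the one-time check
    return is_paid or account_id in _verified
```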

Certain countries require an ID check to purchase a SIM card, so Zoom’s verification provision may make it impossible for some users to access e2e encryption without leaving an identity trail that state agencies could unpick.

Per Zoom’s blog post, a beta of the e2e encryption implementation will kick off in July. The platform’s default encryption remains AES-256-GCM in the meantime.
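For context, AES-256-GCM is an authenticated symmetric cipher, easy to exercise with the widely used Python `cryptography` package. The snippet below is a generic illustration of the cipher itself, not Zoom’s code; the salient difference in a non-e2e deployment is that the service, not only the participants, can hold the key:

```python
# Generic AES-256-GCM example via the `cryptography` package --
# illustrating the cipher Zoom uses by default, not Zoom's own code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 32-byte key
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must never repeat for a given key

ciphertext = aesgcm.encrypt(nonce, b"meeting media frame", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # raises if tampered with
assert plaintext == b"meeting media frame"
```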

The forthcoming e2e encryption will not be switched on by default — but rather offered as an option. Zoom says this is because it limits some meeting functionality (“such as the ability to include traditional PSTN phone lines or SIP/H.323 hardware conference room systems”).

“Hosts will toggle E2EE on or off on a per-meeting basis,” it further notes, adding that account administrators will also have the ability to enable and disable E2EE at the account and group level.
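Taken together, that describes a simple layered policy: an account- or group-level switch gates whether hosts can use the per-meeting toggle at all. A hypothetical sketch of how such a hierarchy could resolve (the field names are illustrative, not Zoom’s):

```python
# Hypothetical resolution of a layered E2EE setting (account -> group ->
# per-meeting), mirroring the controls Zoom describes -- not its code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class E2EEPolicy:
    account_enabled: bool            # admin switch at account level
    group_enabled: Optional[bool]    # optional group-level override
    host_toggled_on: bool            # host's per-meeting choice

    def effective(self) -> bool:
        # A group setting, if present, overrides the account default;
        # the host's toggle only matters where E2EE is allowed at all.
        allowed = self.account_enabled if self.group_enabled is None else self.group_enabled
        return allowed and self.host_toggled_on

print(E2EEPolicy(True, None, True).effective())   # True
print(E2EEPolicy(True, False, True).effective())  # False: group disables it
print(E2EEPolicy(True, None, False).effective())  # False: host left it off
```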

Today the company also released a v2 update of its e2e encryption design — posting the spec to GitHub.
