Tech giants are ignoring questions over the legality of their EU-US data transfers

A survey of responses from more than 30 companies to questions about how they’re approaching EU-US data transfers in the wake of a landmark ruling (aka Schrems II) by Europe’s top court in July, which struck down the flagship Privacy Shield over US surveillance overreach, suggests most are doing the equivalent of burying their heads in the sand and hoping the legal nightmare goes away.

European privacy rights group, noyb, has done most of the groundwork here — rounding up in this 45-page report responses (some in English, others in German) from EU entities of 33 companies to a set of questions about personal data transfers.

It sums up the answers to questions about companies’ legal basis for transferring EU citizens’ data over the pond post-Schrems II as “astonishing” — or AWOL entirely, given some failed to send a response at all.

Tech companies polled on the issue run the alphabetic gamut from Apple to Zoom, while Airbnb, Netflix and WhatsApp are among those that noyb says failed to respond about their EU-US data transfers.

Responses provided by the companies that did reply appear to raise many more questions than they answer — with plenty of question-dodging ‘boilerplate responses’ in evidence and/or pointers to existing privacy policies in the hope that will make the questioner go away (hi Facebook!).

Facebook also made repeated claims that the requested information falls outside the scope of the EU’s data protection framework…

noyb also highlights a response by Slack which said it does not “voluntarily” provide governments with access to data — which, as the privacy rights group points out, “does not answer the question of whether they are compelled to do so under surveillance laws such as FISA702”.

A similar issue affects Microsoft. So while the tech giant did at least respond specifically to each question it was asked, saying it’s relying on Standard Contractual Clauses (SCCs) for EU-US data transfers, again it’s one of the companies subject to US surveillance law — or as noyb notes: “explicitly named by the documents disclosed by Edward Snowden and publicly numbering the FISA702 requests by the US government it received and answered”.

That, in turn, raises questions about how Microsoft can claim to (legally) use SCCs if users’ data cannot be adequately protected from US mass surveillance… 

The Court of Justice of the EU made it clear that use of SCCs to take data outside the EU is contingent on a case-by-case assessment of whether the data will in fact be safe. If it is not, the data controller is legally required to suspend the transfer. EU regulators also have a clear duty to act to suspend transfers where data is at risk.

“Overall, we were astonished by how many companies were unable to provide little more than a boilerplate answer. It seems that most of the industry still does not have a plan as to how to move forward,” noyb adds.

In August the group filed 101 complaints against websites it had identified as still sending data to the US via Google Analytics and/or Facebook Connect integrations — with, again, both tech giants clearly subject to US surveillance laws, such as FISA 702.

noyb founder Max Schrems — whose surname has become synonymous with questions over EU-US data transfers — also continues to push the Irish Data Protection Commission (DPC) to take enforcement action over Facebook’s use of SCCs in a case that dates back some seven years.

Earlier this month it emerged the DPC had written to Facebook — issuing a preliminary order to suspend transfers. However Facebook filed an appeal for a judicial review in the Irish courts and was granted a stay.

In an affidavit filed to the court the tech giant appeared to claim it could shut down its service in Europe if the suspension order is enforced. But last week Facebook’s global VP and former UK deputy PM, Nick Clegg, denied it could shut down in Europe over the issue. Though he warned of “profound effects” on scores of digital businesses if a way is not found by lawmakers on both sides of the pond to resolve the legal uncertainty around U.S. data transfers. (A Privacy Shield 2 has been mooted but the European Commission has warned there’s no quick fix, suggesting reform of US surveillance law will be required.)

For his part Schrems has suggested the solution for Facebook at least is to federate its service — splitting its infrastructure in two. But Thierry Breton, EU commissioner for the internal market, has also called for “European data…[to] be stored and processed in Europe” — arguing earlier this month this data “belong in Europe” and “there is nothing protectionist about this”, in a discussion that flowed from US president Trump’s concerns about TikTok.

Back in Ireland, Facebook has complained to the courts that regulatory action over its EU-US data transfers is being rushed (despite the complaint dating back to 2013); and also that it’s being unfairly singled out.

But now, with data transfer complaints filed by noyb against scores of companies sitting on the desk of every EU data supervisor, and regulators under explicit instruction from the CJEU that they have a duty to step in, a lot of pressure is being exerted to actually enforce the law and uphold Europeans’ data rights.

The European Data Protection Board’s guidance on Schrems II — which Facebook had also claimed to be waiting for — also specifies that the ability to (legally) use SCCs to transfer data to the U.S. hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data. So Facebook et al would do well to lobby the US government on reform of FISA. 

Uber wins latest London licence appeal

Uber has won its appeal against having its licence to operate withdrawn in London.

In today’s judgement the court decided it was satisfied with process improvements made by the ride-hailing company, including around its communication with the city’s transport regulator.

However it’s still not clear how long Uber will be granted a licence for — with the judge wanting to hear more evidence before taking a decision.

We’ve reached out to Uber and TfL for comment.

The ride-sharing giant has faced a multi-year battle to have its licence reinstated after TfL, the city’s transport regulator, took the shock decision not to issue a renewal in 2017 — citing safety concerns and deeming the company not “fit and proper” to hold a private hire operator licence.

It won a provisional appeal back in 2018 — when a UK court granted it a 15-month licence to give it time to continue working on meeting TfL’s requirements. However last November the regulator once again denied a full licence renewal — raising a range of new safety issues.

Despite that Uber has been able to continue operating in London throughout the legal process — but with ongoing uncertainty over the future of its licence. Now it will be hoping this is in the past.

In the appeal, Uber’s key argument was it is now “fit and proper” to hold a licence — claiming it’s listened to the regulator’s concerns and learnt from errors, making major changes to address issues related to passenger safety.

For example Uber pointed to improvements in its governance and document review systems, including a freeze on drivers who had not taken a trip for an extended period; real-time driver ID verification; new scrutiny teams and processes; and the launch of ‘Programme Zero’ — which aims to prevent all breaches of licence conditions.

It also argued system flaws were not widespread — claiming only 24 of the 45,000 drivers using the app had exploited its system to its knowledge.

It also argued it now cooperates effectively and proactively with TfL and police forces, denying it conceals any failures. Furthermore, it claimed denying its licence would have a “profound effect” on groups at risk of street harassment — such as women and ethnic minorities, as well as disabled people.

It’s certainly fair to say the Uber of 2020 has travelled some distance from the company whose toxic internal culture included developing proprietary software to try to thwart regulatory oversight and eventually led to a major change of guard of its senior management.

However it’s notable that the court has chosen to debate what length of licence Uber should receive. So while it’s a win for Uber, it comes with some watchful caveats.

Offering commentary on today’s ruling, Anna McCaffrey, a senior counsel for the law firm Taylor Wessing, highlighted this element of the judgement. “The Magistrates Court agreed that Uber had made improvements and addressed TfL safety concerns. However, the fact that the length of extension is up for debate, rather than securing Uber’s preferred five year licence, demonstrates that Uber will have to work hard to continue to prove to TfL and the Court that it has really changed. If not, Uber is likely to find itself back in Court facing the same battle next year,” she noted in a statement.

She also pointed out that a decision is still pending from the Supreme Court to “finally settle” the question as to whether Uber’s drivers are workers or self-employed — another long-running legal saga for Uber in the UK.

It is also now facing fresh legal challenges related to its algorithmic management of drivers. So there’s still plenty of work for its lawyers.

The App Drivers and Couriers Union (ADCU), meanwhile, offered a cautious welcome of the court’s decision to grant Uber’s licence renewal — given how many of its members are picking up jobs via the platform.

However the union also called for the mayor of London to break up what it dubbed Uber’s “monopoly” by imposing limits on the numbers of drivers who can register on its platform. In a statement, ADCU president, Yaseen Aslam, argued: “The reduced scale will give both Uber and Transport for London the breathing space necessary to ensure all compliance obligations — including worker rights — are met in future.”

HumanForest suspends London e-bike sharing service, cuts jobs after customer accident

UK-based startup HumanForest has suspended its nascent ‘free’ e-bike service in London this week, after experiencing “mechanical” issues and after a user had an accident on one of its bikes, TechCrunch has learned. The suspension has also seen the company make a number of layoffs with plans to re-launch next spring using a different e-bike.

The service suspension comes only a few months after HumanForest started the trial in North London — and just a couple of weeks after announcing a $2.3M seed round of funding backed by the founders of Cabify and others.

We were tipped off about the suspension by an anonymous source who said they were employed by the startup. They told us the company’s e-bike had been found to have a defect and there had been an accident involving a user, after which the service was suspended. They also told us HumanForest fired a number of staff this week with little warning and minimal severance.

Asked about the source’s allegations, HumanForest confirmed it had suspended its service in London following a “minor accident” on Sunday, saying also that it had identified “problems of a similar nature” prior to the accident but had put those down to “tampering or minor mechanical issues”.

Here’s its statement in full: “We were not aware that the bike was defective. There had been problems of a similar nature which were suspected to be tampering or minor mechanical issues. We undertook extra mechanical checks which we believed had resolved the issue and informed the supplier. We immediately suspended operations following the minor accident on Sunday. The supplier is now investigating whether there is a more serious problem with the e-bike.”

In an earlier statement the startup also told us: “There was an accident last week. Fortunately, the customer was not hurt. We immediately withdrew all e-bikes from the street and we have informed the supplier who is investigating. Our customers’ safety is our priority. We have, therefore, decided to re-launch with a new e-bike in Spring 2021.”

HumanForest declined to offer any details about the nature of the defect that caused it to suspend service but a spokeswoman confirmed all its e-bikes were withdrawn from London streets the same day as the accident, raising questions as to why it did not do so sooner — having, by its own admission, already identified “similar problems”.

The spokeswoman also confirmed HumanForest made a number of job cuts in the wake of the service suspension.

“We are very sorry that we had to let people go at this difficult time but, with operations suspended, we could only continue as a business with a significantly reduced team,” she said. “We tried very hard to find a way to keep people on board and we looked at the possibility of alternative contractual arrangements or employment but unfortunately, there are no guarantees of when we can re-launch.”

“Employees who had been with the company for less than three months were on their probation period which, as outlined in their contract, had one week’s notice. We will be paying their salaries until the end of the month,” she said, reiterating that it’s a difficult time for the startup.

The e-bikes HumanForest was using for the service appear to be manufactured by the Chinese firm Hongji — but are supplied by a German startup, called Wunder Mobility, which offers both b2c and b2b mobility services.

We contacted both companies to ask about the e-bike defect reported by HumanForest.

At the time of writing only Wunder Mobility had responded — confirming it acts as “an intermediary” for HumanForest but not offering any details about the nature of the technical problem.

Instead, it sent us this statement, attributed to its CCO Lukas Loers: “HumanForest stands for reliable quality and works continuously to improve its services. In order to offer its customers the best possible range of services in the sharing business, HumanForest will use the winter break to evaluate its findings from the pilot project in order to provide the best and most sustainable solution for its customers together with Wunder Mobility in the spring.”

“Unfortunately, we cannot provide any information about specific defects on the vehicles, as we have only acted as an intermediary. Only the manufacturer or the operator HumanForest can comment on this,” it added.

In a further development this week, which points to the competitive and highly dynamic nature of the nascent micromobility market, another e-bike sharing startup, Bolt — which industry sources suggest uses the same model of e-bike as HumanForest (its e-bike is visually identical, just painted a more lurid shade of green) — closed its e-bike sharing service in Paris, a few months after launching in the French capital.

When we contacted Bolt to ask whether it had withdrawn any e-bikes because of technical issues it flatly denied doing so — saying the Paris closure was a business decision, and was not related to problems with its e-bike hardware.

“We understand some other companies have had issues with their providers. Bolt hasn’t withdrawn any electric bikes from suppliers due to defects,” a spokesperson told us, going on to note it has “recently” launched in Barcelona and trailing “more announcements about future expansion soon”.

In follow up emails the spokesperson further confirmed it hasn’t identified any defects with any e-bikes it’s tested, nor withdrawn any bikes from its supplier.

Bolt’s UK country manager, Matt Barrie, had a little more to say in a response to chatter about the various micromobility market moves on Twitter — tweeting the claim that: “Hardware at Bolt is fine, all good, the issues that HumanForest have had are with their bespoke components.”

“The Paris-Prague move is a commercial decision to support our wider business in Prague. Paris a good market and we hope to be back soon,” he added.

We asked HumanForest about Barrie’s claim that the technical issues with its hardware are related to “bespoke components” — but its spokeswoman declined to comment.

HumanForest’s twist on the e-bike sharing model is the idea of offering free trips with in-app ads subsidizing the rides. Its marketing has also been geared towards pushing a ‘greener commute’ message — touting that the e-bike batteries and service vehicles are charged with certified renewable energy sources.

Cambridge Analytica’s former boss gets 7-year ban on being a business director

The former CEO of Cambridge Analytica — the disgraced data company that worked for the 2016 Trump campaign and shut down in 2018 over a voter manipulation scandal involving masses of Facebook data — has been banned from running limited companies for seven years.

Alexander Nix signed a disqualification undertaking earlier this month which the UK government said yesterday it had accepted. The ban commences on October 5.

“Within the undertaking, Alexander Nix did not dispute that he caused or permitted SCL Elections Ltd or associated companies to market themselves as offering potentially unethical services to prospective clients; demonstrating a lack of commercial probity,” the UK insolvency service wrote in a press release.

Nix was suspended as CEO of Cambridge Analytica at the peak of the Facebook data scandal after footage emerged of him, filmed by undercover reporters, boasting of spreading disinformation and entrapping politicians to meet clients’ needs.

Cambridge Analytica was a subsidiary of the SCL Group, which included the division SCL Elections, while Nix was one of the key people in the group — being a director for SCL Group Ltd, SCL Social Ltd, SCL Analytics Ltd, SCL Commercial Ltd, SCL Elections and Cambridge Analytica (UK) Ltd. All six companies entered into administration in May 2018, going into compulsory liquidation in April 2019.

The “potentially unethical” activities that Nix does not dispute the companies offered, per the undertaking, are:

  • bribery stings and honey trap stings designed to uncover corruption
  • voter disengagement campaigns
  • the obtaining of information to discredit political opponents
  • the anonymous spreading of information

Last year the FTC also settled with Nix over the data misuse scandal — with the former Cambridge Analytica boss agreeing to an administrative order restricting how he conducts business in the future. The order also required the deletion/destruction of any personal information collected via the business.

Back in 2018 Nix was also grilled by the UK parliament’s DCMS committee — and in a second hearing he claimed Cambridge Analytica had licensed “millions of data points on American individuals from very large reputable data aggregators and data vendors such as Acxiom, Experian, Infogroup”, arguing the Facebook data had not been its “foundational data-set”.

It’s fair to say there are still many unanswered questions attached to the data misuse scandal. Last month, for example, the UK’s data watchdog — which raided Cambridge Analytica’s UK offices in 2018, seizing evidence, before going on to fine and then settle with Facebook (which did not admit any liability) over the scandal — said it would no longer be publishing a final report on its data analytics investigation.

Asked about the fate of the final report on Cambridge Analytica, an ICO spokesperson told us: “As part of the conclusion to our data analytics investigation we will be writing to the DCMS select committee to answer the outstanding questions from April 2019. We have committed to updating the select committee on our final findings but this will not be in the form of a further report.”

It’s not clear whether the DCMS committee — which has since re-formed under a different chair from the one who in 2018 led the charge to dig into the Cambridge Analytica scandal as part of an enquiry into the impact of online disinformation — will publish the ICO’s written answers. Last year its final report called for Facebook’s business to be investigated over data protection and competition concerns.

You can read a TechCrunch interview with Nix here, from 2017 before the Facebook data scandal broke, in which he discusses how his company helped the Trump campaign.

Snyk acquires DeepCode to boost its code review smarts

Switzerland-based machine learning code review startup DeepCode — which bills itself as ‘Grammarly for coders’ — has been acquired by Snyk, a cybersecurity startup with a post-unicorn valuation that’s focused on helping developers secure their code.

Financial terms of the deal have not been disclosed. But the ‘big code’ parsing startup had only raised around $5.2M since being founded back in 2016, per CrunchBase — most recently closing a $4M seed round from investors including Earlybird, 3VC and btov Partners last year.

DeepCode CEO and co-founder Boris Paskalev confirmed the whole team is “eagerly” joining Snyk to continue what he couched as “the mission of making semantic AI-driven code analysis available for every developer on the planet”.

“DeepCode as a company will continue to exist (fully owned by Snyk), we will keep and plan to grow the Zurich office and tap into the amazing talent pool here and we will continue supporting and expanding the cutting-edge product offering for the global development community,” Paskalev told TechCrunch.

Asked whether DeepCode’s product will continue to exist as a standalone offering in the future, or whether full assimilation into Snyk’s platform will mean closing down the code-review bot it currently offers developers, he said no decision has yet been taken.

“We are still to evaluate that in details but the main goal is to maintain/expand the benefits that we offer to all developers and specifically to grow the open-source adoption and engagement,” he said, adding: “Initially clearly nothing will change and the DeepCode product will remain as a standalone product.”

“Both companies have a very clear vision and passion for developer-first and helping developers and security teams to further reduce risk and become more productive,” Paskalev added.

In a statement announcing the acquisition Snyk said it will be integrating DeepCode’s technology into its Cloud Native Application Security platform — going on to tout the benefits of bolting on its AI engine which it said would enable developers to “more quickly identify vulnerabilities”.

“DeepCode’s AI engine will help Snyk both increase speed and ensure a new level of accuracy in finding and fixing vulnerabilities, while constantly learning from the Snyk vulnerability database to become smarter,” wrote CEO Peter McKay. “It will enable an even faster integration for developers, testing for issues while they develop rather than as an additional step. And it will further increase the accuracy of our results, almost eliminating the need to waste time chasing down false positives.”

Among the features that have impressed Snyk about DeepCode, McKay lauded code scanning that’s “10-50x faster than alternatives”; and what he described as an “exceptional developer UX” — which allows for “high precision semantic code analysis in real-time” because scanning is carried out at the IDE and git level.

In its own blog post about the acquisition of the ETH Zurich spin-off, the university writes that the AI startup’s “decisive advantage” is that “it has developed the first AI system that can learn from billions of program codes quickly, enabling AI-based detection of security and reliability code issues”.

“DeepCode is an excellent example of a modern AI system that can learn from data, program codes in this case, yet remain transparent and interpretable for humans,” it adds.

The university research work underpinning DeepCode dates back to 2013 — when its co-founders were figuring out how to combine data-driven machine learning methods with semantic static code analysis methods based on symbolic reasoning, per the blog post.
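
As a very rough illustration of that general idea (not DeepCode’s actual pipeline), a toy analyzer might extract simple structural features from a program’s syntax tree and hand them to a learned classifier that scores the likelihood of a bug. The sketch below, in Python, only does the feature-counting half and prints the counts; every name in it is invented for illustration:

```python
import ast
from typing import Dict

def structural_features(source: str) -> Dict[str, int]:
    """Count a few syntactic patterns a learned model could use as features.
    (Real 'big code' systems learn far richer representations from millions of repos.)"""
    tree = ast.parse(source)
    features = {"bare_except": 0, "mutable_default": 0, "calls": 0}
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            features["bare_except"] += 1  # `except:` that swallows all errors
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    features["mutable_default"] += 1  # mutable default argument
        if isinstance(node, ast.Call):
            features["calls"] += 1
    return features

# A trained classifier would map such features (plus semantic context) to a
# probability that the snippet contains an issue; here we just inspect the counts.
if __name__ == "__main__":
    snippet = "def f(x=[]):\n    try:\n        x.append(1)\n    except:\n        pass\n"
    print(structural_features(snippet))  # {'bare_except': 1, 'mutable_default': 1, 'calls': 1}
```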

DeepCode’s tech currently reaches more than 4M contributing developers, with more than 100,000 repositories subscribed to its service.

UK launches COVID-19 exposure notification app for England and Wales

The last two regions of the UK now have an official coronavirus contacts tracing app, after the UK government pushed the button to launch the NHS COVID-19 app across England and Wales today.

Northern Ireland and Scotland launched their own official apps to automate coronavirus exposure notifications earlier this year. But the England and Wales app was delayed after a false start back in May. The key point is that the version that’s launched now has a completely different app architecture.

All three of the UK’s official coronavirus contacts tracing apps make use of smartphones’ Bluetooth radios to generate alerts of potential exposure to COVID-19 — based on estimating the proximity of the devices.

A very condensed version of how this works is that ephemeral IDs are exchanged by devices that come into close contact and stored locally on app users’ phones. If a person is subsequently diagnosed with COVID-19 they are able to notify the system, via their public health authority, which will broadcast the relevant (i.e. ‘risky’) IDs to all other devices.

Matching to see whether an app user has been exposed to any of the risky IDs also takes place locally — meaning exposure alerts are not centralized.
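
To make that flow concrete, here’s a minimal, illustrative sketch of the on-device matching step in Python. Every name in it is hypothetical; in the real Apple/Google Exposure Notification framework the rolling IDs are derived from daily keys and the matching happens inside the operating system, not in app code:

```python
import secrets
from typing import List, Set

def new_ephemeral_id() -> bytes:
    """Generate a random, short-lived identifier to broadcast over Bluetooth.
    (The real system derives rolling IDs from daily keys; this is a simplification.)"""
    return secrets.token_bytes(16)

class ExposureStore:
    """Keeps everything on the device: the ephemeral IDs heard from nearby phones."""

    def __init__(self) -> None:
        self.observed_ids: Set[bytes] = set()

    def record_contact(self, remote_id: bytes) -> None:
        """Called whenever another device's ephemeral ID is received over Bluetooth."""
        self.observed_ids.add(remote_id)

    def check_exposure(self, risky_ids: List[bytes]) -> bool:
        """Compare the health authority's published 'risky' IDs against local contacts.
        The match happens on the phone, so the server never learns who met whom."""
        return any(rid in self.observed_ids for rid in risky_ids)
```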

The use of this decentralized, privacy-preserving architecture for the NHS COVID-19 app is a major shift vs the original app which was being designed to centralize data with the public health authority.

However the government U-turned after a backlash over privacy and ongoing technical problems linked to trying to hack its way around iOS limits on background access to Bluetooth.

Switching the NHS COVID-19 app to a decentralized architecture has allowed it to plug into coronavirus exposure notification APIs developed by Apple and Google — resolving technical problems related to device detection which caused problems for the earlier version of the app.

In June, the government suggested there were issues with the APIs related to the reliability of estimating distance between devices. Asked on BBC Radio 4’s Today program this morning about the reliability of the Bluetooth technology the app relies on, health secretary Matt Hancock said: “What we know for absolute sure is that the app will not tell you to self isolate because you’ve been in close contact with someone unless you have been in close contact. The accuracy with which it does that is increasing all of the time — and we’ve been working very closely with Apple and with Google who’ve done a great job in working to make this happen and to ensure that accuracy is constantly improved.”

The health secretary described the app as “an important tool in addition to all the other tools that we have” — adding that one of the reasons he’d delayed the launch until now was because he didn’t want to release an app that wasn’t effective.

“Everybody who downloads the app will be helping to protect themselves, helping to protect their loved ones, helping to protect their community — because the more people who download it the more effective it will be. And it will help to keep us safe,” Hancock went on.

“One of the things that we’ve learnt over the course of the pandemic is where people are likely to have close contacts and in fact the app that we’re launching today will help to find more of those close contacts,” he added.

The England and Wales app does have some unique quirks — as the government has opted to pack in multiple features, rather than limiting it to exposure notifications alone.

These bells & whistles include: risk alerts based on postcode district; a system of QR code check-in at venues (which are now required by law to display a QR code for app users to scan); a COVID-19 symptom checker and test booking feature — including the ability to get results through the app; and a timer for users who have been told to self-isolate to help them keep count of the number of days left before they can come out of quarantine, with pointers offered to “relevant advice”.

“[The app] helps you to easily go to the pub or a restaurant or hospitality venue because you can then click through on the QR code which automatically does the contact tracing that is now mandatory,” said Hancock explaining the thinking behind some of the extra features. “And it helps by explaining what the rules are and the risk in your area for people easily and straightforwardly to be able to answer questions and consult on the rules so it has a whole series of features.”
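
Conceptually, the venue check-in feature amounts to each venue’s poster encoding an identifier that the app logs locally and can later match against any venues flagged by the health authority. The sketch below is purely illustrative, with a made-up plain-JSON payload and invented names; the NHS app’s actual QR poster format and matching logic aren’t documented here:

```python
import json
from datetime import datetime, timezone
from typing import Dict, List

def decode_venue_qr(qr_payload: str) -> Dict[str, str]:
    """Parse a hypothetical venue QR payload containing an ID and display name.
    (The real NHS posters use a signed payload; this illustrative version is plain JSON.)"""
    return json.loads(qr_payload)

class CheckInLog:
    """Stores venue check-ins on the device so they can later be matched
    against venues flagged by the health authority."""

    def __init__(self) -> None:
        self.entries: List[Dict[str, str]] = []

    def check_in(self, qr_payload: str) -> None:
        venue = decode_venue_qr(qr_payload)
        self.entries.append({
            "venue_id": venue["id"],
            "venue_name": venue["name"],
            "checked_in_at": datetime.now(timezone.utc).isoformat(),
        })

    def match_alerts(self, flagged_venue_ids: List[str]) -> List[Dict[str, str]]:
        """Return any local check-ins at venues later flagged as outbreak sites."""
        return [e for e in self.entries if e["venue_id"] in flagged_venue_ids]
```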

It remains to be seen whether it was sensible product design to bolt on all these extras — and QR code venue check-ins could carry a risk of confusing users. However the government’s logic appears to be that more features will encourage more people to download the app and thereby increase uptake and utility.

Once widespread, the mandatory venue QR codes will also effectively double as free ads for the app so that could help drive downloads.

More saliently, the Bluetooth exposure notification system depends on an effective testing regime and will therefore be useless in limiting the spread of COVID-19 if the government can’t improve coronavirus test turnaround times — which it has been struggling with in recent weeks, as major backlogs have built up.

Internet law expert, professor Lilian Edwards — who was on an ethics advisory panel for the earlier, now defunct version of the England & Wales app — made this point to BBC Radio 4’s World at One program yesterday.

“My main concern is not the app itself but the interaction with the testing schedule,” she said. “The app only sends out proximity warnings to the contacts on upload of a positive test. The whole idea is to catch contacts before they develop symptoms in that seven-day window when they won’t be isolating. If tests are taking five to seven days to get back then by that time the contacts will have developed symptoms and should hopefully be isolating or reporting their symptoms themselves. So if we don’t speed up testing then the app is functionally useless.”

EU’s antitrust probe of Google-Fitbit gets more time

European antitrust regulators now have until almost the end of the year to take a decision on whether to green light Google’s planned acquisition of Fitbit.

The tech giant announced its intention to buy the fitness tracking wearable maker in November 2019, saying it would shell out $2.1 billion in cash to make off with Fitbit and the health data it holds on some 28M+ users.

EU regulators were quick to sound the alarm about letting the tech giant go shopping for such a major cache of sensitive personal data, with the European Data Protection Board warning in February that the proposed purchase poses a huge risk to privacy.

There is also a parallel concern that Fitbit’s fitness data could further consolidate Google’s regional dominance in the ad market. And last month EU competition regulators announced a full antitrust probe — saying then they would take a decision within 90 working days. That deadline has now been extended by a further two weeks.

A Commission spokeswoman confirmed the earlier provisional December 9 deadline has been pushed on “in agreement with the parties” — citing Article 10(3) of the EU’s Merger Regulation.

“The provisional legal deadline for a final decision in this case is now December 23, 2020,” she added.

The Commission has not offered any detail on the reason for allocating more time to take a decision.

When EU regulators announced the in-depth probe, the Commission said it was concerned data gathered by Fitbit could lead to a distortion of competition if Google was allowed to assimilate the wearable maker and “further entrench” its dominance in online ad markets.

Other concerns include the impact on the nascent digital healthcare sector, and whether Google might be incentivised to degrade the interoperability of rival wearables with its Android OS once it has its own hardware skin in the game.

The tech giant, meanwhile, has offered assurances around the deal in an attempt to get it cleared — claiming ahead of the Commission’s probe announcement it would not use Fitbit health data for ad targeting, and suggesting that it would create a ‘data silo’ for Fitbit data to keep it separate from other data holdings.

However regulators have expressed scepticism — with the Commission writing last month that the “data silo commitment proposed by Google is insufficient to clearly dismiss the serious doubts identified at this stage as to the effects of the transaction”.

It remains to be seen what the bloc’s competition regulators conclude after taking a longer and harder look at the deal — and it’s worth noting they are simultaneously consulting on whether to give themselves new powers to be able to intervene faster to regulate digital markets — but Google’s hopes of friction-free regulatory clearance and being able to hit the ground running in 2020 with Fitbit’s data in its pocket have certainly not come to pass. 

Facebook denies it will pull service in Europe over data transfer ban

Facebook’s head of global policy has denied the tech giant could close its service to Europeans if local regulators order it to suspend data transfers to the US following a landmark Court of Justice ruling in July that has cemented the schism between US surveillance laws and EU privacy rights.

Press reports emerged this week of a Dublin court filing by Facebook, which is seeking a stay to a preliminary suspension order on its EU-US data transfers, that suggested the tech giant could pull out of the region if regulators enforce a ban against its use of a data transfer mechanism known as Standard Contractual Clauses.

The court filing is attached to Facebook’s application for a judicial review of a preliminary suspension order from Ireland’s Data Protection Commission earlier this month, as Facebook’s lead EU data supervisor responded to the implications of the CJEU ruling.

“We of course won’t [shut down in Europe] — and the reason we won’t of course is precisely because we want to continue to serve customers and small and medium sized businesses in Europe,” said Facebook VP Nick Clegg during a livestreamed EU policy debate yesterday.

However he also warned of “profound effects” on scores of digital businesses if a way is not found by lawmakers on both sides of the pond to resolve the legal uncertainty around US data transfers — making a pitch to politicians to come up with a new legal ‘sticking plaster’ for EU-US data transfers now that a flagship arrangement, called Privacy Shield, is dead.

“We have a major issue — which is that for various complex, legal, political and other reasons question marks are being raised about the current legal basis under which data transfers occur. If those legal means of data transfer are removed — not by us, but by regulators — then of course that will have a profound effect on how, not just our services, but countless other companies operate. We’re trying to avoid that.”

The Facebook VP was speaking during an EBS panel debate on rebooting the regional economy “towards a green, digital and resilient union” — which included the EU’s commissioner for the economy, Paolo Gentiloni, and others.

Discussing the Dublin legal filing, Clegg suggested that an overenthusiastic reporter “slightly overwrote” in their interpretation of the document. “We’ve taken legal action in the Dublin courts to — in a sense — to try to send a signal that this is a really big issue for the whole European economy, for all small and large companies that rely on data transfers,” he said.

Clegg went on to claim that while Facebook being forced to suspend data transfers from the EU to the US “would of course be very bad for Facebook” the impact of such an order “would be absolutely disastrous for the economy as a whole”.

“What is at stake here is quite a big issue that in the end can only be resolved politically between a continued negotiation between the US and the EU that clearly is not going to happen until there’s a new US administration in place after the transition period in the early part of next year,” he said, indicating Facebook is using Ireland’s courts to try to buy time for a political fix.

“We need the time and the space for the political process between the EU and the US to work out so that companies can have confidence going forward that they’re able to transfer data going forward,” he added.

Clegg also sought to present Facebook’s platform as a vital component of any regional economy recovery — talking up its utility to European SMEs for reaching customers.

Some 25M European companies use its apps and tools, he said — stressing that the “vast majority” do so for free, and further claiming activity on Facebook’s ad platform could be linked to sales of 208BN, and 3M+ jobs, per independent estimates.

“In terms of the economic recovery, our most important role is to continue to provide that extraordinary capacity for small businesses to do something which in the past only big businesses could do,” he said. “In the past only big businesses had the fancy marketing budgets and could take out billboards and television and radio ads. The transformational effect of social media and Facebook in part economically speaking is that it’s levelled the playing field.”

Clegg went further on this point — linking the mass exploitation of Internet users’ personal data to the economic value generated by regional businesses via what he badged “personalized advertising” — aka “Facebook’s business model”.

“The personalized advertising model allows us to do that — allows us to level the playing field,” he claimed.

The tech giant’s processing of Europeans’ personal data remains under investigation on multiple fronts by EU regulators — meaning that as well as the clear threat to its US transfers Facebook’s core business model risks being unpicked by regulatory action if it faces enforcement action over data protection violations in future.

“I’m acutely aware that it is a business model that has plenty of criticism aimed at it and there’s a totally legitimate debate which rages in Brussels and elsewhere about how Facebook gathers, stores and monetizes data — and that is a totally legitimate and ongoing debate — but I hope people will not overlook that that business model has one ingenious benefit, amongst others, which is that it allows small businesses to operate on the same basis as big businesses in reaching their customers,” he said.

Never one to waste a lobbying opportunity, Clegg argued the pandemic has made this capacity “even more important” with EU populations under lockdown and fewer opportunities for businesses to engage in face to face selling.

Taxing times

The knotty issue of digital tax reform also came up during the debate.

Gentiloni reiterated the Commission position that it wants to see global agreement on reforming tax rules to take account of the shift to online business but he said the bloc is willing to go ahead with a European digital tax if that effort fails.

“We can’t remain with the model of the previous century,” he said, before going on to flesh out the challenges facing global accord on the issue. “We don’t want to be the one breaking this OECD process. To be honest, there was a lot of progress in this thing that we call ‘inclusive framework’ — more than dozens of countries working together and reaching something like an agreement on a new form of digital tax but then one single country — but a very important one — is not agreeing with this solution, is proposing a different one. But this different solution, the so called ‘Safe Harbor’, appears a little bit like an optional solution and it’s a bit difficult to conceive of an optional solution because of course you don’t pay ‘optional taxes’, I don’t think so. But we are still committed towards the end of this year to try to find this solution.

“My absolute preferred solution would be a global one. For many reasons — for avoiding tensions among different countries, and for facilitating for business the payment of taxes — but I want to say very clearly that we have a second best solution which is a European digital taxation because the alternative to this would be to have, as we already have in legislation, a French one, an Italian one, a Spanish one and I don’t think this is a good solution for Facebook or other companies. So we’re working for global but if global is not possible we will go European.”

Facebook’s Clegg said the company “will pay the taxes that are due under the rules that operate”, adding that if there is a European digital tax it will “of course” abide by it. But he too said Facebook’s preference is for a global arrangement.

Big tech has 2 elephants in the room: Privacy and competition

The question of how policymakers should respond to the power of big tech didn’t get a great deal of airtime at TechCrunch Disrupt last week, despite a number of investigations now underway in the United States (hi, Google).

It’s also clear that attention- and data-monopolizing platforms compel many startups to use their comparatively slender resources to find ways to compete with the giants — or hope to be acquired by them.

But there’s a clear nervousness among even well-established tech firms about discussing this topic, given how much their profits rely on frictionless access to the users of some of the gatekeepers in question.

Dropbox founder and CEO Drew Houston evinced this dilemma when TechCrunch Editor-in-Chief Matthew Panzarino asked him if Apple’s control of the iOS App Store should be “reexamined” by regulators or whether it’s just legit competition.

“I think it’s an important conversation on a bunch of dimensions,” said Houston, before offering a circular and scrupulously balanced reply in which he mentioned the “ton of opportunity” app stores have unlocked for third-party developers, checking off some of Apple’s preferred talking points like “being able to trust your device” and the distribution the App Store affords startups.

“They also are a huge competitive advantage,” Houston added. “And so I think the question of … how do we make sure that there’s still a level playing field and so that owning an app store isn’t too much of an advantage? I don’t know where it’s all going to end up. I do think it’s an important conversation to be had.”

Rep. Zoe Lofgren (D-CA) said the question of whether large tech companies are too powerful needs to be reframed.

“Big per se is not bad,” she told TC’s Zack Whittaker. “We need to focus on whether competitors and consumers are being harmed. And, if that’s the case, what are the remedies?”

In recent years, U.S. lawmakers have advanced their understanding of digital business models — making great strides since Facebook’s Mark Zuckerberg answered a question two years ago about how his platform makes money: “Senator, we run ads.”

A House antitrust subcommittee hearing in July 2020 saw the CEOs of Google, Facebook, Amazon and Apple answer awkward questions, and achieved a greater level of detail than the big tech hearings of 2018.

Nonetheless, there still seems to be a lack of consensus among lawmakers over how exactly to grapple with big tech, even though the issue elicits bipartisan support, as was in plain view during a Senate Judiciary Committee interrogation of Google’s ad business earlier this month.

On stage, Lofgren demonstrated some of this tension by discouraging what she called “bulky” and “lengthy” antitrust investigations, making a general statement in favor of “innovation” and suggesting a harder push for overarching privacy legislation. She also advocated at length for inalienable privacy rights for U.S. citizens, so that platforms can’t circumvent the rules with their own big data holdings and dark pattern design.

Amnesty calls for human rights controls on EU digital surveillance exports

In a new report, Amnesty International says it’s found evidence of EU companies selling digital surveillance technologies to China — despite the stark human rights risks of technologies like facial recognition ending up in the hands of an authoritarian regime that’s been rounding up ethnic Uyghurs and holding them in “re-education” camps.

The human rights charity has called for the bloc to update its export framework, given that the export of most digital surveillance technologies is currently unregulated — urging EU lawmakers to bake in a requirement to consider human rights risks as a matter of urgency.

“The current EU exports regulation (i.e. Dual Use Regulation) fails to address the rapidly changing surveillance dynamics and fails to mitigate emerging risks that are posed by new forms of digital surveillance technologies [such as facial recognition tech],” it writes. “These technologies can be exported freely to every buyer around the globe, including Chinese public security bureaus. The export regulation framework also does not obligate the exporting companies to conduct human rights due diligence, which is unacceptable considering the human rights risk associated with digital surveillance technologies.”

“The EU exports regulation framework needs fixing, and it needs it fast,” it adds, saying there’s a window of opportunity as the European legislature is in the process of amending the exports regulation framework.

Amnesty’s report contains a number of recommendations for updating the framework so it’s able to respond to fast-paced developments in surveillance tech — including saying the scope of the Recast Dual Use Regulation should be “technology-neutral”, and suggesting obligations are placed on exporting companies to carry out human rights due diligence, regardless of size, location or structure.

We’ve reached out to the European Commission for a response to Amnesty’s call for updates to the EU export framework.

The report identifies three EU-based companies — biometrics authentication solutions provider Morpho (now Idemia) from France; networked camera maker Axis Communications from Sweden; and human (and animal) behavioral research software provider Noldus Information Technology from the Netherlands — as having exported digital surveillance tools to China.

“These technologies included facial and emotion recognition software, and are now used by Chinese public security bureaus, criminal law enforcement agencies, and/or government-related research institutes, including in the region of Xinjiang,” it writes, referring to a region of north-west China that’s home to many ethnic minorities, including the persecuted Uyghurs.

“None of the companies fulfilled their human rights due diligence responsibilities for these transactions, as prescribed by international human rights law,” it adds. “The exports pose significant risks to human rights.”

Amnesty suggests the risks posed by some of the technologies that have already been exported from the EU include interference with the right to privacy — such as via eliminating the possibility for individuals to remain anonymous in public spaces — as well as interference with non-discrimination, freedom of opinion and expression, and potential impacts on the rights to assembly and association too.

We contacted the three EU companies named in the report for a response.

At the time of writing only Axis Communications had replied — pointing us to a public statement, where it writes that its network video solutions are “used all over the world to help increase security and safety”, adding that it “always” respects human rights and opposes discrimination and repression “in any form”.

“In relation to the ethics of how our solutions are used by our customers, customers are systematically screened to highlight any legal restrictions or inclusion on lists of national and international sanctions,” it also claims, although the statement makes no reference to why this process did not prevent it from selling its technology to China.

On the domestic front, European lawmakers are in the process of fashioning regional rules for the use of ‘high risk’ applications of AI across the bloc — with a draft proposal due next year, per a recent speech by the Commission president.

Thus far the EU’s executive has steered away from an earlier suggestion that it could seek a temporary ban on the use of facial recognition tech in public places. It also appears to favor lighter-touch regulation which defines only a sub-set of ‘high risk’ applications, rather than imposing any blanket bans. Additionally, regional lawmakers have sought a ‘broad’ debate on circumstances where remote use of biometric identification could be justified, suggesting nothing is yet off the table.