Google misled consumers over location data settings, Australia court finds

Google’s historical collection of location data has landed it in hot water in Australia, where a case brought by the country’s Competition and Consumer Commission (ACCC) has led to a federal court ruling that the tech giant misled consumers by operating a confusing dual layer of location settings, in what the regulator describes as a “world-first enforcement action”.

The case relates to personal location data collected by Google through Android mobile devices between January 2017 and December 2018.

Per the ACCC, the court ruled that “when consumers created a new Google Account during the initial set-up process of their Android device, Google misrepresented that the ‘Location History’ setting was the only Google Account setting that affected whether Google collected, kept or used personally identifiable data about their location”.

“In fact, another Google Account setting titled ‘Web & App Activity’ also enabled Google to collect, store and use personally identifiable location data when it was turned on, and that setting was turned on by default,” it wrote.

The Court also ruled that Google misled consumers when they later accessed the ‘Location History’ setting on their Android device during the same time period to turn that setting off because it did not inform them that by leaving the ‘Web & App Activity’ setting switched on, Google would continue to collect, store and use their personally identifiable location data.

“Similarly, between 9 March 2017 and 29 November 2018, when consumers later accessed the ‘Web & App Activity’ setting on their Android device, they were misled because Google did not inform them that the setting was relevant to the collection of personal location data,” the ACCC added.

Similar complaints about Google’s location data processing being deceptive — and allegations that it uses manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes — have been raised by consumer agencies in Europe for years. And in February 2020 the company’s lead data regulator in the region finally opened an investigation. However, that probe remains ongoing.

The ACCC, meanwhile, said today that it will be seeking “declarations, pecuniary penalties, publications orders, and compliance orders” following the federal court ruling, though it added that the specifics of its enforcement action will be determined “at a later date”. So it’s not clear exactly when Google will be hit with an order, nor how large a fine it might face.

The tech giant may also seek to appeal the court ruling.

Google said today it’s reviewing its legal options and considering a “possible appeal” — highlighting the fact the Court did not agree wholesale with the ACCC’s case because it dismissed some of the allegations (related to certain statements Google made about the methods by which consumers could prevent it from collecting and using their location data, and the purposes for which personal location data was being used by Google).

Here’s Google’s statement in full:

“The court rejected many of the ACCC’s broad claims. We disagree with the remaining findings and are currently reviewing our options, including a possible appeal. We provide robust controls for location data and are always looking to do more — for example we recently introduced auto delete options for Location History, making it even easier to control your data.”

Mountain View denies doing anything wrong in how it configures location settings — while simultaneously claiming it’s always looking to improve the controls it offers its users — but Google’s settings and defaults have nonetheless landed it in trouble with regulators before.

Back in 2019 France’s data watchdog, the CNIL, fined it $57M over a number of transparency and consent failures under the EU’s General Data Protection Regulation. That remains the largest GDPR penalty issued to a tech giant since the regulation came into force a little under three years ago — although France has more recently sanctioned Google $120M under different EU laws for dropping tracking cookies without consent.

Australia, meanwhile, has forged ahead with passing legislation this year that directly targets the market power of Google (and Facebook) — passing a mandatory news media bargaining code in February which aims to address the power imbalance between platform giants and publishers around the reuse of journalism content.

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
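
For those who would rather script the check than use the website, haveibeenpwned also offers an API. Here is a minimal sketch in Java, assuming HIBP’s v3 `breachedaccount` endpoint (which requires an API key for per-account lookups); the environment variable name and user-agent string are our own placeholders, and we haven’t verified whether phone-number lookups for this dataset behave the same as email lookups.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BreachCheck {
    public static void main(String[] args) throws Exception {
        // Email address to check (phone lookups in international format may
        // also work for the Facebook dataset, but that's an assumption).
        String account = "user@example.com";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://haveibeenpwned.com/api/v3/breachedaccount/" + account))
                .header("hibp-api-key", System.getenv("HIBP_API_KEY")) // key required for account lookups
                .header("user-agent", "breach-check-example")          // HIBP asks clients to identify themselves
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // 200 => JSON array of breaches the account appears in; 404 => not found in any breach.
        if (response.statusCode() == 404) {
            System.out.println("No breach found for " + account);
        } else {
            System.out.println("Breaches: " + response.body());
        }
    }
}
```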

Information leaked via the breach includes Facebook ID, location, mobile phone number, email address, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to well under 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered, and claims to have fixed by September 2019, before data on 533M+ accounts leaked, suggests it should face a higher sanction from the DPC than Twitter received.

However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in across Europe and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.

In the case of DRI, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE that it believes compensation claims which force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose — claiming it’s ‘old data’ — a deflection that ignores the fact that dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook data leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

Clarence Thomas plays a poor devil’s advocate in floating First Amendment limits for tech companies

Supreme Court Justice Clarence Thomas flaunted a dangerous ignorance regarding matters digital in an opinion published today. In attempting to explain the legal difficulties of social media platforms, particularly those arising from Twitter’s ban of Trump, he makes an ill-informed, bordering on bizarre, argument as to why such companies may need their First Amendment rights curtailed.

There are several points on which Thomas seems to willfully misconstrue or misunderstand the issues.

The first is in his characterization of Trump’s use of Twitter. You may remember that several people sued after being blocked by Trump, alleging that his use of the platform amounted to creating a “public forum” in a legal sense, meaning it was unlawful to exclude anyone from it for political reasons. (The case, as it happens, was rendered moot after its appeal and dismissed by the court, except as Thomas’s temporary soapbox.)

“But Mr. Trump, it turned out, had only limited control of the account; Twitter has permanently removed the account from the platform,” writes Thomas. “[I]t seems rather odd to say something is a government forum when a private company has unrestricted authority to do away with it.”

Does it? Does it seem odd? Because a few paragraphs later, he uses the example of a government agency using a conference room in a hotel to hold a public hearing. They can’t kick people out for voicing their political opinions, certainly, because the room is a de facto public forum. But if someone is loud and disruptive, they can ask hotel security to remove that person, because the room is de jure a privately owned space.

Yet the obvious third example, and the one clearly most relevant to the situation at hand, is skipped. What if it is the government representatives who are being loud and disruptive, to the point where the hotel must make the choice whether to remove them?

It says something that this scenario, so remarkably close a metaphor for what actually happened, is not considered. Perhaps it casts the ostensibly “odd” situation and actors in too clear a light, for Thomas’s other arguments suggest he is not for clarity here but for muddying the waters ahead of a partisan knife fight over free speech.

In his best “I’m not saying, I’m just saying” tone, Thomas presents his reasoning why, if the problem is that these platforms have too much power over free speech, then historically there just happen to be some legal options to limit that power.

Thomas argues first, and worst, that platforms like Facebook and Google may amount to “common carriers,” a term that goes back centuries to actual carriers of cargo, but which is now a common legal concept that refers to services that act as simple distribution – “bound to serve all customers alike, without discrimination.” A telephone company is the most common example, in that it cannot and does not choose what connections it makes, nor what conversations happen over those connections – it moves electric signals from one phone to another.

But as he notes at the outset of his commentary, “applying old doctrines to new digital platforms is rarely straightforward.” And Thomas’s method of doing so is spurious.

“Though digital instead of physical, they are at bottom communications networks, and they ‘carry’ information from one user to another,” he says, and equates telephone companies laying cable with companies like Google laying “information infrastructure that can be controlled in much the same way.”

Now, this is certainly wrong. So wrong in so many ways that it’s hard to know where to start and when to stop.

The idea that companies like Facebook and Google are equivalent to telephone lines is such a reach that it seems almost like a joke. These are companies that have built entire business empires by adding enormous amounts of storage, processing, analysis, and other services on top of the element of pure communication. One might as easily suggest that, because computers are just simple pieces of hardware that move data around, Apple is a common carrier as well. It’s really not so far a logical leap!

There’s no real need to get into the technical and legal reasons why this opinion is wrong, however, because these grounds have been covered so extensively over the years, particularly by the FCC — which the Supreme Court has deferred to as an expert agency on this matter. If Facebook were a common carrier (or telecommunications service), it would fall under the FCC’s jurisdiction — but it doesn’t, because it isn’t, and really, no one thinks it is. This has been supported over and over, by multiple FCCs and administrations, and the deferral is itself a Supreme Court precedent that has become doctrine.

In fact, and this is really the cherry on top, freshman Justice Kavanaugh in a truly stupefying legal opinion a few years ago argued so far in the other direction that it became wrong in a totally different way! It was Kavanaugh’s considered opinion that the bar for qualifying as a common carrier was actually so high that even broadband providers don’t qualify for it (this was all in service of taking down net neutrality, a saga we are in danger of resuming soon). As his erudite colleague Judge Srinivasan explained to him at the time, this approach too is embarrassingly wrong.

Looking at these two opinions, of two sitting conservative Supreme Court Justices, you may find the arguments strangely at odds, yet they are wrong after a common fashion.

Kavanaugh claims that broadband providers, the plainest form of digital common carrier conceivable, are in fact providing all kinds of sophisticated services over and above their functionality as a pipe (they aren’t). Thomas claims that companies actually providing all kinds of sophisticated services are nothing more than pipes.

Simply stated, these men have no regard for the facts but have chosen the definition that best suits their political purposes: for Kavanaugh, thwarting a Democrat-led push for strong net neutrality rules; for Thomas, asserting control over social media companies perceived as having an anti-conservative bias.

The case Thomas uses for his sounding board on these topics was rightly rendered moot — Trump is no longer president and the account no longer exists — but he makes it clear that he regrets this extremely.

“As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms,” he concludes. “The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them.”

Between the common carrier argument and questioning the form of Section 230 (of which more in this article), Thomas’s hypotheticals break the seals on several legal avenues to restrict First Amendment rights of digital platforms, as well as legitimizing those (largely on one side of the political spectrum) who claim a grievance along these lines. (Slate legal commentator Mark Joseph Stern, who spotted the opinion early, goes further, calling Thomas’s argument a “paranoid Marxist delusion” and providing some other interesting context.)

This is not to say that social media and tech do not deserve scrutiny on any number of fronts — they exist in an alarming global vacuum of regulatory powers, and hardly anyone would suggest they have been entirely responsible with this freedom. But the arguments of Thomas and Kavanaugh stink of cynical partisan sophistry. This endorsement by Thomas accomplishes nothing legally, but will provide valuable fuel for the bitter fires of contention — though they hardly needed it.

The Supreme Court sided with Google in its epic copyright fight against Oracle

The highest court in the land has a lot to say about tech this week. The Supreme Court weighed in on Google’s long legal battle with Oracle on Monday, overturning a prior victory for the latter company that could have resulted in an $8 billion award.

In a 6-2 decision, the court ruled that Google didn’t break copyright laws when it incorporated pieces of Oracle’s Java software language into its own mobile operating system. Google copied Oracle’s code for Java APIs for Android, and the case kicked off a yearslong debate over the reuse of established APIs and copyright.

In 2018, a federal appeals court ruled that Google did in fact violate copyright law by using the APIs and that its implementation didn’t fall under fair use.

“In reviewing that decision, we assume, for argument’s sake, that the material was copyrightable. But we hold that the copying here at issue nonetheless constituted a fair use. Hence, Google’s copying did not violate the copyright law,” Justice Stephen Breyer wrote in the decision, which reverses Oracle’s previous win. Justices Samuel Alito and Clarence Thomas dissented.

“Google’s copying of the Java SE API, which included only those lines of code that were needed to allow programmers to put their accrued talents to work in a new and transformative program, was a fair use of that material as a matter of law,” Breyer wrote.
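
To make concrete what was copied: the roughly 11,500 lines at issue were “declaring code”, the method headers that tell programmers how to call into the Java SE API, while Google wrote its own implementing code for Android. A simple illustration of the pattern (using java.lang.Math.max as our example, not a line cited in the opinion):

```java
public final class Math {
    // Declaring code: the signature programmers already know from Java SE.
    // Copying headers like this, across 37 API packages, is what the case was about.
    public static int max(int a, int b) {
        // Implementing code: Google supplied its own implementations on Android
        // rather than copying Oracle's.
        return (a >= b) ? a : b;
    }
}
```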

Google SVP of Global Affairs Kent Walker called the ruling a “big win for innovation, interoperability & computing.”


Competition challenge to Facebook’s ‘superprofiling’ of users sparks referral to Europe’s top court

A German court that’s considering Facebook’s appeal against a pioneering pro-privacy order by the country’s competition authority to stop combining user data without consent has said it will refer questions to Europe’s top court.

In a press release today the Düsseldorf court writes [translated by Google]: “…the Senate has come to the conclusion that a decision on the Facebook complaints can only be made after referring to the Court of Justice of the European Union (ECJ).

“The question of whether Facebook is abusing its dominant position as a provider on the German market for social networks because it collects and uses the data of its users in violation of the GDPR can not be decided without referring to the ECJ. Because the ECJ is responsible for the interpretation of European law.”

The Bundeskartellamt (Federal Cartel Office, FCO)’s ‘exploitative abuse’ case links Facebook’s ability to gather data on users of its products from across the web, via third party sites (where it deploys plug-ins and tracking pixels), and across its own suite of products (Facebook, Instagram, WhatsApp, Oculus), to its market power — asserting this data-gathering is not legal under EU privacy law as users are not offered a choice.

The associated competition contention, therefore, is that inappropriate contractual terms allow Facebook to build a unique database for each individual user and unfairly gain market power over rivals who don’t have such broad and deep reach into users’ personal data.

The FCO’s case against Facebook is seen as highly innovative as it combines the (usually) separate (and even conflicting) tracks of competition and privacy law — offering the tantalizing prospect, were the order to actually get enforced, of a structural separation of Facebook’s business empire without having to order a break-up of its various business units.

However enforcement at this point — some five years after the FCO started investigating Facebook’s data practices in March 2016 — is still a big if.

Soon after the FCO’s February 2019 order to stop combining user data, Facebook succeeded in blocking the order via a court appeal in August 2019.

But then last summer Germany’s federal court unblocked the ‘superprofiling’ case — reviving the FCO’s challenge to the tech giant’s data-harvesting-by-default.

The latest development means another long wait to see whether competition law innovation can achieve what the EU’s privacy regulators have so far failed to do — with multiple GDPR challenges against Facebook still sitting undecided on the desk of the Irish Data Protection Commission.

That said, it’s fair to say that neither route looks capable of ‘moving fast and breaking’ platform power at this point.

In its opinion the Düsseldorf court does appear to raise questions over the level of Facebook’s data collection, suggesting the company could avoid antitrust concerns by offering users a choice to base profiling on only the data they upload themselves rather than on a wider range of data sources, and querying its use of Instagram and Oculus data.

But it also found fault with the FCO’s approach — saying Facebook’s US and Irish business entities were not granted a fair hearing before the order against its German sister company was issued, among other procedural quibbles.

Referrals to the EU’s Court of Justice can take years to return a final interpretation.

In this case the ECJ will likely be asked to consider whether the FCO has exceeded its remit, although the exact questions being referred by the court have not been confirmed — with a written reference set to be issued in the next few weeks, per its press release.

In a statement responding to the court’s announcement today, a Facebook spokesperson said:

“Today, the Düsseldorf Court has expressed doubts as to the legality of the Bundeskartellamt’s order and decided to refer questions to the Court of Justice of the European Union. We believe that the Bundeskartellamt’s order also violates European law.”

Uber under pressure over facial recognition checks for drivers

Uber’s use of facial recognition technology for a driver identity system is being challenged in the UK, where the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE) have called for Microsoft to suspend the ride-hailing giant’s use of its B2B facial recognition after finding multiple cases where drivers were misidentified and went on to have their licence to operate revoked by Transport for London (TfL).

The union said it has identified seven cases of “failed facial recognition and other identity checks” leading to drivers losing their jobs and license revocation action by TfL.

When Uber launched the “Real Time ID Check” system in the UK, in April 2020, it said it would “verify that driver accounts aren’t being used by anyone other than the licensed individuals who have undergone an Enhanced DBS check”. It said then that drivers could “choose whether their selfie is verified by photo-comparison software or by our human reviewers”.

In one misidentification case the ADCU said the driver was dismissed from employment by Uber and his license was revoked by TfL. The union adds that it was able to assist the member to establish his identity correctly, forcing Uber and TfL to reverse their decisions. But it highlights concerns over the accuracy of the Microsoft facial recognition technology — pointing out that the company suspended the sale of the system to US police forces in the wake of the Black Lives Matter protests of last summer.

Research has shown that facial recognition systems can have an especially high error rate when used to identify people of color — and the ADCU cites a 2018 MIT study which found Microsoft’s system can have an error rate as high as 20% (accuracy was lowest for dark-skinned women).

The union said it’s written to the Mayor of London to demand that all TfL private hire driver license revocations based on Uber reports using evidence from its Hybrid Real Time Identification systems be immediately reviewed.

Microsoft has been contacted for comment on the call for it to suspend Uber’s licence for its facial recognition tech.

The ADCU said Uber rushed to implement a workforce electronic surveillance and identification system as part of a package of measures implemented to regain its license to operate in the UK capital.

Back in 2017, TfL made the shock decision not to grant Uber a licence renewal — ratcheting up regulatory pressure on its processes and maintaining this hold in 2019 when it again deemed Uber ‘not fit and proper’ to hold a private hire vehicle licence.

Safety and security failures were a key reason cited by TfL for withholding Uber’s licence renewal.

Uber has challenged TfL’s decision in court and it won another appeal against the licence suspension last year — but the renewal granted was for only 18 months (not the full five years). It also came with a laundry list of conditions — so Uber remains under acute pressure to meet TfL’s quality bar.

Now, though, labor activists are piling pressure on Uber from the other direction too — pointing out that no regulatory standard has been set around the workplace surveillance technology that the ADCU says TfL encouraged Uber to implement. No equalities impact assessment has even been carried out by TfL, it adds.

WIE confirmed to TechCrunch that it’s filing a discrimination claim in the case of one driver, called Imran Raja, who was dismissed after Uber’s Real ID check — and had his license revoked by TfL.

His licence was subsequently restored — but only after the union challenged the action.

A number of other Uber drivers who were also misidentified by Uber’s facial recognition checks will be appealing TfL’s revocation of their licences via the UK courts, per WIE.

A spokeswoman for TfL told us it is not a condition of Uber’s licence renewal that it must implement facial recognition technology — only that Uber must have adequate safety systems in place.

The relevant condition of its provisional licence on ‘driver identity’ states:

ULL shall maintain appropriate systems, processes and procedures to confirm that a driver using the app is an individual licensed by TfL and permitted by ULL to use the app.

We’ve also asked TfL and the UK’s Information Commissioner’s Office for a copy of the data protection impact assessment Uber says was carried out before the Real-Time ID Check was launched — and will update this report if we get it.

Uber, meanwhile, disputes the union’s assertion that its use of facial recognition technology for driver identity checks risks automating discrimination because it says it has a system of manual (human) review in place that’s intended to prevent failures.

Though it accepts that the system clearly failed in the case of Raja — who only got his Uber account back (and an apology) after the union’s intervention.

Uber said its Real Time ID system involves an automated ‘picture matching’ check on a selfie that the driver must provide at the point of log in, with the system comparing that selfie with a (single) photo of them held on file. 

If there’s no machine match, the system sends the query to a three-person human review panel to conduct a manual check. Uber said checks will be sent to a second human panel if the first can’t agree. 
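
Taken at face value, Uber’s description amounts to a tiered decision flow: machine match first, human panels as fallback. Here is a rough sketch of that logic with entirely hypothetical names and stub implementations (Uber hasn’t published its actual code):

```java
import java.util.List;

public class RealTimeIdCheckSketch {

    enum Verdict { MATCH, NO_MATCH, UNDECIDED }

    public Verdict verify(byte[] loginSelfie, byte[] photoOnFile) {
        if (machineMatch(loginSelfie, photoOnFile)) {
            return Verdict.MATCH;                              // automated picture-matching check passes
        }
        Verdict first = panelReview(loginSelfie, photoOnFile); // first three-person human panel
        if (first != Verdict.UNDECIDED) {
            return first;
        }
        return panelReview(loginSelfie, photoOnFile);          // second panel if the first can't agree
    }

    // Stand-in for the face-verification model (in Uber's case, Microsoft's
    // B2B facial recognition); always 'no match' here so the panels are exercised.
    private boolean machineMatch(byte[] selfie, byte[] onFile) {
        return false;
    }

    // Stand-in for a three-person review: unanimity decides, a split is UNDECIDED.
    private Verdict panelReview(byte[] selfie, byte[] onFile) {
        List<Boolean> votes = List.of(true, true, true);       // placeholder votes
        long yes = votes.stream().filter(v -> v).count();
        if (yes == votes.size()) return Verdict.MATCH;
        if (yes == 0) return Verdict.NO_MATCH;
        return Verdict.UNDECIDED;
    }
}
```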

In a statement the tech giant told us:

“Our Real-Time ID Check is designed to protect the safety and security of everyone who uses the app by ensuring the correct driver or courier is using their account. The two situations raised do not reflect flawed technology — in fact one of the situations was a confirmed violation of our anti-fraud policies and the other was a human error.

“While no tech or process is perfect and there is always room for improvement, we believe the technology, combined with the thorough process in place to ensure a minimum of two manual human reviews prior to any decision to remove a driver, is fair and important for the safety of our platform.”

Uber addressed two of the cases referred to by the ADCU. In one instance, it said, a driver had shown a photo during the Real-Time ID Check instead of taking a selfie as required to carry out the live ID check — hence it argues it was not wrong for the ID check to have failed, as the driver was not following the correct protocol.

In the other instance Uber blamed human error on the part of its manual review team(s) who (twice) made an erroneous decision. It said the driver’s appearance had changed and its staff were unable to recognize the face of the (now bearded) man who sent the selfie as the same person in the clean-shaven photo Uber held on file.

Uber was unable to provide details of what happened in the other five identity check failures referred to by the union.

It also declined to specify the ethnicities of the seven drivers the union says were misidentified by its checks.

Asked what measures it’s taking to prevent human errors leading to more misidentifications in future, Uber declined to provide a response.

Uber said it has a duty to notify TfL when a driver fails an ID check — a step which can lead to the regulator suspending the license, as happened in Raja’s case. So any biases in its identity check process clearly risk having disproportionate impacts on affected individuals’ ability to work.

WIE told us it knows of three TfL licence revocations that relate solely to facial recognition checks.

“We know of more [UberEats] couriers who have been deactivated but no further action since they are not licensed by TfL,” it noted.

TechCrunch also asked Uber how many driver deactivations have been carried out and reported to TfL in which it cited facial recognition in its testimony to the regulator — but again the tech giant declined to answer our questions.

WIE told us it has evidence that facial recognition checks are incorporated into geo-location-based deactivations Uber carries out.

It said that in one case a driver who had their account revoked was given an explanation by Uber relating solely to location but TfL accidentally sent WIE Uber’s witness statement — which it said “included facial recognition evidence”.

That suggests a wider role for facial recognition technology in Uber’s identity checks vs the one the ride-hailing giant gave us when explaining how its Real Time ID system works. (Again, Uber declined to answer follow up questions about this or provide any other information beyond its on-the-record statement and related background points.)

But even just focusing on Uber’s Real Time ID system there’s the question of how much say Uber’s human review staff actually have in the face of machine suggestions combined with the weight of wider business imperatives (like an acute need to demonstrate regulatory compliance on the issue of safety).

James Farrar, the founder of WIE, queries the quality of the human checks Uber has put in place as a backstop for facial recognition technology which has a known discrimination problem.

“Is Uber just confecting legal plausible deniability of automated decision making or is there meaningful human intervention,” he told TechCrunch. “In all of these cases, the drivers were suspended and told the specialist team would be in touch with them. A week or so typically would go by and they would be permanently deactivated without ever speaking to anyone.”

“There is research out there to show when facial recognition systems flag a mismatch humans have bias to confirm the machine. It takes a brave human being to override the machine. To do so would mean they would need to understand the machine, how it works, its limitations and have the confidence and management support to overrule the machine,” Farrar added. “Uber employees have the risk of Uber’s license to operate in London to consider on one hand and what… on the other? Drivers have no rights and they are in excess so expendable.”

He also pointed out that Uber has previously said in court that it errs on the side of customer complaints rather than give the driver the benefit of the doubt. “With that in mind can we really trust Uber to make a balanced decision with facial recognition?” he asked.

Farrar further questioned why Uber and TfL don’t show drivers the evidence that’s being relied upon to deactivate their accounts — to give them a chance to challenge it via an appeal on the actual substance of the decision.

“IMHO this all comes down to tech governance,” he added. “I don’t doubt that Microsoft facial recognition is a powerful and mostly accurate tool. But the governance of this tech must be intelligent and responsible. Microsoft are smart enough themselves to acknowledge this as a limitation.

“The prospect of Uber pressured into surveillance tech as a price of keeping their licence… and a 94% BAME workforce with no worker rights protection from unfair dismissal is a recipe for disaster!”

The latest pressure on Uber’s business processes follows hard on the heels of a major win for Farrar and other former Uber drivers and labor rights activists after years of litigation over the company’s bogus claim that drivers are ‘self-employed’, rather than workers under UK law.

On Tuesday Uber responded to last month’s Supreme Court dismissal of its appeal, saying it would now treat drivers as workers in the market — expanding the benefits it provides.

However the litigants immediately pointed out that Uber’s ‘deal’ ignored the Supreme Court’s assertion that working time should be calculated when a driver logs onto the Uber app. Instead Uber said it would calculate working time entitlements when a driver accepts a job — meaning it’s still trying to avoid paying drivers for time spent waiting for a fare.

The ADCU therefore estimates that Uber’s ‘offer’ underpays drivers by between 40% and 50% of what they are legally entitled to — and has said it will continue its legal fight to get a fair deal for Uber drivers.

At an EU level, where regional lawmakers are looking at how to improve conditions for gig workers, the tech giant is now pushing for an employment law carve out for platform work — and has been accused of trying to lower legal standards for workers.

In other Uber-related news this month, a court in the Netherlands ordered the company to hand over more of the data it holds on drivers, following another ADCU+WIE challenge — although the court rejected the majority of the drivers’ requests for more data. Notably, though, it did not object to drivers seeking to use data rights established under EU law to obtain information collectively to further their ability to collectively bargain against a platform — paving the way for more (and more carefully worded) challenges as Farrar spins up his data trust for workers.

The applicants also sought to probe Uber’s use of algorithms for fraud-based driver terminations under an article of EU data protection law that provides for a right not to be subject to solely automated decisions in instances where there is a legal or significant effect. In that case the court accepted Uber’s explanation at face value that fraud-related terminations had been investigated by a human team — and that the decisions to terminate involved meaningful human decisions.

But the issue of meaningful human intervention/oversight of platforms’ algorithmic suggestions/decisions is shaping up to be a key battleground in the fight to regulate the human impacts of, and societal imbalances flowing from, powerful platforms which have both a god-like view of users’ data and an allergy to complete transparency.

The latest challenge to Uber’s use of facial recognition-linked terminations shows that interrogation of the limits and legality of its automated decisions is far from over — really, this work is just getting started.

Uber’s use of geolocation for driver suspensions is also facing legal challenge.

Pan-EU legislation now being negotiated by the bloc’s institutions also aims to increase platform transparency requirements — with the prospect of added layers of regulatory oversight and even algorithmic audits coming down the pipe for platforms in the near future.

Last week the same Amsterdam court that ruled on the Uber cases also ordered India-based ride-hailing company Ola to disclose data about its facial-recognition-based ‘Guardian’ system — aka its equivalent to Uber’s Real Time ID system. The court said Ola must provide applicants with a wider range of data than it currently does — including disclosing a ‘fraud probability profile’ it maintains on drivers and data within a ‘Guardian’ surveillance system it operates.

Farrar says he’s thus confident that workers will get transparency — “one way or another”. And after years fighting Uber through UK courts over its treatment of workers, his tenacity in pursuit of rebalancing platform power cannot be in doubt.

 

Court overturns Amsterdam’s three-district ban on Airbnb rentals

A ban by Amsterdam authorities on homeowners offering their properties for vacation rentals in three central districts of the popular tourist city has been overturned after a court ruled it has no basis in law.

City authorities had been responding to concerns over the impact of tourist platforms like Airbnb on quality of life for residents.

An update to the city’s website notes that, from tomorrow, it will be possible for property owners to apply for a holiday rental permit in the three neighborhoods where vacation rentals had been entirely banned from July 1 last year.

City authorities write that they are studying the court ruling and will update the page “as soon as more is known.”

Amsterdam’s authorities took the step of prohibiting vacation rentals in the Burgwallen-Oude Zijde, Burgwallen-Nieuwe Zijde and Grachtengordel-Zuid districts last summer after a consultation process found widespread support among residents for a ban.

Authorities said strong growth in tourist rentals was impacting quality of life for residents.

It has also previously introduced a permit system to control vacation rentals in other districts of the city — which limits rentals to (currently) a maximum of 30 nights per year and for a maximum of four people per rental.

A further condition of the permit states that: “Your guests [must] not cause any inconvenience.”

Following the court ruling that permit system will operate in the three central districts too.

The city’s ban on vacation rentals in the central districts was challenged by an association (Amsterdam Gastvrij) that represents the interests of homeowners who rent their properties through Airbnb and other platforms. They had argued that the Housing Act 2014 did not provide a legal basis for a prohibition on holiday rental. 

The Court of Amsterdam agreed, writing in its judgement that “a system of permits cannot contain a total prohibition.”

“Anyone who meets the conditions of the permit is in principle eligible for a permit. A total ban is a major infringement of the right to property and the free movement of services and will only be seen as a justified measure in very exceptional circumstances,” it further emphasized. 

An Airbnb spokesperson told us the company was not involved in the proceedings to challenge the ban but was keen to highlight the outcome.

However the court’s verdict leaves room for the city to amend legislation to add new conditions to the permit system that could include a “quality of life” consideration (which it does not currently).

The court also suggests the possibility of a quota system with a night criterion being introduced under existing legislation, as another means of using the permit system to manage quality of life. It further suggests city authorities could enforce residential (rather than touristic) purposes for houses via a zoning plan. So there are alternative avenues for Amsterdam’s officials to explore as a policy tool to limit activity on Airbnb et al.

At the same time the court ruling underlines the challenges European cities face in trying to regulate the impacts of rental platforms on areas like housing availability (and affordability) and wider quality of life issues for residents dealing with over-tourism (not currently an issue, of course, given ongoing travel restrictions related to the coronavirus pandemic).

In recent years a number of major tourist cities in Europe have expressed public frustration over vacation rental platforms — penning an open letter to the European Commission back in 2019 that called for “strong legal obligations for platforms to cooperate with us in registration-schemes and in supplying rental-data per house that is advertised on their platforms.”

“Cities must protect the public interest and eliminate the adverse effects of short-term holiday rental in various ways. More nuisances, feelings of insecurity and a ‘touristification’ of their neighbourhoods is not what our residents want. Therefore (local) governments should have the possibility to introduce their own regulations depending on the local situation,” they also wrote, urging EU policymakers to support a rethink of the rules.

Since then the Commission has announced a limited data-sharing arrangement with the leading vacation rental platforms, saying it wants to encourage “balanced” development of peer-to-peer rentals.

Last year the Dutch government pressed the Commission to go further on access to vacation rental platforms’ data — pushing for a provision to be included in a major planned update to pan-EU rules wrapping digital services, aka the Digital Services Act (DSA).

The DSA proposal, which is now going through the EU’s co-legislative process, is broadly targeted at standardizing processes for tackling illegal goods and services — so it could have implications for vacation platforms in areas like data-sharing where it relates to illegal vacation rentals (i.e., where a property is advertised without a required permit).

 

Dutch court rejects Uber drivers’ ‘robo-firing’ charge but tells Ola to explain algo-deductions

Uber has had a good result in the Netherlands, where its European business is headquartered, after a court rejected litigation that had alleged it uses algorithms to terminate drivers.

The ride-hailing giant has also been largely successful in fending off wide-ranging requests for data from drivers wanting to obtain more of the personal data it holds on them.

A number of Uber drivers filed the suits last year with the support of the App Drivers & Couriers Union (ADCU) in part because they are seeking to port data held on them in Uber’s platform to a data trust (called Worker Info Exchange) that they want to set up, administered by a union, to further their ability to collectively bargain against the platform giant.

The court did not object to them seeking data, saying such a purpose does not stand in the way of exercising their personal data access rights, but it rejected most of their specific requests — at times saying they were too general or had not been sufficiently explained or must be balanced against other rights (such as passenger privacy).

The ruling hasn’t gone entirely Uber’s way, though, as the court ordered the tech giant to hand over a little more data to the litigating drivers than it has so far. While it rejected driver access to information including manual notes about them, tags and reports, Uber has been ordered to provide drivers with individual ratings given by riders on an anonymized basis — with the court giving it two months to comply.

In another win for Uber, the court did not find that its (automated) dispatch system results in a “legal or similarly significant effect” for drivers under EU law — and therefore has allowed that it be applied without additional human oversight.

The court also rejected a request by the applicants that data Uber does provide to them must be provided via a CSV file or API, finding that the PDF format Uber has provided is sufficient to comply with legal requirements.

In response to the judgements, an Uber spokesman sent us this statement:

“This is a crucial decision. The Court has confirmed Uber’s dispatch system does not equate to automated decision making, and that we provided drivers with the data they are entitled to. The Court also confirmed that Uber’s processes have meaningful human involvement. Safety is the number one priority on the Uber platform, so any account deactivation decision is taken extremely seriously with manual reviews by our specialist team.”

The ADCU said the litigation has established that drivers taking collective action to seek access to their data is not an abuse of data protection rights — and lauded the aspects of the judgement where Uber has been ordered to hand over more data.

It also said it sees potential grounds for appeal, saying it’s concerned that some aspects of the judgments unduly restrict the rights of drivers, which it said could interfere with the right of workers to access employment rights — “to the extent they are frustrated in their ability to validate the fare basis and compare earnings and operating costs”.

“We also feel the court has unduly put the burden of proof on workers to show they have been subject to automated decision making before they can demand transparency of such decision making,” it added in a press release. “Similarly, the court has required drivers to provide greater specificity on the personal data sought rather than placing the burden on firms like Uber and Ola to clearly explain what personal data is held and how it is processed.”

The two Court of Amsterdam judgements can be found here and here (both are in Dutch; we’ve used Google Translate for the sections quoted below).

Our earlier reports on the legal challenges can be found here and here.

The Amsterdam court has also ruled on similar litigation filed against India-based Ola last year — ordering the ride-hailing company to hand over a wider array of data than it currently does; and also saying it must explain the main criteria for a ‘penalties and deductions’ algorithm that can be applied to drivers’ earnings.

The judgement is available here (in Dutch). See below for more details on the Ola judgement.

Commenting in a statement, James Farrar, a former Uber driver who is now director of the aforementioned Worker Info Exchange, said: “This judgment is a giant leap forward in the struggle for workers to hold platform employers like Uber and Ola Cabs accountable for opaque and unfair automated management practices. Uber and Ola Cabs have been ordered to make transparent the basis for unfair dismissals, wage deductions and the use of surveillance systems such as Ola’s Guardian system and Uber’s Real Time ID system. The court completely rejected Uber & Ola’s arguments against the right of workers to collectively organize their data and establish a data trust with Worker Info Exchange as an abuse of data access rights.”

In an interesting (related) development in Spain, which we reported on yesterday, the government there has said it will legislate in a reform of the labor law aimed at delivery platforms that will require them to provide workers’ legal representatives with information on the rules of any algorithms that manage and assess them.

Court did not find Uber does ‘robo firings’

In one of the lawsuits, the applicants had argued that Uber had infringed their right not to be subject to automated decision-making when it terminated their driver accounts and also that it has not complied with its transparency obligations (within the meaning of GDPR Articles 13, 14 and 15).

Article 22 GDPR gives EU citizens the right not to be subject to a decision based solely on automated processing (including profiling) where the decision has legal or otherwise significant consequences for them. There must be meaningful human interaction in the decision-making process for it to not be considered solely automated processing.

Uber argued that it does not carry out automated terminations of drivers in the region and therefore that the law does not apply — telling the court that potential fraudulent activities are investigated by a specialized team of Uber employees (aka the ‘EMEA Operational Risk team’).

And while it said that the team makes use of software with which potential fraudulent activities can be detected, investigations are carried out by employees following internal protocols which require them to analyze potential fraud signals and the “facts and circumstances” to confirm or rule out the existence of fraud.

Uber said that if a consistent pattern of fraud is detected, a decision to terminate requires a unanimous decision from two employees of the Risk team. When the two employees do not agree, Uber says a third conducts an investigation — presumably to cast a deciding vote.
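
In other words, per Uber’s account, the gate on a termination is reviewer unanimity, with a third investigator as tiebreaker. A rough rendering of that logic (all names hypothetical; this is our reading of Uber’s description, not its code):

```java
import java.util.function.Supplier;

public final class FraudTerminationRuleSketch {

    enum Decision { TERMINATE, NO_ACTION }

    // Two Risk-team reviewers must agree; if they split, a third
    // investigation supplies the deciding verdict.
    static Decision decide(Decision reviewerOne, Decision reviewerTwo,
                           Supplier<Decision> thirdInvestigation) {
        if (reviewerOne == reviewerTwo) {
            return reviewerOne;              // unanimous: decision stands
        }
        return thirdInvestigation.get();     // disagreement: third review decides
    }
}
```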

It provided the court with explanations for each of the terminations of the litigating applicants — and the court writes that Uber’s explanations of its decision-making process for terminations were not disputed. “In the absence of evidence to the contrary, the court will assume that the explanation provided by Uber is correct,” it wrote.

Interestingly, in the case of one of the applicants, Uber told the court they had been using (unidentified) software to manipulate the Uber Driver app in order to identify more expensive journeys by being able to view the passenger’s destination before accepting the ride — enabling them to cherry pick jobs, a practice that’s against Uber’s terms. Uber said the driver was warned that if they used the software again they would be terminated. But a few days later they did so — leading to another investigation and a termination.

However it’s worth noting that the activity in question dates back to 2018. And Uber has since changed how its service operates to provide drivers with information about the destination before they accept a ride — a change it flagged in response to a recent UK Supreme Court ruling that confirmed drivers who brought the challenge are workers, not self employed.

Some transparency issues were found

On the associated question of whether Uber had violated its transparency obligations to terminated drivers, the court found that in the cases of two of the four applicants Uber had done so (but not for the other two).

“Uber did not clarify which specific fraudulent acts resulted in their accounts being deactivated,” the court writes in the case of the two applicants who it found had not been provided with sufficient information related to their terminations. “Based on the information provided by Uber, they cannot check which personal data Uber used in the decision-making process that led to this decision. As a result, the decision to deactivate their accounts is insufficiently transparent and verifiable. As a result, Uber must provide [applicant 2] and [applicant 4] with access to their personal data pursuant to Article 15 of the GDPR insofar as they were the basis for the decision to deactivate their accounts, in such a way that they are able to verify the correctness and lawfulness of the processing of their personal data.”

The court dismissed Uber’s attempt to evade disclosure on the grounds that providing more information would give the drivers insight into its anti-fraud detection systems which it suggested could then be used to circumvent them, writing: “In this state of affairs, Uber’s interest in refusing access to the processed personal data of [applicant 2] and [applicant 4] cannot outweigh the right of [applicant 2] and [applicant 4] to access their personal data.”

Compensation claims related to the charges were rejected — including in the case of the two applicants who were not provided with sufficient data on their terminations, with the court saying that they had not provided “reasons for damage to their humanity or good name or damage to their person in any other way”.

The court has given Uber two months to provide the two applicants with personal data pertaining to their terminations. No penalty has been ordered.

“For the time being, the trust is justified that Uber will voluntarily comply with the order for inspection [of personal data] and will endeavor to provide the relevant personal data,” it adds.

No legal/significant effect from Uber’s algo-dispatch

The litigants’ data access case also sought to challenge Uber’s algorithmic management of drivers — through its use of an algorithmic batch matching system to allocate rides — arguing that, under EU law, the drivers had a right to information about automated decision making and profiling used by Uber to run the service in order to be able to assess impacts of that automated processing.

However the court did not find that automated decision-making “within the meaning of Article 22 GDPR” takes place in this instance, accepting Uber’s argument that “the automated allocation of available rides has no legal consequences and does not significantly affect the data subject”.

Again, the court found that the applicants had “insufficiently explained” their request.

From the judgement:

It has been established between the parties that Uber uses personal data to make automated decisions. This also follows from section 9 ‘Automated decision-making’ included in its privacy statement. However, this does not mean that there is an automated decision-making process as referred to in Article 22 GDPR. After all, this requires that there are also legal consequences or that the data subject is otherwise significantly affected. The request is only briefly explained on this point. The Applicants argue that Uber has not provided sufficient concrete information about its anti-fraud processes and has not demonstrated any meaningful human intervention. Unlike in the case with application number C / 13/692003 / HA RK 20/302, in which an order is also given today, the applicants did not explain that Uber concluded that they were guilty of fraud. The extent to which Uber has taken decisions about them based on automated decision-making is therefore insufficiently explained. Although it is obvious that the batched matching system and the upfront pricing system will have a certain influence on the performance of the agreement between Uber and the driver, it has not been found that there is a legal consequence or a significant effect, as referred to in the Guidelines. Since Article 15 paragraph 1 under h GDPR only applies to such decisions, the request under I (iv) is rejected.

Ola must hand over data and algo criteria

In this case the court ruled that Ola must provide applicants with a wider range of data than it is currently doing — including a ‘fraud probability profile’ it maintains on drivers and data within a ‘Guardian’ surveillance system it operates.

The court also found that algorithmic decisions Ola uses to make deductions from driver earnings do fall under Article 22 of the GDPR, as there is no significant human intervention while the discounts/fines themselves may have a significant effect on drivers.

On this it ordered Ola to provide applicants with information on how these algorithmic choices are made by communicating “the main assessment criteria and their role in the automated decision… so that [applicants] can understand the criteria on the basis of which the decisions were taken and they are able to check the correctness and lawfulness of the data processing”.

Ola has been contacted for comment.

Facebook challenges FTC’s antitrust case with Big Tech’s tattered playbook

Facebook has challenged the FTC’s antitrust case against it using a standard playbook that questions the agency’s arguably expansive approach to defining monopolies. But the old arguments of “we’re not a monopoly because we never raised prices” and “how can it be anti-competitive if we never allowed competition” may soon be challenged by new doctrine and the new administration.

In a document filed today, which you can read at the bottom of this post, Facebook lays out its case with a tone of aggrieved pathos:

By a one-vote margin, in the fraught environment of relentless criticism of Facebook for matters entirely unrelated to antitrust concerns, the agency decided to bring a case against Facebook that ignores its own prior decisions, controlling precedent, and the limits of its statutory authority.

Yes, Facebook is the victim here, and don’t you forget it. (Incidentally, the FTC, like the FCC, is designed to split 3:2 along party lines, so the “one-vote margin” is what one sees for many important measures.)

But after the requisite crying comes the reluctant explanation that the FTC doesn’t know its own business. The suit against Facebook, the company argues, should be spiked by the judge because it fails along three lines.

First, the FTC does not “allege a plausible relevant market.” After all, to have a monopoly, one must have a market over which to exert that monopoly. The FTC, Facebook argues, has not identified one, alleging only a nebulous “personal social networking” market; “no court has ever held that such a free goods market exists for antitrust purposes,” the company says, and the agency ignores the “relentlessly competitive” advertising market that actually makes Facebook its money.

Ultimately, the FTC’s efforts to structure a crabbed “use” market for a free service in which it can claim a large Facebook “share” are artificial and incoherent.

The implication here is not just that the FTC has failed to define the social media market (and Facebook won’t do so itself), but that such a market may not even exist, because social media is free and the money is made in a different market. This is a variation on a standard Big Tech argument that amounts to “because we do not fall under any of the existing categories, we are effectively unregulated.” After all, you cannot regulate a social media company by its advertising practices or vice versa (though they may be intertwined in some ways, they are distinct businesses in others).

Thus Facebook attempts, like many before it, to slip through the cracks in the regulatory framework.

This continues with the second argument, which says that the FTC “cannot establish that Facebook has increased prices or restricted output because the agency acknowledges that Facebook’s products are offered for free and in unlimited quantities.”

The argument is literally that if the product is free to the consumer, it is by definition not possible for the provider to have a monopoly. When the FTC argues that Facebook controls 60% of the social media market (which of course doesn’t exist anyway), what does that even mean? 60% of zero dollars, or 100%, or 20%, is still zero.

The third argument is that the behaviors the FTC singles out — purchasing up-and-coming competitors for enormous sums and nipping others in the bud by restricting its own platform and data — are not only perfectly legal but that the agency has no standing to challenge them, having given its blessing before and having no specific illegal activity to point to at present.

Of course the FTC revisits mergers and acquisitions all the time, and there’s precedent for unraveling them long afterward if, for instance, new information comes to light that was not available during the review process.

“Facebook acquired a small photo-sharing service in 2012, Instagram … after that acquisition was reviewed and cleared by the FTC in a unanimous 5-0 vote,” the company argues. Leaving aside the absurd characterization of the billion-dollar purchase as “small,” leaks and disclosures of internal conversations contemporary with the acquisition have cast it in a completely new light. Facebook, then far less secure than it is today, was spooked and worried that Instagram might eat its lunch, so it decided it was better to buy than compete.

The FTC addresses this, and indeed many of the other points Facebook raises, in an FAQ it posted around the time of the original filing.

Now, some of these arguments may have seemed a little strange to you. Why should it matter, for instance, whether any money changes hands between consumer and company, if value is exchanged elsewhere contingent on those users’ engagement with the service? And how can the depredations of a company in the context of a free product that invades privacy (and has faced enormous fines for doing so) be judged by its actions in an adjacent market, like advertising?

The simple truth is that antitrust law has been stuck in a rut for decades, weighed down by doctrine that states that markets are defined by the price of a product and whether a company can increase it arbitrarily. A steel manufacturer that absorbs its competitors by undercutting them and then later raises prices when it is the only option is a simple example and the type that antitrust laws were created to combat.

If that seems needlessly simplistic, well, it’s more complicated in practice and has been effective in many circumstances — but the last 30 years have shown it to be inadequate to address the more complex multibusiness domains of the likes of Microsoft, Google and Facebook (to say nothing of TechCrunch parent company Verizon, which is a whole other matter).

The ascendance of Amazon is one of the best examples of the failure of antitrust doctrine and resulted in a breakthrough paper called “Amazon’s Antitrust Paradox” that pilloried these outdated ideas and showed how network effects led to subtler but no less effective anti-competitive practices. Establishment voices decried it as naive and overreaching, and progressive voices lauded it as the next wave of antitrust philosophy.

It seems that the latter camp may win out, as the author of this controversial paper, Lina Khan, has just been nominated for the vacant fifth commissioner position at the FTC.

Whether or not she is confirmed (she will face fierce opposition, no doubt, as an outsider plainly opposed to the status quo), her nomination validates her view as an important one. With Khan and her allies in charge at the FTC and elsewhere, the decades-old assumptions that Facebook relies on for its pro forma rejection of the FTC lawsuit may be challenged.

That may not matter for the present lawsuit, which is unlikely to be subject to said rules given its rather retrospective character, but the gloves will be off for the next round — and make no mistake, there will be a next round.

Federal Trade Commission v Facebook Inc Dcdce-20-03590 0056.1 by TechCrunch on Scribd

Uber loses gig workers rights challenge in UK Supreme Court

Uber has lost a long-running employment tribunal challenge in the UK’s Supreme Court — with the court dismissing the ride-hailing giant’s appeal and reaffirming earlier rulings that the drivers who brought the case are workers, not independent contractors.

The case, which dates back to 2016, has major ramifications for Uber’s business model in the UK — and likely regionally, as similar challenges are ongoing in European courts.

European Union lawmakers are also actively eyeing conditions for gig workers, so policymakers were already facing pressure to clarify the law around gig work — today’s ruling only increases that.

The UK Supreme Court judgement can be found here.

We’ve reached out to Uber for comment.


In recent days — and likely in anticipation of this verdict — Uber has kicked off a lobbying effort in Europe calling for deregulation of platform work.

Uber argues that without a carve-out from employment laws, platforms’ hands are tied over how far they can go to offer workers a better deal.

It says it’s pushing for some of the same ‘principles’ that featured in the Prop 22 ballot initiative, which ride-hailing giants Uber and Lyft spent hundreds of millions of dollars backing in California, going on to win a carve-out from employment reclassification for delivery and transport work there last year.

However, responding to Uber’s EU white paper this week, the academic research group Fairwork accused the company of downplaying its ability to make changes that would improve working conditions on its platform.

Instead, it said the tech giant is trying to legitimize a lower level of protection for platform workers than most European workers benefit from — urging lawmakers to focus on expanding and strengthening employment protections, not watering them down.