Epic sheathes Infinity Blade after Fortnite fan backlash

Epic, the maker of the insanely popular, cross-platform third-person shooter Fortnite, has ‘fessed up to a gameplay misstep when it dropped a super powerful new weapon into the battle royale arena earlier this month — triggering a major fan backlash.

Complaints boiled down to it being unfair for the overpowered weapon to exist in standard game modes, given the massive advantage bestowed on whoever happened to be lucky enough to find it.

Earlier this month Epic had trailed the forthcoming Infinity Blade as “a weapon fit for a king”.

It went on to unleash the super-powered weapon on December 11, shortly after releasing a Season 7 update — presumably intending to stoke Fortnite fans’ gaming itch.

Instead it managed to drastically upset the balance of play. Without adequate counter-weapons or strategies to prevail against the blade, Fortnite fans were rightly mad as hell.

But on Friday, three days after launching the blade, Epic pulled the “overpowered” weapon from the game — admitting it had failed to provide “good counters”, and was “re-evaluating our approach to Mythic items”.

Turns out even billions in funding and tens of millions of obsessively engaged fans can’t shield a games maker against making some piss-poor gameplay decisions.

A few days earlier Epic had posted a discussion thread on Reddit saying it wanted to provide “more context on item philosophy”, and trailing “upcoming changes to the Blade” — such as removing the ability of gamers to build and harvest when wielding the Blade so as to add some risk to holding it — so it was still hoping to win fans over at that point. And indeed appeared to be doubling down on its mythic items push.

Then it also wrote that its intention with adding a mythic tier of items to Fortnite is to provide “new and flavorful ways to interact with the map and generally shake up normal play across default modes”.

Which is of course another way of saying it doesn’t want its highly engaged fanbase to get bored and stop pouring cash into its coffers.

However Epic clearly failed to build the necessary balance into the Infinity Blade from the start. So pulling the blade was the right move, and Fortnite fans should be happy it’s realized it needs to rethink and factor in their concerns.

It’s not clear whether Epic’s re-evaluation will result in mythic items being ditched entirely.

Although, with the right balancing characteristics — such as being time-limited and/or locked to certain game modes — there could still be a place for a little epic chaos in Fortnite to further up the fun. Just don’t go doing anything too crazy, alright?

UK video games workers unionize over “wide-scale exploitation” and diversity issues

Working in video games might sound like a dream job to a 12-year-old Fortnite-loving kid but the day-to-day reality of grinding in the industry can be as unrelenting as fighting an end-of-level baddie.

Games devs are routinely corralled into ‘crunch’ to hit successive release deadlines and ensure a project gets delivered on time and on budget. Unpaid overtime is the norm. Long hours are certainly expected. And taking any holiday across vast swathes of the year can be heavily frowned upon, if not barred entirely.

From the outside looking in it’s hard not to conclude people’s passion for gaming is being exploited in the big business interest of shipping lucrative titles to millions of gamers.

In the UK that view is now more than just a perception, with the decision of a group of video games workers to unionize.

The Independent Workers Union of Great Britain (IWGB) said today it’s setting up a union branch for games workers, the first such in the country — and one of what’s claimed as just a handful in the world — with the aim of tackling what it dubs the “wide-scale exploitation” of video games workers.

In recent years the union has gained attention for supporting workers in the so-called ‘gig economy’, backing protests by delivery riders and drivers for companies including Uber and Deliveroo. But this is its first foray into representing games workers.

As well as seeking to tackle issues of excessive and often unpaid overtime (aka “crunch”) — with the union claiming some workers have reported clocking up as much as 100 hours a week — it says it will focus on the use of zero-hour contracts in the industry, especially among Quality Assurance testers (aka game testers). 

Zero-hour contracts refer to employment contracts with no minimum guaranteed hours of work. 

The IWGB says the branch also intends to shine a light on the industry’s lack of diversity and inclusion — and what it couches as a failure to tackle a “pervasive culture of homophobia and sexism”. So, um, it’s about ethics in the games industry itself this time.

Commenting in a statement, game worker and founding member of the IWGB‘s Games Workers Unite branch, Dec Peach, said: “For as long as I can remember it has been considered normal for games workers to endure zero-hours contracts, excessive unpaid overtime, and even sexism and homophobia as the necessary price to pay for the privilege of working in the industry. Now, as part of the IWGB, we will have the tools to fix this broken sector and create an ethical industry where it’s not only big game companies that thrive, but workers as well.”

In another supporting statement, IWGB general secretary Dr Jason Moyer-Lee added: “The game workers’ decision to unionise with the IWGB should be a wake up call for the UK’s gaming industry. The IWGB is proud to support these workers and looks forward to shining a massive spotlight on the industry.”

The UK games industry employs some 47,000 workers, according to UKIE — making it one of the largest such sectors in Europe.

The IWGB‘s Games Workers Unite branch will hold its first meeting on December 16, which the union says will be open to all past, current and “soon to be” workers in the industry — including contract, agency and casual workers, plus direct employees (with the exception of those with hiring and firing power).

It says it’s expecting “hundreds” of games workers to join in the first few months.

Prisma’s new AI-powered app, Lensa, helps the selfie camera lie

Prisma Labs, the startup behind the style transfer craze of a couple of years ago, has a new AI-powered iOS app for retouching selfies. An Android version of the app — which is called Lensa — is slated as coming in January.

It bills Lensa as a “one-button Photoshop”, offering a curated suite of photo-editing features intended to enhance portrait photos — including teeth whitening; eyebrow tinting; ‘face retouch’ which smooths skin tone and texture (but claims to do so naturally); and ‘eye contrast’ which is supposed to make your eye color pop a bit more (but doesn’t seem to do too much if, like me, you’re naturally dark eyed).

There’s also a background blur option for adding a little bokeh to make your selfie stand out from whatever unattractive clutter you’re surrounded by — much like the portrait mode that Apple added to iOS two years ago.

Lensa can also correct for lens distortion, such as if a selfie has been snapped too close. “Our algorithm reconstructs face in 3D and fixes those disproportions,” is how it explains that.

The last slider on the app’s face menu offers this feature, letting you play around with making micro-adjustments to the 3D mesh underpinning your face. (Which feels as weird to see as it sounds to type.)

Of course there’s no shortage of other smartphone apps out there on stores — and/or baked right into smartphones’ native camera apps — offering to ‘beautify’ selfies.

But the push-button pull here is that Lensa automatically — and, it claims, professionally — performs AI-powered retouching of your selfie. So you don’t have to do any manual tweaking yourself (though you also can if you like).

If you just snap a selfie you’ll see an already enhanced version of you. Who said the camera never lies? Thanks AI…

Prisma Labs’ new app, Lensa, uses machine learning to automagically edit selfies

Lensa also lets you tweak visual parameters across the entire photo, as per a standard photo-editing app, via an ‘adjust’ menu — which (at launch) offers sliders for exposure, contrast, saturation, fade, sharpen, temperature, tint, highlights and shadows.

While Lensa is free to download, an in-app subscription (costing $4.99 per month) lets you get a bit more serious about editing its AI-enhanced selfies — unlocking the ability to adjust all those parameters across just the face, or just the background.

Prisma Labs says that might be useful if, for example, you want to fix an underexposed selfie shot against a brighter background.

Paying for a subscription also removes watermarks and in-app ads.

“Lensa utilizes a bunch of Machine Learning algorithms to precisely extract face skin from the image and then retouching portraits like a professional artist,” is how it describes the app, adding: “The process is fully automated, but the user can set up an intensity level of the effect.”

The startup says it’s drawn on its eponymous style transfer app for Lensa’s machine learning as the majority of photos snapped and processed in Prisma are selfies — giving it a relevant heap of face data to train the photo-editing algorithms.

Having played around with Lensa I can say its natural looking instant edits are pretty seductive — in that it’s not immediately clear algorithmic fingers have gone in and done any polishing. At a glance you might just think oh, that’s a nice photo.

On closer inspection you can of course see the airbrushing that’s gone on but the polish is applied with enough subtlety that it can pass as naturally pleasing.

And natural edits is one of the USPs Prisma Labs is claiming for Lensa. “Our mission is to allow people to edit a portrait but keep it looking natural,” it tells us. (The other key feature it touts is automation, so it’s selling the time you’ll save not having to manually tweak your selfies.)

Anyone who suffers from a chronic skin condition might view Lensa as a welcome tool/alternative to make-up in an age of unrelenting selfies (when cameras that don’t lie can feel, well, exhausting).

But for those who object to AI stripping even skin-deep layers off of the onion of reality, Lensa’s subtle algorithmic fiddling might still come over as an affront.

Oath agrees to pay $5M to settle charges it violated children’s privacy

TechCrunch’s Verizon-owned parent, Oath, an ad tech division made from the merging of AOL and Yahoo, has agreed to pay around $5 million to settle charges that it violated a federal children’s privacy law.

The penalty is said to be the largest ever issued under COPPA.

The New York Times reported the story yesterday, saying the settlement will be announced by the New York attorney general’s office today.

At the time of writing the AG’s office could not be reached for comment.

We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which the company writes: “We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online.”

The spokesman also neither confirmed nor disputed the contents of the NYT report.

According to the newspaper, which cites the as-yet unpublished settlement documents, AOL, via its ad exchange, helped place adverts on hundreds of websites that it knew were targeted at children under 13 — such as Roblox.com and Sweetyhigh.com.

The ads were placed using children’s personal data, including cookies and geolocation, which the attorney general’s office said violated the Children’s Online Privacy Protection Act (COPPA) of 1998.

The NYT quotes attorney general, Barbara D. Underwood, describing AOL’s actions as “flagrantly” in violation of COPPA.

The $5M fine for Oath comes at a time when scrutiny is being dialled up on online privacy and ad tech generally, and around kids’ data specifically — with rising concern about how children are being tracked and ‘datafied’ online.

Earlier this year, a coalition of child advocacy, consumer and privacy groups in the US filed a complaint with the FTC asking it to investigate Google-owned YouTube over COPPA violations — arguing that while the site’s terms claim it’s aimed at users older than 13, content on YouTube is clearly targeting younger children, including cartoon videos, nursery rhymes and toy ads.

COPPA requires that companies provide direct notice to parents and obtain verifiable parental consent before collecting information online from children under 13.

Consent must also be sought for using or disclosing personal data from children. Or indeed for targeting kids with adverts linked to what they do online.

Personal data under COPPA includes persistent identifiers (such as cookies) and geolocation information, as well as data such as real names or screen names.

In the case of Oath, the NYT reports that even though AOL’s policies technically prohibited the use of its display ad exchange to auction ad space on kids’ websites, the company did so anyway — citing settlement documents covering the ad tech firm’s practices between October 2015 and February 2017.

According to these documents, an account manager for AOL in New York repeatedly — and erroneously — told a client, Playwire Media (which represents children’s websites such as Roblox.com), that AOL’s ad exchange could be used to sell ad space while complying with COPPA.

Playwire then used the exchange to place more than a billion ads on space that should have been covered by COPPA, the newspaper adds.

The paper also reports that AOL (via Advertising.com) also bought ad space on websites flagged as COPPA-covered from other ad exchanges.

It says Oath has since introduced technology to identify when ad space is deemed to be covered by COPPA and ‘adjust its practices’ accordingly — again citing the settlement documents.

As part of the settlement the ad tech division of Verizon has agreed to create a COPPA compliance program, to be overseen by a dedicated executive or officer; and to provide annual training on COPPA compliance to account managers and other employees who work with ads on kids’ websites.

Oath also agreed to destroy personal information it has collected from children.

It’s not clear whether the censured practices ended in February 2017 or continued until more recently. We asked Oath for clarification but it did not respond to the question.

It’s also not clear whether AOL was tracking and targeting adverts at children in the EU. If Oath was doing so but stopped before May 25 this year, it should avoid the possibility of any penalty under Europe’s tough new privacy framework, GDPR, which came into force that day — beefing up protection around children’s data by letting member states set the age at which children can consent to their own data being processed at between 13 and 16 years old.

GDPR also steeply hikes penalties for privacy violations (up to a maximum of 4% of global annual turnover).

Prior to the regulation a European data protection directive was in force across the bloc but it’s GDPR that has strengthened protections in this area with the new provision on children’s data.

‘Google You Owe Us’ claimants aren’t giving up on UK Safari workaround suit

Lawyers behind a UK class-action style compensation litigation against Google for privacy violations have filed an appeal against a recent High Court ruling blocking the proceeding.

In October Mr Justice Warby ruled the case could not proceed on legal grounds, finding the claimants had not demonstrated a basis for bringing a compensation claim.

The case relates to the so-called ‘Safari workaround’ Google used between 2011 and 2012 to override iPhone privacy settings and track users without consent.

The civil legal action — whose claimants refer to themselves as ‘Google You Owe Us’ — was filed last year by one named iPhone user, Richard Lloyd, the former director of consumer group Which?, who is seeking to represent, via a representative action, the millions of UK users whose Safari settings the complaint alleges Google similarly ignored.

Lawyers for the claimants argued that sensitive personal data such as iPhone users’ political affiliation, sexual orientation, financial situation and more had been gathered by Google and used for targeted advertising without their consent.

Google You Owe Us proposed the sum of £750 per claimant for the company’s improper use of people’s data — which could result in a bill of up to £3BN (based on the suit’s intent to represent ~4.4 million UK iPhone users).
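Sketching the math behind that headline figure (a back-of-the-envelope calculation: the £750 is the claimants’ proposed per-person sum, and the ~4.4 million class size is the suit’s own estimate):

```python
# Back-of-the-envelope: proposed per-claimant sum times estimated class size
per_claimant_gbp = 750
estimated_class_size = 4_400_000  # suit's estimate of affected UK iPhone users

total_gbp = per_claimant_gbp * estimated_class_size
print(f"£{total_gbp / 1e9:.1f}BN")  # ≈ £3.3BN if the full estimated class joined
```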

However UK law requires claimants demonstrate they suffered damage as a result of violation of the relevant data protection rules.

And in his October ruling Justice Warby found that the “bare facts pleaded in this case” were not “individualised” — hence he saw no case for damages.

He also ruled against the case proceeding on another legal point, related to defining a class for the case — finding “the essential requirements for a representative action are absent” because he said individuals in the group do not have the “same interest” in the claim.

Lodging its appeal today in the Court of Appeal, Google You Owe Us described the High Court judgement as disappointing, and said it highlights the barriers that remain for consumers seeking to use collective actions as a route to redress in England and Wales.

In the US, meanwhile, Google settled with the FTC over a similar cookie tracking issue back in 2012 — agreeing to pay $22.5M in that instance.

Countering Justice Warby’s earlier suggestion that affected class members in the UK case did not care about their data being taken without permission, Google You Owe Us said, on the contrary, affected class members have continued to show their support for the case on Facebook — noting that more than 20,000 have signed up for case updates.

For the appeal, the legal team will argue that the High Court judgment was incorrect in stating the class had not suffered damage within the meaning of the UK’s Data Protection Act, and that the class had not all suffered in the same way as a result of the data breach.

Commenting in a statement, Lloyd said:

Google’s business model is based on using personal data to target adverts to consumers and they must ask permission before using this data. The court accepted that people did not give Google permission to use their data in this case, yet slammed the door shut on holding Google to account.

By appealing this decision, we want to give affected consumers the opportunity to get the compensation they are owed and show that collective actions offer a clear route to justice for data protection claims.

We’ve reached out to Google for comment.

DeepMind claims early progress in AI-based predictive protein modelling

Google-owned AI specialist, DeepMind, has claimed a “significant milestone” in being able to demonstrate the usefulness of artificial intelligence to help with the complex task of predicting 3D structures of proteins based solely on their genetic sequence.

Understanding protein structures is important in disease diagnosis and treatment, and could improve scientists’ understanding of the human body — as well as potentially helping to support protein design and bioengineering.

Writing in a blog post about the project to use AI to predict how proteins fold — now two years in — it writes: “The 3D models of proteins that AlphaFold [DeepMind’s AI] generates are far more accurate than any that have come before — making significant progress on one of the core challenges in biology.”

There are various scientific methods for predicting the native 3D structure of a protein (i.e. how the protein chain folds to arrive at its native state) from its sequence of amino-acid residues, as encoded in DNA.

But modelling the 3D structure is a highly complex task, given how many permutations there can be on account of protein folding being dependent on factors such as interactions between amino acids.

There’s even a crowdsourced game (Foldit) that tries to leverage human intuition to predict workable protein forms.

DeepMind says its approach rests upon years of prior research in using big data to try to predict protein structures.

Specifically it’s applying deep learning approaches to genomic data.

“Fortunately, the field of genomics is quite rich in data thanks to the rapid reduction in the cost of genetic sequencing. As a result, deep learning approaches to the prediction problem that rely on genomic data have become increasingly popular in the last few years. DeepMind’s work on this problem resulted in AlphaFold, which we submitted to CASP [Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction] this year,” it writes in the blog post.

“We’re proud to be part of what the CASP organisers have called “unprecedented progress in the ability of computational methods to predict protein structure,” placing first in rankings among the teams that entered (our entry is A7D).”

“Our team focused specifically on the hard problem of modelling target shapes from scratch, without using previously solved proteins as templates. We achieved a high degree of accuracy when predicting the physical properties of a protein structure, and then used two distinct methods to construct predictions of full protein structures,” it adds.

DeepMind says the two methods relied on deep neural networks trained to predict a protein’s properties from its genetic sequence.

“The properties our networks predict are: (a) the distances between pairs of amino acids and (b) the angles between chemical bonds that connect those amino acids. The first development is an advance on commonly used techniques that estimate whether pairs of amino acids are near each other,” it explains.

“We trained a neural network to predict a separate distribution of distances between every pair of residues in a protein. These probabilities were then combined into a score that estimates how accurate a proposed protein structure is. We also trained a separate neural network that uses all distances in aggregate to estimate how close the proposed structure is to the right answer.”
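To make the scoring idea concrete, here’s a minimal illustrative sketch — not DeepMind’s actual code; the network-output format (`predicted_dist_probs`, the distance binning) is an assumption — of how per-pair distance distributions could be combined into a single score for a proposed structure:

```python
import numpy as np

def structure_score(coords, predicted_dist_probs, bin_edges):
    """Score a proposed structure against predicted pairwise distance
    distributions: higher means the structure better matches the predictions.

    coords:               (N, 3) array of proposed residue positions
    predicted_dist_probs: (N, N, B) array; probability over B distance bins
                          for every residue pair (assumed network output)
    bin_edges:            (B+1,) distance bin boundaries in angstroms
    """
    n = coords.shape[0]
    # Pairwise Euclidean distances between residues
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    # Map each distance to its histogram bin
    n_bins = predicted_dist_probs.shape[-1]
    bins = np.clip(np.digitize(dists, bin_edges) - 1, 0, n_bins - 1)
    # Sum log-probabilities over all unordered residue pairs (i < j)
    i, j = np.triu_indices(n, k=1)
    return np.log(predicted_dist_probs[i, j, bins[i, j]] + 1e-9).sum()
```

A candidate structure whose pairwise distances fall in the high-probability bins of the network’s predictions scores higher, giving an objective that structure search can then optimise.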

It then used new methods to try to construct predictions of protein structures, searching known structures that matched its predictions.

“Our first method built on techniques commonly used in structural biology, and repeatedly replaced pieces of a protein structure with new protein fragments. We trained a generative neural network to invent new fragments, which were used to continually improve the score of the proposed protein structure,” it writes.

“The second method optimised scores through gradient descent — a mathematical technique commonly used in machine learning for making small, incremental improvements — which resulted in highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.”

DeepMind describes the results achieved thus far as “early signs of progress in protein folding” using computational methods — claiming they demonstrate “the utility of AI for scientific discovery”.

Though it also emphasizes it’s still early days for the deep learning approach having any kind of “quantifiable impact”.

“Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” it writes. “With a dedicated team focused on delving into how machine learning can advance the world of science, we’re looking forward to seeing the many ways our technology can make a difference.”

Lime tries to back-pedal on VP’s line on why it hired Definers

Scooter startup Lime has sought to back-pedal on an explanation given by its VP of global expansion late last week when asked why it had hired the controversial PR firm Definers Public Affairs.

The opposition research firm, which has ties to the Republican Party, has been at the center of a reputation storm for Facebook, after a New York Times report last month suggested it sought to leverage anti-Semitic smear tactics — by sending journalists a document linking anti-Facebook groups to billionaire George Soros (after he had been critical of Facebook).

Last month it also emerged that other tech firms had engaged Definers — Lime being one of them. And speaking during an on stage interview at TechCrunch Disrupt Berlin last Thursday, Lime’s Caen Contee claimed it had not known Definers would use smear tactics.

Yet, as we reported previously, a Definers employee sent us an email pitch in October in which it wrote suggestively that “Bird’s numbers seem off”.

This pitch did not disclose the PR firm was being paid by Lime.

Asked about this last week Contee claimed not to know anything about Definers’ use of smear tactics, saying Lime had engaged the firm to work on its green and carbon free programs — and to try to understand “what were the levers of opportunity for us to really create the messaging and also to do our own research; understanding the life-cycle; all the pieces that are in a very complex business”.

“As soon as we understood they were doing some of these things we parted ways and finished our program with them,” he also said.

However, following the publication of our article reporting on his comments, a Lime spokesperson emailed with what the subject line billed as a “statement for your latest story”, teeing this up by writing: “Hoping you can update the piece”.

The statement went on to claim that Contee “misspoke” and “was inaccurate in his description of [Definers] work”.

However it did not specify exactly what Contee had said that was incorrect.

A short while later the same Lime spokesperson sent us another version of the statement with updated wording, now entirely removing the reference to Contee.

You can read both statements below.

As you read them, note how the second version of the statement seeks to obfuscate the exact source of the claimed inaccuracy, using wording that seeks to shift blame in a way that a casual reader might interpret as external and outside the company’s control…

Statement 1:

Our VP of Global Expansion misspoke at TechCrunch Disrupt regarding our relationship with Definers and was inaccurate in his description of their work. As previously reported, we engaged them for a three month contract to assist with compiling media coverage reports, limited public relations and fact checking, and we are no longer working with Definers.

Statement 2:

What was presented at Disrupt regarding our relationship with Definers and the description of their work was inaccurate. As previously reported, we engaged them for a three month contract to assist with compiling media coverage reports, limited public relations and fact checking, and we are no longer working with Definers.

Despite the Lime spokesperson’s hope for a swift update to our report, they did not respond when we asked for clarification on what exactly Contee had said that was “inaccurate”.

A claim of inaccuracy that does not provide any detail of the substance upon which the claim rests smells a lot like spin to us.

Three days later we’re still waiting to hear the substance of Lime’s claim because it has still not provided us with an explanation of exactly what Contee said that was ‘wrong’.

Perhaps Lime was hoping for a silent edit to the original report to provide some camouflaging fuzz atop a controversy of the company’s own making. i.e. that a PR firm it hired tried to smear a rival.

If so, oopsy.

Of course we’ll update this report if Lime does get in touch to provide an explanation of what it was that Contee “misspoke”. Frankly we’re all ears at this point.

DoJ charges Autonomy founder with fraud over $11BN sale to HP

U.K. entrepreneur turned billionaire investor Mike Lynch has been charged with fraud in the U.S. over the 2011 sale of his enterprise software company.

Lynch sold Autonomy, the big data company he founded back in 1996, to computer giant HP for around $11 billion some seven years ago.

But within a year around three-quarters of the value of the business had been written off, with HP accusing Autonomy’s management of accounting misrepresentations and disclosure failures.

Lynch has always rejected the allegations, and after HP sought to sue him in U.K. courts he countersued in 2015.

Meanwhile, the U.K.’s own Serious Fraud Office dropped an investigation into the Autonomy sale in 2015 — finding “insufficient evidence for a realistic prospect of conviction.”

But now the DoJ has filed charges in a San Francisco court, accusing Lynch and other senior Autonomy executives of making false statements that inflated the value of the company.

They face 14 counts of conspiracy and fraud, according to Reuters — charges that carry a maximum penalty of 20 years in prison.

We’ve reached out to Lynch’s fund, Invoke Capital, for comment on the latest development.

The BBC has obtained a statement from his lawyers, Chris Morvillo of Clifford Chance and Reid Weingarten of Steptoe & Johnson, which describes the indictment as “a travesty of justice.”

The statement also claims Lynch is being made a scapegoat for HP’s failures, framing the allegations as a business dispute over the application of U.K. accounting standards. 

Two years ago we interviewed Lynch onstage at TechCrunch Disrupt London and he mocked the morass of allegations still swirling around the acquisition as “spin and bullshit.”

Following the latest developments, the BBC reports that Lynch has stepped down as a scientific adviser to the U.K. government.

“Dr. Lynch has decided to resign his membership of the CST [Council for Science and Technology] with immediate effect. We appreciate the valuable contribution he has made to the CST in recent years,” a government spokesperson told it.

Edtech unicorn Udacity lays off 125 people in global strategy shift

Learning platform Udacity is to axe a big chunk of its workforce — which looks to be between a fifth and a quarter — as part of a global reorganization effort, according to VentureBeat.

It reports the company is cutting 125 staff from now through early 2019.

We’ve also heard from a source that around 100 Udacity staff have been laid off, with affected employees mostly in content, video and services.

We’ve reached out to the company for comment.

According to VentureBeat’s report the firm will close its office in São Paulo, Brazil, with the loss of 70 employees. The remaining cuts will come from departments in the US related to creating courses, it adds.

Two months ago we broke the news the company had quietly let go of 5% of its global workforce.

VentureBeat says the layoffs will now leave Udacity with 330 employees.

The edtech firm was one of the early providers of MOOCs, before low pass rates seemingly triggered a reprogramming of its business model, with Udacity refocusing on the tech space — offering so-called ‘nanodegrees’ in topics like AI and blockchain.

After that shift co-founder Sebastian Thrun stepped away as CEO, handing over to Vishal Makhijani. However the latest cuts come hard on the heels of a reversal: VentureBeat notes Thrun took back day-to-day operations and the exec chairman role last month, following Makhijani’s departure.

Since then the board of directors and Thrun have voted to downsize portions of the company, it adds.

In another notable reversal this year, Udacity suspended a money-back guarantee for people who completed a Plus tier nanodegree and couldn’t find a job, pressing pause just a few months after it announced the guarantee.

Thrun told VentureBeat the guarantee remains on pause — with no decision yet taken whether to cancel it outright.

TechCrunch’s Kirsten Korosec contributed to this report

Google faces GDPR complaint over “deceptive” location tracking

A group of European consumer watchdogs has filed a privacy complaint against Google — arguing the company uses manipulative tactics in order to keep tracking web users’ location, for ad-targeting purposes.

The consumer organizations are making the complaint under the EU’s new data protection framework, GDPR, which regulators can use to levy major fines for compliance breaches — of up to 4% of a company’s global annual turnover.

Under GDPR a consent-based legal basis for processing personal data (e.g. a person's location) must be specific, informed and freely given.

In their complaint the groups, which include Norway's Consumer Council, argue that Google does not have a proper legal basis to track users through "Location History" and "Web & App Activity" — settings which are integrated into all Google accounts, and which, for users of Android-based smartphones, they assert are particularly difficult to avoid.

The Google mobile OS remains the dominant smartphone platform globally, as well as across Europe.

“Google is processing incredibly detailed and extensive personal data without proper legal grounds, and the data has been acquired through manipulation techniques,” said Gro Mette Moen, acting head of the Norwegian Consumer Council’s digital services unit, in a statement.

“When we carry our phones, Google is recording where we go, down to which floor we are on and how we are moving. This can be combined with other information about us, such as what we search for, and what websites we visit. Such information can in turn be used for things such as targeted advertising meant to affect us when we are receptive or vulnerable.”

Responding to the complaint, a Google spokesperson sent TechCrunch the following statement:

Location History is turned off by default, and you can edit, delete, or pause it at any time. If it’s on, it helps improve services like predicted traffic on your commute. If you pause it, we make clear that — depending on your individual phone and app settings — we might still collect and use location data to improve your Google experience. We enable you to control location data in other ways too, including in a different Google setting called Web & App Activity, and on your device. We’re constantly working to improve our controls, and we’ll be reading this report closely to see if there are things we can take on board.

Earlier this year the Norwegian watchdog produced a damning report calling out dark pattern design tricks being deployed by Google and Facebook meant to manipulate users by nudging them towards “privacy intrusive options”. It also examined Microsoft’s consent flows but judged the company to be leaning less heavily on such unfair tactics.

The Google-targeted GDPR complaint, which draws on the earlier report, calls out a number of underhand techniques: a deceptive click-flow, with the groups noting that a “location history” setting can be enabled during Android set-up without a user being aware of it; key settings being both buried in menus (hidden) and enabled by default; users being presented at the decision point with insufficient and misleading information; repeat nudges to enable location tracking even after a user has previously turned it off; and the bundling of “invasive location tracking” with other unrelated Google services, such as photo sorting by location.

GDPR remains in the early implementation phase — just six months since the regulation came into force across Europe. But a large chunk of the first wave of complaints have been focused on consent, according to Europe’s data protection supervisor, who also told us in October that more than 42,000 complaints had been lodged in total since the regulation came into force.

Where Google is concerned, the location complaint is by no means the only GDPR — or GDPR consent-related — complaint it’s facing.

Another complaint, filed back in May, also by a consumer-focused organization, took aim at what it dubbed the use of “forced consent” by Google and Facebook — pointing out that the companies were offering users no choice but to have their personal data processed in order to use certain services, even though the GDPR requires consent to be freely given.