How Russia’s online influence campaign engaged with millions for years

Russian efforts to influence U.S. politics and sway public opinion were consistent and, at least in terms of engaging target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.

The report, which you can read here, was published today but given to some outlets over the weekend. It summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.

A clearer picture

If you’ve only checked into this narrative occasionally during the last couple years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.

If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.

Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content placed on these social platforms. This had the dual effect of increasing the affected number — to over a hundred million on Facebook alone — while leaving “seeing” easy to downplay in importance; after all, how many things do you “see” on the internet every day?

The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.

Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.

The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted, and all generated significant engagement, as the report’s breakdown of the above stats shows.

Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.

Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or giving deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but certainly they had reach.

Examples of posts targeting black Americans.

In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”

Instagram on the rise

Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.

In fact, Instagram was receiving as much content as Facebook, or more, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).

A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.

IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that these numbers don’t match the report’s table exactly because the time periods differ slightly.

Twitter posts, while extremely numerous, were quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing botnets that had previously chimed in on politics elsewhere and in other languages.

In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”

Google, too, is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube were not just stingy with their data, but appear to have actively attempted to stymie analysis:

Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the usable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.

This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence their conclusions are limited. Generally speaking when a tech company does this, it means that the data they could provide would tell a story they don’t want heard.

For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”

In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately, it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted.

Desperately seeking transparency

In their conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.

“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”

“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”

Predictably, even social networks’ moderation policies became targets for propagandizing.

Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.

Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.

“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)

If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.

3D-printed heads let hackers – and cops – unlock your phone

There’s a lot you can make with a 3D printer: prosthetics, corneas, firearms — even an Olympic-standard luge.

You can even 3D print a life-size replica of a human head — and not just for Hollywood. Forbes reporter Thomas Brewster commissioned a 3D printed model of his own head to test the face unlocking systems on a range of phones — four Android models and an iPhone X.

Bad news if you’re an Android user: only the iPhone X defended against the attack.

Gone, it seems, are the days of the trusty passcode, which many still find cumbersome, fiddly, and inconvenient — especially when you unlock your phone dozens of times a day. Phone makers are taking to more convenient unlock methods. Even though Google’s latest Pixel 3 shunned facial recognition, many Android models — including popular Samsung devices — are relying more on your facial biometrics. In its latest models, Apple effectively killed its fingerprint-reading Touch ID in favor of its newer Face ID.

But that poses a problem for your data if a mere 3D-printed model can trick your phone into giving up your secrets. That makes life much easier for hackers, who have no rulebook to follow. But what about the police or the feds, who do?

It’s no secret that biometrics — your fingerprints and your face — aren’t protected under the Fifth Amendment. That means police can’t compel you to give up your passcode, but they can forcibly depress your fingerprint to unlock your phone, or hold it to your face while you’re looking at it. And the police know it — it happens more often than you might realize.

But there’s also little in the way of stopping police from 3D printing or replicating a set of biometrics to break into a phone.

“Legally, it’s no different from using fingerprints to unlock a device,” said Orin Kerr, professor at USC Gould School of Law, in an email. “The government needs to get the biometric unlocking information somehow,” by either the finger pattern shape or the head shape, he said.

Although a warrant “wouldn’t necessarily be a requirement” to get the biometric data, one would be needed to use the data to unlock a device, he said.

Jake Laperruque, senior counsel at the Project On Government Oversight, said it was doable but isn’t the most practical or cost-effective way for cops to get access to phone data.

“A situation where you couldn’t get the actual person but could use a 3D print model may exist,” he said. “I think the big threat is that a system where anyone — cops or criminals — can get into your phone by holding your face up to it is a system with serious security limits.”

The FBI alone has thousands of devices in its custody — even after admitting the number of encrypted devices is far lower than first reported. With the ubiquitous nature of surveillance, now even more powerful with high-resolution cameras and facial recognition software, it’s easier than ever for police to obtain our biometric data as we go about our everyday lives.

Those cheering on the “death of the password” might want to think again. Passcodes are still the only thing keeping your data safe from the law.

Google agrees not to sell facial recognition tech, citing abuse potential

In recent months, pressure has been mounting for major tech firms to develop strong policies regarding facial recognition. Microsoft has helped lead the way on that front, promising to put in place stricter policies, calling for greater regulation and asking fellow companies to follow suit.

Hidden toward the end of a blog post about using artificial intelligence to benefit health clinics in Asia, Google SVP Kent Walker affirmed the company’s commitment not to sell facial recognition APIs. The executive cites concerns over how the technology could be abused.

“[F]acial recognition technology has benefits in areas like new assistive technologies and tools to help find missing persons, with more promising applications on the horizon,” Walker writes. “However, like many technologies with multiple uses, facial recognition merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes. We continue to work with many organizations to identify and address these challenges, and unlike some other companies, Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions.”

In an interview this week, CEO Sundar Pichai addressed similar growing concerns around AI ethics. “I think tech has to realize it just can’t build it and then fix it,” he told The Washington Post. “I think that doesn’t work,” he said, adding that artificial intelligence could ultimately prove “far more dangerous than nukes.”

The ACLU, which has offered sharp criticism over privacy and racial profiling concerns, lauded the statement, though in the same breath the organization promised to continue applying pressure on these large companies.

“We will continue to put Google’s feet to the fire to make sure it doesn’t build or sell a face surveillance product that violates civil and human rights,” ACLU tech director Nicole Ozer said in a statement. “We also renew our call on Amazon and Microsoft to not provide dangerous face surveillance to the government. Companies have a responsibility to make sure their products can’t be used to attack communities and harm civil rights and liberties — it’s past time all companies own up to that responsibility.”

The organization has offered particularly sharp criticism against Amazon for its Rekognition software. This week, it also called out the company’s patent application for a smart doorbell that uses facial recognition to identify “suspicious” visitors.

Facebook bug exposed up to 6.8M users’ unposted photos to apps

Reset the “days since the last Facebook privacy scandal” counter, as Facebook has just revealed that a Photo API bug gave app developers too much access to the photos of up to 6.8 million users. The bug allowed apps users had approved to pull their timeline photos to also receive their Facebook Stories, Marketplace photos and, most worryingly, photos they’d uploaded to Facebook but never shared. Facebook says the bug ran for 12 days, from September 13th to September 25th.

Facebook initially didn’t disclose when it discovered the bug, but in response to TechCrunch’s inquiry, a spokesperson says that it was discovered and fixed on September 25th. They say it took time for the company to investigate which apps and people were impacted, and to build and translate the warning notification it will send impacted users. The delay could put Facebook at risk of GDPR fines for not promptly disclosing the issue within 72 hours; those fines can run up to €20 million or 4 percent of annual global revenue.

By way of apology, Facebook offered merely a glib “We’re sorry this happened.” It will provide tools next week for app developers to check if they were impacted, and it will work with them to delete photos they shouldn’t have. The company plans to notify people it suspects may have been impacted by the bug via a Facebook notification that will direct them to the Help Center, where they’ll see if they used any apps impacted by the bug. It’s recommending users log into apps to check if they have wrongful photo access. Here’s a look at a mockup of the warning notification users will see.

The privacy failure will further weaken confidence that Facebook is a responsible steward of our private data. It follows Facebook’s massive security breach that allowed hackers to scrape 30 million people’s information back in September. There was also November’s bug allowing websites to read users’ Likes, October’s bug that mistakenly deleted people’s Live videos, and May’s bug that changed people’s status update composer privacy settings. It increasingly looks like the social network has gotten too big for the company to secure. Curiously, Facebook discovered the bug on September 25th, the same day as its 30 million user breach. Perhaps it kept a lid on the situation in hopes of not creating an even bigger scandal.

That it keeps photos you partially uploaded but never posted in the first place is creepy, but the fact that these could be exposed to third-party developers is truly unacceptable. And that Facebook seems so tired of its failings that it couldn’t put forward even a seemingly heartfelt apology is telling. The company’s troubles are souring not only users on Facebook, but employees and the tech industry at large as well. CEO Mark Zuckerberg told Congress earlier this year that “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.” What does Facebook deserve at this point?

Google CEO Sundar Pichai thinks Android users know how much their phones are tracking them

Google CEO Sundar Pichai thinks Android users have a good understanding of the volume of data Google collects on them when they agree to use the Android mobile operating system. The exec, who is testifying today in front of the House Judiciary Committee for a hearing entitled “Transparency & Accountability: Examining Google and its Data Collection, Use and Filtering Practices,” claimed that users are in control of the information Google has on them.

“For Google services, you have a choice of what information is collected, and we make it transparent,” Pichai said, in response to Chairman of the House Judiciary Committee Rep. Bob Goodlatte (R-VA)’s questioning.

The reality is that most people don’t read user agreements in full, and aren’t fully aware of what data their phones and apps are able to access. Even on Apple’s platform, known to be fairly privacy-forward, apps have been collecting user data – including location – and selling it to third parties, as noted by a recent New York Times investigation.

Google’s defense on the data collection front is similar to Facebook’s – that is, Pichai responded that Google provides tools that put users in control.

But do they actually use them?

“It’s really important for us that average users are able to understand it,” said Pichai, stating that users do understand the user agreement for Android OS.

“We actually…remind users to do a privacy checkup, and we make it very obvious every month. In fact, in the last 28 days, 160 million users went to their My Account settings, where they can clearly see what information we have – we actually show it back to them. We give clear toggles, by category, where they can decide whether that information is collected, stored, or – more importantly – if they decide to stop using it, we work hard to make it possible for users to take their data with them,” he said.

That 160 million users sounds like a large number, but at Google’s scale, where numerous products have over a billion users apiece, it’s not as big as it seems.

In addition, it has become clear that simply “opting out” of Google’s data collection methods is not always enough. For example, earlier this year, it was discovered that Google was continuing to track users’ location even when users had explicitly turned the Location History setting off – a clear indication they did not want their data collected or shared.

Further in the hearing, Pichai was asked if Google could improve its user dashboard and tools to better teach people how to protect their privacy, including turning off data collection and location tracking.

“There’s complexity,” Pichai said, but admitted this is “something I do think we can do better.”

“We want to simplify it, and make it easier for average users to navigate these settings,” he continued. “It’s something we are working on.”

Microsoft calls on companies to adopt a facial recognition code of conduct

Over the summer, Microsoft President Brad Smith called for governments to take a closer look at how facial detection technology is being implemented across the globe. This week, he returned with a similar message — only this time the executive is calling on fellow technology companies to help address the myriad issues around the technology before it becomes too pervasive.

It’s easy enough to suggest that the ship has sailed. After all, facial recognition is already fairly ubiquitous on everything from Facebook to Apple Animojis. But if the past year has taught us anything, it’s that the governments of the world can’t wait to implement the tech in a broader way — and plenty of tech firms are more than happy to help.

Smith points to a trio of potential pitfalls for the tech: biased outcomes, invasion of privacy and mass surveillance. The ACLU has been raising red flags on that first point for some time, asking Congress to implement a moratorium on surveillance technologies. The group found that Amazon’s Rekognition software wrongly associated headshots of members of Congress with criminal mugshots.

The new letter finds Microsoft frustrated at regulatory foot-dragging, instead placing the burden of tech regulation on the companies themselves. “We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition,” writes Smith. “And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.”

In other words, as Smith puts it, “you can’t put the genie back in the bottle.” So Microsoft is looking to set the tone here, committing to its own code, which it plans to implement by the first quarter of next year.

The piece details a number of safeguards and vetting that companies can implement to help avoid some of the more troubling pitfalls here. Among the recommendations are some fairly straightforward suggestions, like transparency, third-party testing, technology reviews by humans and properly identifying where and when the technology is being implemented. All of the above honestly sound pretty straightforward and doable.

Microsoft is set to follow up these suggestions with a more detailed document arriving next week that will more thoroughly detail its plans, while soliciting suggestions from people and groups about how to more broadly implement them.

Australia rushes its ‘dangerous’ anti-encryption bill into parliament, despite massive opposition

Australia’s controversial anti-encryption bill is one step closer to becoming law, after the country’s two leading but sparring political parties struck a deal to pass the legislation.

The bill, in short, grants Australian police greater powers to issue “technical notices” — a nice way of forcing companies — even websites — operating in Australia to help the government hack, implant malware, undermine encryption or insert backdoors.

If companies refuse, they could face financial penalties.

Lawmakers say that the law is only meant to target serious criminals — sex offenders, terrorists, and those suspected of homicide and drug offenses. Critics have pointed out that the law could allow mission creep into less serious offenses, such as copyright infringement, despite promises that compelled assistance requests are signed off by two senior government officials.

In all, the proposed provisions have been widely panned by experts, who argue that the bill is vague and contradictory, but powerful, and still contains “dangerous loopholes.” And, critics warn (as they have for years) that any technical backdoors that allow the government to access end-to-end encrypted messages could be exploited by hackers.

But that’s unlikely to get in the way of the bill’s near-inevitable passing.

Australia’s ruling coalition government and its opposition Labor party agreed to have the bill put before parliament this week before its summer break.

Several lawmakers look set to reject the bill, criticizing the government’s efforts to rush through the bill before the holiday.

“Far from being a ‘national security measure’ this bill will have the unintended consequence of diminishing the online safety, security and privacy of every single Australian,” said Jordon Steele-John, a Greens senator, in a tweet.

Tim Watts, a Labor member of Parliament for Gellibrand, tweeted a long thread slamming the government’s push to get the legislation passed before Christmas, despite more than 15,000 submissions to a public consultation, largely decrying the bill’s content.

The tech community — arguably the most affected by the bill’s passing — has also slammed the bill. Apple called it “dangerously ambiguous”, while Cisco and Mozilla joined a chorus of other tech firms calling for the government to dial back the provisions.

But the rhetoric isn’t likely to dampen the rush by the global surveillance pact — the U.S., U.K., Canada, Australia and New Zealand, the so-called “Five Eyes” group of nations — to push for greater access to encrypted data. Only earlier this year, the governmental coalition said in no uncertain terms that it would force backdoors if companies weren’t willing to help their governments spy.

Australia’s likely to pass the bill — but when exactly remains a mystery. The coalition government has to call an election in less than six months, putting the anti-encryption law on a timer.

Oath agrees to pay $5M to settle charges it violated children’s privacy

TechCrunch’s Verizon-owned parent, Oath, an ad tech division formed from the merger of AOL and Yahoo, has agreed to pay around $5 million to settle charges that it violated a federal children’s privacy law.

The penalty is said to be the largest ever issued under COPPA.

The New York Times reported the story yesterday, saying the settlement will be announced by the New York attorney general’s office today.

At the time of writing the AG’s office could not be reached for comment.

We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which the company writes: “We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online.”

The spokesman also neither confirmed nor disputed the contents of the NYT report.

According to the newspaper, which cites the as-yet unpublished settlement documents, AOL, via its ad exchange, helped place adverts on hundreds of websites that it knew were targeted at children under 13 — such as Roblox.com and Sweetyhigh.com.

The ads were placed using children’s personal data, including cookies and geolocation, which the attorney general’s office said violated the Children’s Online Privacy Protection Act (COPPA) of 1998.

The NYT quotes attorney general Barbara D. Underwood describing AOL’s actions as “flagrantly” in violation of COPPA.

The $5M fine for Oath comes at a time when scrutiny is being dialled up on online privacy and ad tech generally, and around kids’ data specifically — with rising concern about how children are being tracked and ‘datafied’ online.

Earlier this year, a coalition of child advocacy, consumer and privacy groups in the US filed a complaint with the FTC asking it to investigate Google-owned YouTube over COPPA violations — arguing that while the site’s terms claim it’s aimed at children older than 13, content on YouTube is clearly targeting younger children, including by hosting cartoon videos, nursery rhymes, and toy ads.

COPPA requires that companies provide direct notice to parents and obtain verifiable parental consent before collecting information online from children under 13.

Consent must also be sought for using or disclosing personal data from children. Or indeed for targeting kids with adverts linked to what they do online.

Personal data under COPPA includes persistent identifiers (such as cookies) and geolocation information, as well as data such as real names or screen names.

In the case of Oath, the NYT reports that even though AOL’s policies technically prohibited the use of its display ad exchange to auction ad space on kids’ websites, the company did so anyway, according to settlement documents covering the ad tech firm’s practices between October 2015 and February 2017.

According to these documents, an account manager for AOL in New York repeatedly — and erroneously — told a client, Playwire Media (which represents children’s websites such as Roblox.com), that AOL’s ad exchange could be used to sell ad space while complying with COPPA.

Playwire then used the exchange to place more than a billion ads on space that should have been covered by COPPA, the newspaper adds.

The paper also reports that AOL (via Advertising.com) bought ad space on websites flagged as COPPA-covered from other ad exchanges.

It says Oath has since introduced technology to identify when ad space is deemed to be covered by COPPA and to “adjust its practices” accordingly, again citing the settlement documents.

As part of the settlement the ad tech division of Verizon has agreed to create a COPPA compliance program, to be overseen by a dedicated executive or officer; and to provide annual training on COPPA compliance to account managers and other employees who work with ads on kids’ websites.

Oath also agreed to destroy personal information it has collected from children.

It’s not clear whether the censured practices ended in February 2017 or continued until more recently. We asked Oath for clarification but it did not respond to the question.

It’s also not clear whether AOL was tracking and targeting adverts at children in the EU. If Oath was doing so but stopped before May 25 this year, it should avoid the possibility of any penalty under Europe’s tough new privacy framework, GDPR, which came into force on that date — beefing up protection around children’s data by setting the age at which children can consent to their own data being processed at between 13 and 16 years old, depending on the member state.

GDPR also steeply hikes penalties for privacy violations (up to a maximum of 4% of global annual turnover).

Prior to the regulation, a European data protection directive was in force across the bloc, but it’s GDPR that has strengthened protections in this area with its new provisions on children’s data.

‘Google You Owe Us’ claimants aren’t giving up on UK Safari workaround suit

Lawyers behind a UK class action-style compensation suit against Google for privacy violations have filed an appeal against a recent High Court ruling that blocked the proceeding.

In October Mr Justice Warby ruled the case could not proceed on legal grounds, finding the claimants had not demonstrated a basis for bringing a compensation claim.

The case relates to the so-called ‘Safari workaround’ Google used between 2011 and 2012 to override iPhone privacy settings and track users without consent.

The civil legal action — whose claimants refer to themselves as ‘Google You Owe Us’ — was filed last year by one named iPhone user, Richard Lloyd, the former director of consumer group Which?, who is seeking, via a representative legal action, to stand for millions of UK users whose Safari settings the complaint alleges were similarly ignored by Google.

Lawyers for the claimants argued that sensitive personal data such as iPhone users’ political affiliation, sexual orientation, financial situation and more had been gathered by Google and used for targeted advertising without their consent.

Google You Owe Us proposed the sum of £750 per claimant for the company’s improper use of people’s data — which could result in a bill of up to £3BN (based on the suit’s intent to represent ~4.4 million UK iPhone users).

However, UK law requires that claimants demonstrate they suffered damage as a result of a violation of the relevant data protection rules.

And in his October ruling Justice Warby found that the “bare facts pleaded in this case” were not “individualised” — hence he saw no case for damages.

He also ruled against the case proceeding on another legal point, related to defining a class for the case — finding “the essential requirements for a representative action are absent” because he said individuals in the group do not have the “same interest” in the claim.

Lodging its appeal today in the Court of Appeal, Google You Owe Us described the High Court judgment as disappointing, and said it highlights the barriers that remain for consumers seeking to use collective actions as a route to redress in England and Wales.

In the US, meanwhile, Google settled with the FTC over a similar cookie tracking issue back in 2012 — agreeing to pay $22.5M in that instance.

Countering Justice Warby’s earlier suggestion that affected class members in the UK case did not care about their data being taken without permission, Google You Owe Us said, on the contrary, affected class members have continued to show their support for the case on Facebook — noting that more than 20,000 have signed up for case updates.

For the appeal, the legal team will argue that the High Court judgment was incorrect in stating the class had not suffered damage within the meaning of the UK’s Data Protection Act, and that the class had not all suffered in the same way as a result of the data breach.

Commenting in a statement, Lloyd said:

Google’s business model is based on using personal data to target adverts to consumers and they must ask permission before using this data. The court accepted that people did not give Google permission to use their data in this case, yet slammed the door shut on holding Google to account.

By appealing this decision, we want to give affected consumers the opportunity to get the compensation they are owed and show that collective actions offer a clear route to justice for data protection claims.

We’ve reached out to Google for comment.