Even the IAB warned adtech risks breaching EU privacy rules

A privacy complaint targeting the behavioral advertising industry has gained a new piece of evidence that shows the Interactive Advertising Bureau (IAB) casting doubt on whether it’s possible to obtain informed consent from web users for the programmatic ad industry’s real-time bidding (RTB) system to broadcast their personal data.

The adtech industry functions by harvesting web users’ data, packaging individual identifiers and browsing data in bid requests that are systematically shared with third parties in order to solicit and scale advertiser bids for the user’s attention.

However, a series of RTB complaints — filed last fall by Jim Killock, director of the Open Rights Group; Dr Johnny Ryan of the privacy-focused browser Brave; and Michael Veale, a data and policy researcher at University College London — alleges this causes “wide-scale and systemic breaches” of European Union data protection rules.

So far complaints have been filed with data protection agencies in Ireland, the UK and Poland, though the intent is for the action to expand across the EU given that behavioral advertising isn’t region specific.

Google and the IAB set the RTB specifications used by the online ad industry and are thus the main targets here, with complainants advocating for amendments to the specification to bring the system into compliance with the bloc’s data protection regime.

We’ve covered the complaint before, including an earlier submission showing the highly sensitive inferences that can be included in bid requests. But documents obtained by the complainants via freedom of information request and newly published this week show the IAB itself warned in 2017 that the RTB system risks falling foul of the bloc’s privacy rules, and specifically the rules around consent under the EU’s General Data Protection Regulation (GDPR), which came into force last May.

The complainants have published the latest evidence on a new campaign website.

At the very least the admission looks awkward for the online ad industry body.

“Incompatible with consent under GDPR”

In a June 2017 email to senior personnel at the European Commission — now being used as evidence in the complaints — IAB Europe CEO Townsend Feehan writes that she wants to expand on concerns voiced at a roundtable session about the Commission’s ePrivacy proposals, which she claims could “mean the end of the online advertising business model”.

Feehan attached an 18-page document to the email in which the IAB can be seen lobbying against the Commission’s ePrivacy proposal — claiming it will have “serious negative impacts on the digital advertising industry, on European media, and ultimately on European citizens’ access to information and other online content and services”.

The IAB goes on to push for specific amendments to the proposed text of the regulation. (As we’ve written before, a major lobbying effort has blown up since GDPR was agreed to try to block updating the ePrivacy rules, which operate alongside it, covering marketing and electronic communications as well as cookies and other online tracking technologies.)

As it lobbies to water down ePrivacy rules, the IAB suggests it’s “technically impossible” for informed consent to function in a real-time bidding scenario — writing the following, in a segment entitled ‘Prior information requirement will “break” programmatic trading’:

As it is technically impossible for the user to have prior information about every data controller involved in a real-time bidding (RTB) scenario, programmatic trading, the area of fastest growth in digital advertising spend, would seem, at least prima facie, to be incompatible with consent under GDPR – and, as noted above, if a future ePrivacy Regulation makes virtually all interactions with the Internet subject solely to the consent legal basis, and consent is unavailable, then there will be no legal basis for such processing to take place or for media to monetise their content in this way.

The notion that it’s impossible to obtain informed consent from web users for processing their personal data prior to doing so is important because the behavioral ad industry, as it currently functions, includes personal data in bid requests that it systematically broadcasts to what can be thousands of third party companies.

Indeed, the crux of the RTB complaints is that personal data should be stripped out of these requests — and only contextual information broadcast for targeting ads — precisely because the current system is systematically breaching the rights of European web users by failing to obtain their consent for personal data to be sucked out and handed over to scores of unknown entities.

In its lobbying efforts to knock the teeth out of the ePrivacy Regulation the IAB can here be seen making a similar point — when it writes that programmatic trading “would seem, at least prima facie, to be incompatible with consent under GDPR”. (Albeit, injecting some of its own qualifiers into the sentence.)

The irony is that the IAB was deploying a seemingly pro-privacy argument, that consent cannot work for RTB, in an effort to dilute Europeans’ privacy rights.

Despite its own claimed reservations that there was no technical fix to obtain consent for programmatic trading under GDPR, the IAB nonetheless went on to launch a technical mechanism for managing — and, it claimed, complying with — GDPR consent requirements in April 2018, when it urged the industry to use its GDPR “Transparency & Consent Framework”.

But in another piece of evidence obtained by the group of individuals behind the RTB complaints — an IAB document, dated May 2018, intended for publishers making use of this framework — the IAB also acknowledges that: “Publishers recognize there is no technical way to limit the way data is used after the data is received by a vendor for decisioning/bidding on/after delivery of an ad”.

In a section on liability, the IAB document lays out other publisher concerns that each bid request assumes “indiscriminate rights for vendors” — and that “surfacing thousands of vendors with broad rights to use data without tailoring those rights may be too many vendors/permissions”.

So again, er, awkward.

Another piece of evidence now attached to the RTB complaints shows a set of sample bid requests from the IAB and Google’s documentation for users of their systems — with annotations by the complainants showing exactly how much personal data gets packaged up and systematically shared.

This can include a person’s latitude and longitude GPS coordinates; IP address; device-specific identifiers; various ID codes; inferred interests (which could include highly sensitive personal data); and the current webpage they’re looking at.
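To make that concrete, here’s a minimal sketch of the shape of such a bid request, loosely modeled on field names documented in the public OpenRTB 2.x spec. All values are invented for illustration, and real requests carry many more fields.

```python
# A minimal, illustrative OpenRTB-style bid request (all values invented).
# Real requests are broadcast to every bidder the exchange works with.
bid_request = {
    "id": "example-auction-id",            # unique ID for this ad auction
    "site": {
        "page": "https://example.com/health/depression-support",  # current page
    },
    "device": {
        "ip": "203.0.113.7",               # user's IP address
        "ifa": "example-device-ad-id",     # device advertising identifier
        "geo": {"lat": 53.41, "lon": -2.98},  # latitude/longitude coordinates
    },
    "user": {
        "id": "example-exchange-user-id",  # the exchange's ID code for this person
        "data": [
            # inferred interest segments -- potentially highly sensitive
            {"segment": [{"name": "interest", "value": "mental_health"}]},
        ],
    },
}
```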

“The fourteen sample bid requests further prove that very personal data are contained in bid requests,” the complainants argue.

They have also included an estimated breakdown of seven major ad exchanges’ daily bid requests — Index Exchange, OpenX, Rubicon Project, Oath/AOL*, AppNexus, Smaato, Google DoubleClick — showing they collectively broadcast “hundreds of billions of bid requests per day”, to illustrate the scale of data being systematically broadcast by the ad industry.

“This suggests that the New Economics Foundation’s estimate in December that bid requests broadcast data about the average UK internet user 164 times a day was a conservative estimate,” they add.
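A rough back-of-envelope calculation of our own shows why: assuming on the order of 60 million UK internet users, the per-user figure already implies billions of daily broadcasts for the UK alone.

```python
# Back-of-envelope scale check (our assumptions, not the complainants' figures).
uk_internet_users = 60_000_000         # assumed order of magnitude
broadcasts_per_user_per_day = 164      # New Economics Foundation estimate
uk_daily_broadcasts = uk_internet_users * broadcasts_per_user_per_day
print(f"{uk_daily_broadcasts:,}")      # 9,840,000,000 -- ~10 billion/day, UK alone
```

Set that against hundreds of billions of bid requests per day across just seven exchanges and the per-user figure does indeed look modest.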

The IAB has responded to the new evidence by couching the complainants’ claims as “false” and “intentionally damaging to the digital advertising industry and to European digital media”.

Regarding its 2017 document, in which it wrote that it was “technically impossible” for an Internet user to have prior information about every data controller involved in an RTB “scenario”, the IAB responds that “that was true at the time, but has changed since” — pointing to its Transparency & Consent Framework (TCF) as the claimed fix, and further claiming it “demonstrates that real-time bidding is certainly not ‘incompatible with consent under GDPR’”.

Here are the relevant paras of the IAB’s rebuttal on that:

The TCF provides a way to provide transparency to users about how, and by whom, their personal data is processed. It also enables users to express choices. Moreover, the TCF enables vendors engaged in programmatic advertising to know ahead of time whether their own and/or their partners’ transparency and consent status allows them to lawfully process personal data for online advertising and related purposes. IAB Europe’s submission to the European Commission in April 2017 showed that the industry needed to adapt to meet higher standards for transparency and consent under the GDPR. The TCF demonstrates how complex challenges can be overcome when industry players come together. But most importantly, the TCF demonstrates that real-time bidding is certainly not “incompatible with consent under GDPR”.

The OpenRTB protocol is a tool that can be used to determine which advertisement should be served on a given web page at a given time. Data can inform that determination. Like all technology, OpenRTB must be used in a way that complies with the law. Doing so is entirely possible and greatly facilitated by the IAB Europe Transparency & Consent Framework, whose whole raison d’être is to help ensure that the collection and processing of user data is done in full compliance with EU privacy and data protection rules.
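Mechanically, the TCF travels alongside the bid request as a consent signal that each vendor is supposed to check before processing personal data. Here is a deliberately simplified sketch of that vendor-side check; the decoded_consent structure and may_process helper are hypothetical stand-ins, since real implementations decode the IAB’s compact binary consent string and consult its global vendor list.

```python
# Hypothetical, simplified vendor-side TCF check (illustration only).
# Assumes the IAB consent string has already been decoded into plain sets.
def may_process(decoded_consent: dict, vendor_id: int, purpose_id: int) -> bool:
    """Return True only if the user consented to this vendor for this purpose."""
    return (
        vendor_id in decoded_consent.get("allowed_vendors", set())
        and purpose_id in decoded_consent.get("allowed_purposes", set())
    )

decoded_consent = {
    "allowed_vendors": {32, 52, 91},   # vendor IDs the user consented to
    "allowed_purposes": {1, 3},        # e.g. storage, ad selection
}

if may_process(decoded_consent, vendor_id=52, purpose_id=3):
    pass  # this vendor may lawfully bid using the request's personal data
```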

The IAB goes on to couch the complaints as stemming from a “hypothetical possibility for personal data to be processed unlawfully in the course of programmatic advertising processes”.

“This hypothetical possibility arises because neither OpenRTB nor the TCF are capable of physically preventing companies using the protocol to unlawfully process personal data. But the law does not require them to,” the IAB claims.

However, the crux of the RTB complaint is that programmatic advertising’s processing of personal data is not adequately secure — and the complainants have GDPR Article 5(1)(f) to point to, which requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss”.

So it will be down to data protection authorities to determine what “appropriate security of personal data” means in this context. And whether behavioral advertising is inherently hostile to data protection law (not forgetting that other forms of non-personal-data-based advertising remain available, e.g. contextual advertising).

Discussing the complaint with TechCrunch late last year, Brave’s Ryan likened the programmatic ad system to dumping truck-loads of briefcases in the middle of a busy railway station in “the full knowledge that… business partners will all scramble around and try and grab them” — arguing that such a dysfunctional and systematic breaching of people’s data is lurking at the core of the online ad industry.

The solution Ryan and the other complainants are advocating for is not pulling the plug on the online ad industry entirely — but rather an update to the RTB spec to strip out personal data so that it respects Internet users’ rights. Ads can still be targeted contextually and successfully without Internet users having to be surveilled 24/7 online, is the claim.
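In bid request terms, the fix the complainants propose amounts to dropping the user-identifying and device-identifying fields and keeping only page context. Continuing the earlier illustrative sketch (our invented values again, not the complainants’ spec text):

```python
# Illustrative contextual-only bid request: page context stays, people data goes.
contextual_bid_request = {
    "id": "example-auction-id",
    "site": {
        "page": "https://example.com/health/depression-support",
        "cat": ["IAB7"],   # IAB content category for the page (Health & Fitness)
    },
    # no "device" identifiers, no "geo" coordinates, no "user" object
}
```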

They also argue that this would lead to a much better situation for quality online publishers because it would make it harder for their high value audiences to be arbitraged and commodified by privacy-hostile tracking technologies which — as it stands — trail Internet users everywhere they go. Albeit they freely concede that purveyors of low quality clickbait might fare less well.

*Disclosure: TechCrunch is owned by Verizon Media Group, aka Oath/AOL. We also don’t consider ourselves to be purveyors of low quality clickbait.

Global smartphone growth stalled in Q4, up just 1.2% for the full year: Gartner

Gartner’s smartphone marketshare data for the just-gone holiday quarter highlights the challenge for device makers going into the world’s biggest mobile trade show, which kicks off in Barcelona next week: The analyst’s data shows global smartphone sales stalled in Q4 2018, with growth of just 0.1 per cent over 2017’s holiday quarter, and 408.4 million units shipped.

tl;dr: high end handset buyers decided not to bother upgrading their shiny slabs of touch-sensitive glass.

Gartner says Apple recorded its worst quarterly decline (11.8 per cent) since Q1 2016, though the iPhone maker retained its second place position with 15.8 per cent marketshare behind market leader Samsung (17.3 per cent). Last month the company warned investors to expect reduced revenue for its fiscal Q1 — and went on to report iPhone sales down 15 per cent year over year.

The South Korean mobile maker also lost share year over year (declining around 5 per cent), with Gartner noting that high end devices such as the Galaxy S9, S9+ and Note9 struggled to drive growth, even as Chinese rivals ate into its mid-tier share.

Huawei was one of the Android rivals causing a headache for Samsung. It bucked the declining share trend of major vendors to close the gap on Apple from its third placed slot — selling more than 60 million smartphones in the holiday quarter and expanding its share from 10.8 per cent in Q4 2017 to 14.8 per cent.

Gartner has dubbed 2018 “the year of Huawei”, saying it achieved the top growth of the top five global smartphone vendors and grew throughout the year.

This growth was not just in Huawei “strongholds” of China and Europe but also in Asia/Pacific, Latin America and the Middle East, via continued investment in those regions, the analyst noted. While its expanded mid-tier Honor series helped the company exploit growth opportunities in the second half of the year “especially in emerging markets”.

By contrast Apple’s double-digit decline made it the worst performer of the holiday quarter among the top five global smartphone vendors, with Gartner saying iPhone demand weakened in most regions, except North America and mature Asia/Pacific.

It said iPhone sales declined most in Greater China, where it found Apple’s market share dropped to 8.8 percent in Q4 (down from 14.6 percent in the corresponding quarter of 2017). For 2018 as a whole iPhone sales were down 2.7 percent, to just over 209 million units, it added.

“Apple has to deal not only with buyers delaying upgrades as they wait for more innovative smartphones. It also continues to face compelling high-price and midprice smartphone alternatives from Chinese vendors. Both these challenges limit Apple’s unit sales growth prospects,” said Gartner’s Anshul Gupta, senior research director, in a statement.

“Demand for entry-level and midprice smartphones remained strong across markets, but demand for high-end smartphones continued to slow in the fourth quarter of 2018. Slowing incremental innovation at the high end, coupled with price increases, deterred replacement decisions for high-end smartphones,” he added.

Further down the smartphone leaderboard, Chinese OEM Oppo grew its global smartphone market share in Q4 to bump Chinese upstart Xiaomi and bag fourth place — taking 7.7 per cent vs Xiaomi’s 6.8 per cent for the holiday quarter.

The latter had a generally flat Q4, with just a slight decline in units shipped, according to Gartner’s data — underlining Xiaomi’s motivations for teasing a dual folding smartphone.

Because, well, with eye-catching innovation stalled among the usual suspects (who’re nonetheless raising high end handset prices), there’s at least an opportunity for buccaneering underdogs to smash through, grab attention and poach bored consumers.

Or that’s the theory. Consumer interest in ‘foldables’ very much remains to be tested.

In 2018 as a whole, the analyst says global sales of smartphones to end users grew by 1.2 percent year over year, with 1.6 billion units shipped.

The worst declines of the year were in North America, mature Asia/Pacific and Greater China (6.8 percent, 3.4 percent and 3.0 percent, respectively), it added.

“In mature markets, demand for smartphones largely relies on the appeal of flagship smartphones from the top three brands — Samsung, Apple and Huawei — and two of them recorded declines in 2018,” noted Gupta.

Overall, smartphone market leader Samsung took 19.0 percent marketshare in 2018, down from 20.9 per cent in 2017; second placed Apple took 13.4 per cent (down from 14.0 per cent in 2017); third placed Huawei took 13.0 per cent (up from 9.8 per cent the year before); while Xiaomi, in fourth, took a 7.9 per cent share (up from 5.8 per cent); and Oppo came in fifth with 7.6 per cent (up from 7.3 per cent).

Twitter names first international markets to get checks on political advertisers

Twitter has announced it’s expanding checks on political advertisers outside the U.S. to also cover Australia, India and all the member states of the European Union.

This means anyone wanting to run political ads on its platform in those regions will first need to go through its certification process to prove their identity and certify a local location via a verification letter process.

Enforcement of the policies will kick in across the three regions on March 11, Twitter said today in a blog post. “Political advertisers must apply now for certification and go through every step of the process,” it warns.

The company’s ad guidelines, which were updated last year, are intended to make it harder for foreign entities to target elections by adding a requirement that political advertisers self-identify and certify they’re locally based.

A Twitter spokeswoman told us that advertiser identity requirements include providing a copy of a national ID, while candidates and political parties specifically must provide an official copy of their registration with a national election authority.

The company’s blog post does not explain why it selected the three international regions it has named for its first expansion of political ad checks outside the U.S. But they do all have elections coming up in the next few months.

Elections to the EU parliament take place in May, while India’s general elections are expected in April and May. Australia is also due to hold a federal election by May 2019.

Twitter has been working on ad transparency since 2017, announcing the launch of a self-styled Advertising Transparency Center back in fall that year, following political scrutiny of the role of social media platforms in spreading Kremlin-backed disinformation during the 2016 US presidential election. It went on to launch the center in June 2018.

It also announced updated guidelines for political advertisers in May 2018, which came into effect last summer, ahead of the U.S. midterms.

The ad transparency hub lets anyone (not just Twitter users) see all ads running on its platform, including the content/creative; how long ads have been running; and any ads specifically targeted at them if they are a user. Ads can also be reported to Twitter as inappropriate via the Center.

Political/electioneering ads get a special section that also includes information on who’s paying for the ad, how much they’ve spent, impressions per tweet and demographic targeting.
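Expressed as a data record, the disclosure for each political ad amounts to something like the following. This is an illustrative shape of our own, not Twitter’s actual Ads Transparency Center schema.

```python
# Illustrative political-ad transparency record (our sketch, not Twitter's schema).
political_ad_record = {
    "tweet_id": "example-tweet-id",     # the promoted tweet
    "paid_for_by": "Example Party",     # the certified entity funding the ad
    "spend_usd": 12_500.00,             # how much they've spent
    "impressions": 1_840_000,           # impressions for this tweet
    "targeting": {                      # demographic targeting used
        "ages": ["25-34", "35-44"],
        "locations": ["London", "Manchester"],
    },
}
```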

Though initially the political transparency layer only covered U.S. ads. More than half a year on, Twitter is now preparing to expand the same system of checks to its first international regions.

In regions where it has implemented the checks, organizations buying political ads on its platform are also required to comply with a stricter set of rules for how they present their profiles, enforcing a consistent look with how they present themselves elsewhere online — to stop political advertisers passing themselves off as something they’re not.

These consistency rules will apply to those wanting to run political ads in Europe, India and Australia from March. Twitter will also require political advertisers in these regions to include a link to a website with valid contact info in their Twitter bio.

Political advertisers with Twitter handles not related to their certified entity must also include a disclaimer in their bio stating the handle is “owned by” the certified entity.

The company’s move to expand political ad checks outside the U.S. is certainly welcome, but it does highlight how piecemeal such policies remain: many more international regions with upcoming elections still lack such checks, or even a timeline for getting them.

Including countries with very fragile democracies where political disinformation could be a hugely potent weapon.

Indonesia, which is a major market for Twitter, is due to hold a general election in April, for instance. The Philippines is also due to hold a general election in May. While Thailand has an election next month.

We asked Twitter whether it has any plans to roll out political ad checks in these three markets ahead of their key votes but the company declined to make a statement on why it had focused on the EU, Australia and India first.

A spokeswoman did tell us that it will be expanding the policy and enforcement globally in the future, though she would not provide a timeline for any further international expansion. 

YouTube under fire for recommending videos of kids with inappropriate comments

More than a year on from a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women towards clips of scantily clad minors engaged in body-contorting gymnastics, taking an ice bath or doing an ice lolly sucking “challenge”.

A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.

He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole”, accusing the company of facilitating and monetizing the sexual exploitation of children.

We were easily able to replicate the YouTube algorithm’s behavior that Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video called “sweet sixteen pool party”.

Clicking on that led YouTube’s side-bar to serve up multiple videos of prepubescent girls in its ‘up next’ section, where the algorithm tees up related content to encourage users to keep clicking.

Videos we got recommended in this side-bar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines”, or licking popsicles or ice lollies.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.

We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.

Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

Back in November 2017 several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.

Earlier the same month YouTube was also criticized over low quality content targeting kids as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.

Some of the videos of young girls that YouTube recommended we watch had already had comments disabled — which suggests its AI had previously identified a large number of inappropriate comments being shared (on account of its policy of switching off comments on clips containing kids when comments are deemed “inappropriate”) — yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul”.

Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.

We were unable to verify those findings in our brief tests.

We asked YouTube why its algorithms skew towards recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.

The company sent us the following statement in response to our questions:

Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.

A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.

Although the spokesman emphasized that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Though of course the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report accounts found making inappropriate comments about kids to law enforcement.

In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is around 400 hours per minute, he added.
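Taken at face value, those two figures illustrate the scale problem. Here’s a rough calculation of ours, which charitably assumes reviewers do nothing but screen new uploads:

```python
# Rough scale arithmetic from the figures YouTube provided (our calculation).
upload_hours_per_minute = 400
human_reviewers = 10_000

hours_uploaded_per_day = upload_hours_per_minute * 60 * 24  # 576,000 hours/day
hours_per_reviewer_per_day = hours_uploaded_per_day / human_reviewers
print(hours_per_reviewer_per_day)  # 57.6 hours of new video per reviewer, per day
```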

There is still very clearly a massive asymmetry around content moderation on user generated content platforms, with AI poorly suited to plug the gap given ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned vs the scale of the task.

Another key point which YouTube failed to mention is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own), and content safety issues that need to carefully consider the substance of the content and the context it’s been consumed in.

It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers towards extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.

The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content has also been repeatedly raised as a concern — including on YouTube.

And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways”, citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.

“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the US.

It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.

“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.

Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for Internet companies to face clear legal liabilities and even a legal duty of care towards users vis-a-vis the content they distribute and monetize.

For example UK regulators have made legislating on Internet and social media safety a policy priority — with the government due to publish a White Paper setting out its plans for regulating platforms this winter.

UK parliament calls for antitrust, data abuse probe of Facebook

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage; and examined what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users to build voter profiles to try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’, when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report.

Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data got misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is aimed at social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins, MP and chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this an example of “a profound failure” of internal governance, branding it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

Apple is selling the iPhone 7 and iPhone 8 in Germany again

Two older iPhone models are back on sale in Apple stores in Germany — but only with Qualcomm chips inside.

The iPhone maker was forced to pull the iPhone 7 and iPhone 8 models from shelves in its online shop and physical stores in the country last month, after chipmaker Qualcomm posted security bonds to enforce a December court injunction it secured via patent litigation.

Apple told Reuters it had “no choice” but to stop using some Intel chips for handsets to be sold in Germany. “Qualcomm is attempting to use injunctions against our products to try to get Apple to succumb to their extortionist demands,” it said in a statement provided to the news agency.

Apple and Qualcomm have been embroiled in an increasingly bitter global legal battle around patents and licensing terms for several years.

The litigation follows Cupertino’s move away from using only Qualcomm’s chips in iPhones after, in 2016, Apple began sourcing modem chips from rival Intel — dropping Qualcomm chips entirely for last year’s iPhone models. Though still using some Qualcomm chips for older iPhone models, as it will now for iPhone 7 and iPhone 8 units headed to Germany.

For these handsets Apple is swapping out Intel modems that contain chips from Qorvo which are subject to the local patent litigation injunction. (The litigation relates to a patented smartphone power management technology.) 

Hence Apple’s Germany webstore is once again listing the two older iPhone models for sale…

Newer iPhones containing Intel chips remain on sale in Germany because they do not contain the same components subject to the patent injunction.

“Intel’s modem products are not involved in this lawsuit and are not subject to this or any other injunction,” Intel’s general counsel, Steven Rodgers, said in a statement to Reuters.

While Apple’s decision to restock its shelves with Qualcomm-only iPhone 7s and 8s represents a momentary victory for Qualcomm, a separate German court tossed another of its patent suits against Apple last month — dismissing it as groundless. (Qualcomm said it would appeal.)

The chipmaker has also been pursuing patent litigation against Apple in China, and in December Apple appealed a preliminary injunction banning the import and sales of old iPhone models in the country.

At the same time, Qualcomm and Apple are both awaiting the result of an antitrust trial brought against Qualcomm’s licensing terms in the U.S.

Two years ago the FTC filed charges against Qualcomm, accusing the chipmaker of operating a monopoly and forcing exclusivity from Apple while charging “excessive” licensing fees for standards-essential patents.

The case was heard last month and is pending a verdict or settlement.

GoEuro rebrands as Omio to take its travel aggregator business global

European multimodal travel booking platform GoEuro has announced a change of name and destination: Its new ambition is to go global, scaling beyond its regional grounding to tackle the challenge of intercity travel internationally — hence needing a more expansive brand name.

The name it’s chosen is Omio, pronounced with the stress on the ‘me’ sound in the middle of the word.

GoEuro unveiled a new brand identity late last year — which it now says was preparing the ground for this full rebranding.

So why Omio? CEO and founder Naren Shaam tells TechCrunch the new name was chosen to be memorable, lighthearted and neutral. A word that travels inoffensively across languages was also clearly essential.

“It took a while — probably eight months — to do the search on the name,” he says. “The hard thing about the name is a few criteria we had. One was that it had to be short, easy to remember, and four letter names are just non-existent now.

“It had to be lighthearted because travel inherently comes with a lot of stress to consumers… Every time you book travel it’s a lot of anxiety and then relief after you book it etc. So we want to change that behavior to customers; saying we will take care of your journey.”

The multimodal travel startup, which was founded back in 2012, also says it’s happy to have been able to retain a ghost of its old brand — thanks to the double ‘o’ in both names — which it intends to suggestively stand in for the beginning and end of a journey.

In Europe the travel aggregator tool that’s been known since launch as GoEuro — and soon, within a matter of weeks, Omio, everywhere it operates — has some 27 million monthly users tapping into the convenience of a platform that knits together train travel, bus trips, flights and most recently ferries to offer the most comprehensive coverage available of longer distance travel options in the region.

Europe is heavily networked for transport, with multiple intercity travel options to choose from. But it is also massively fragmented across a huge mix of providers (and languages) making it challenging for travellers to navigate, compare and book across so many potential options.

Taming this complexity via a multimodal search and comparison tool that now also integrates booking for most ground-based travel options (and some flights) on one platform has been GoEuro’s mission to date. And now it’s Omio’s to tackle globally.

“Global transport is not on a single product. What we bring is way more than just air, in terms of all ground transportation,” says Shaam. “So for me the problem of how do I get from Kyoto to Tokyo, or Rio to Sao Paulo. Or somewhere in Southeast Asia in Thailand is still a global problem. And it’s not yet solved. And so for us it’s the right time to evolve the brand… It’s definitely time to step out and say we want to build a global brand. We want to be that transport product across the world where we can serve all transport globally.”

While GoEuro is in some senses a quintessentially European business — Shaam says he “couldn’t have imagined” building a multimodal transport platform out of the US, for instance, where travel is so dominated by airlines and cars — he suggests that sets the business up to tackle similar complexity elsewhere.

The hard graft of negotiating partnerships and nailing technical integrations with multiple transport providers, large and tiny, also isn’t the sort of tech business prone to fast-following platform clones. So Omio suggests competition at a global scale will most likely be piecemeal, from multiple regional players.

“When I look beyond Europe the problem that I experienced in Europe in 2010 [which inspired me to set up GoEuro] is definitely a problem I experience still globally,” he says. “So when we can figure out how to bring 100,000 remote train and bus stations plugged into a uniform, normalized product and then give a single-click mobile ticket that works everywhere why not actually solve this problem globally?”

That translates into having “the engineering and the product and the means” to scale what GoEuro has done for travel in Europe internationally, moving to other continents with their own blend and mix of transport options and challenges.

Shaam notes that Omio employs more than 200 engineers within a company that has a staff of 300 — emphasizing also that the partnerships plus all the engineering that sits behind the aggregator’s front end take a lot of resource to maintain.

“I agree it is such a European startup. And it has served us well to get 27M monthly users traveling across Europe. Last year alone we served something like eight million unique routes. So the density of routes that we have is great. We already have global users; we have users from 100+ countries,” he says, adding: “If you look at Europe, European companies are starting to go on the global stage more and more now.

“You can see Spotify being one of the largest global tech companies coming out of Europe. You’ve seen some in the fintech space. Industries where there’s heavy fragmentation in Europe allow us to build global products because Europe is a great product market.”

GoEuro — now Omio — founder and CEO, Naren Shaam

On the international expansion horizon, Omio says it’s considering expanding into South America, Asia and the U.S., although Shaam says no decisions have yet been taken as to which regions and markets it might move into first.

He also readily accepts the goal of building a global travel aggregator is a long term mission, with the partnerships, engineering and legacy technology integrations that will have to underpin the expansion all requiring time (and money) to work through.

There’s also no suggestion that Omio intends to offer a more lightweight transport proposition as a strategy to elbow its way into new markets.

“If we go into the U.S. the goal is not to just offer another airline product,” he says. “There’s enough websites out there that do exactly that. So we will offer something different. And our competition will also be regional companies that offer something similar in each market.”

In a year’s time, Shaam says he hopes to have further deepened the platform’s coverage and usage in Europe — noting there are more transport dots to connect in markets including Portugal, Ireland, Norway, Sweden, plus parts of Eastern Europe (as well as “very heavily fragmented” bus providers in Spain and Italy).

By then he says he also wants to have “a clear answer to what are the two next big continents we want to expand into and have people that are ready to do that”.

So connecting the dots of intercity travel is very evidently a far slower-paced business than heavily VC-backed innercity transport plays — which have attracted multiple billions in funding in very short order, thanks to fast usage velocity and revenue growth, vs the comparatively modest ~$300M GoEuro has raised.

Nonetheless Shaam is convinced the intercity opportunity is still “a big market”. Perhaps not as massive as micromobility, ride-hailing and so on but still big and relatively under-invested, as he sees it.

So how will GoEuro as Omio approach scaling a travel business that is, necessarily, so very grounded in fixed and non-uniform transport infrastructure? He suggests the business will be able to draw on what is already years of experience integrating with transport providers of various types and sizes to support the new global push.

It’s developed what he describes as an “a la carte” menu of products for different-sized travel providers — arguing this established menu of tools will help it scale into new markets in fresh geographies, even while conceding there are other aspects of the business that will not be so easily replicable.

“Over time we built a lot of tooling that adapts to the different types of suppliers. So, for example, if you’re a large state-owned operator… that has very different systems built for decades basically vs a tiny bus company that runs from Naples to Positano that nobody even knows the name of or no technology it stands on we have different products that we offer to each of them.

“We have all the tooling built out so it’s basically ‘plug and play’ for us to do. So this thing doesn’t change. That’s portable.”

What will be new for Omio is international product market fit, with Shaam saying, for example, that it won’t necessarily be able to rely on the same sort of network effects it sees in Europe that help drive usage.

He also notes mobile penetration rates will differ — again requiring a different approach to serving customer needs in new regions such as Latin America.

“It’s not quick,” he concedes. “That’s why we’d rather launch now because I can’t tell you that in three months we’ll have had four more continents covered, right. This is a long term play but we’ve raised enough capital to make sure we’re here for that long term journey.”

“We have a name that people know and we can build technology,” he adds, expanding on what Omio can bring to the table as it tries to sell its platform to travel providers everywhere. “We’ve worked with 800+ suppliers. So from a commercial standpoint, people know who we are and how much scale we can bring in terms of their fixed cost businesses — so we can sell a lot of tickets for all of them. We can bring international tourists from a global audience. And we can really fill up seats. So people know that you put your supply on our product and we instantly scale because the existing demand is just so large.”

The Berlin-based startup closed a $150M funding round last fall so it’s not short of immediate resources to support the new hires it’ll be looking to add to start building out its global roadmap.

Shaam also notes it brought in more Asian capital with its last round, which he says he hopes will help “with this globalization capital”. Most of the investors it added then are also geared towards longer term returns vs traditional VC, he adds.

Omio is not currently in the process of raising another funding round, according to Shaam, though he confirms it does plan to raise more in future as it works towards the global vision of a single platform to help travellers move all over the world.

“The amount of capital that’s gone into intercity transport is tiny compared to innercity transport,” he notes. “That means that if you’re still going after a global problem that we want to solve that means that we need to raise capital at some point in the future. For now we’re just very comfortable with what we have but it doesn’t mean that we’ll stop.”

One potential future market Omio is likely to approach only very cautiously is China.

A b2c partnership with local travel booking platform Qunar, which GoEuro inked back in 2017 to link Chinese consumers with European travel opportunities, means Omio has a commercial reason to be careful about any moves into that market.

The complexity and challenge of going into China as an outsider is of course another major reason to go slow.

“I want to say very carefully that China is a market we need a lot more time to understand before we go into, as I think there’s enough lessons learned from all the tech companies from the West,” says Shaam readily. “It’s not going to be a rushed decision. So in that case the partnership we have with Qunar — I don’t see any changes in the near term because going into China is a big step for us. And it’s not an easy decision anyway.”

Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still try to engineer the outcome it wants from users, but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external sources), in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an adtech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights — nor, therefore, in terms of surveillance power.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer sound far-fetched. Regulators are routinely asked whether it’s time — as the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags, called to attend awkward grillings by hard-grafting committees, or taken to vicious task verbally at the highest profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

German antitrust office limits Facebook’s data-gathering

A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.

The investigation of Facebook data-gathering practices began in March 2016.

The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.

The decision does not yet have legal force, though, and Facebook has said it’s appealing.

In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt says consent must be voluntary, so cannot be made a precondition of using Facebook’s service.

The company must therefore “adapt its terms of service and data processing accordingly”, it warns.

“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.

“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.

“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”

Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.

“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.

“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added. 

Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.

One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.

Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)

The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent, such collection “must be substantially restricted”. It suggests a number of different criteria are feasible — such as restrictions on the amount of data; the purpose of use; the type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limitations on data storage periods.

Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.

While there’s lots to concern Facebook in this decision, it isn’t all bad for the company — or, rather, it could have been worse.

The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”

Although it also does not close the door on further scrutiny of that dynamic, either under data protection law (as indeed there is a current challenge to so-called ‘forced consent‘ under Europe’s GDPR) or under competition law.

“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.

It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.

On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.

In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)

Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitutes a market share of more than 95 per cent (daily active users) and more than 80 per cent (monthly active users).

It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.

Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.

The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to fix on the GDPR — with the company both seeking to claim it’s in compliance with the pan-EU data-protection framework (although its business faces multiple complaints under GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.

So, as ever, Facebook is underlining that its regulator of choice is the Irish Data Protection Commission.

“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.

“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”

The final plank of Facebook’s rebuttal focuses on pushing the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter point being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company now claiming that it needs to pool user data across services to identify abusive behavior online and disable accounts linked to terrorism, child exploitation and election interference.

So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems many of which have scaled to become major societal issues in recent years, at least in part as a consequence of the size and scale of Facebook’s social empire, as arguments for defending the size and operational sprawl of its business. Go figure.

Fabula AI is using social spread to spot ‘fake news’

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned-upon badge “fake news” in its PR. But it says it intends this fuzzy umbrella to refer to both disinformation and misinformation — meaning maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogeneous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
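For a rough sense of what learning on network-structured data can look like in practice, here’s a minimal toy sketch — emphatically not Fabula’s patented method, and with every name and number invented — that aggregates per-user features over a synthetic retweet graph and scores the resulting cascade:

```python
# Minimal illustrative sketch (not Fabula's method): score a news cascade
# from graph structure plus user features, rather than article text.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_feats = 100, 8
X = rng.normal(size=(n_users, n_feats))                    # per-user features (e.g. account age, follower count)
A = (rng.random((n_users, n_users)) < 0.05).astype(float)  # toy who-retweeted-whom adjacency matrix
np.fill_diagonal(A, 1.0)                                   # self-loops so each node keeps its own features

# One graph-convolution-style layer: average each user's neighbourhood features.
deg = A.sum(axis=1, keepdims=True)
H = np.tanh((A @ X) / deg)                                 # aggregated node embeddings

# Pool node embeddings into a single cascade embedding, then score it.
cascade = H.mean(axis=0)
w, b = rng.normal(size=n_feats), 0.0                       # in practice, learned from labelled cascades
score = 1 / (1 + np.exp(-(cascade @ w + b)))               # pseudo "truth-risk" probability
print(f"fake-news score: {score:.2f}")
```

A real system would learn its weights from labelled cascades and stack several such layers; the point is simply that the inputs are graph structure and user characteristics, not the words in the story.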

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far only been tested internally on sub-sets of Twitter data. And the claims it’s making for its prototype model remain to be commercially tested, with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of the year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
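For readers unfamiliar with the metric: ROC AUC doesn’t measure raw accuracy at a single cut-off but how reliably a model’s scores rank fakes above genuine items across all possible thresholds (1.0 is perfect ranking; 0.5 is chance). A toy illustration, with invented labels and scores:

```python
# ROC AUC on made-up data: it rewards ranking fakes above genuine items.
from sklearn.metrics import roc_auc_score

y_true  = [1, 1, 1, 0, 0, 0]               # 1 = fake, 0 = genuine
y_score = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]   # a model's fake-news scores
print(roc_auc_score(y_true, y_score))       # ~0.89: 8 of 9 fake/genuine pairs ranked correctly
```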

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.
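That temporal check is straightforward to picture. Here’s a hedged sketch of the idea — the story records and the commented-out train()/evaluate() helpers are hypothetical stand-ins, not Fabula’s actual pipeline:

```python
# Temporal validation sketch: train on older stories, test on newer ones,
# to check whether the model "ages well". All records here are invented.
from datetime import datetime

stories = [
    {"id": 1, "seeded": datetime(2017, 3, 1),  "label": 1},
    {"id": 2, "seeded": datetime(2017, 9, 15), "label": 0},
    {"id": 3, "seeded": datetime(2018, 2, 20), "label": 1},
    {"id": 4, "seeded": datetime(2018, 8, 5),  "label": 0},
]

cutoff = datetime(2018, 1, 1)
train_set = [s for s in stories if s["seeded"] < cutoff]   # historical data
test_set  = [s for s in stories if s["seeded"] >= cutoff]  # held-out later period

# model = train(train_set)           # hypothetical training call
# auc = evaluate(model, test_set)    # a big drop here would signal model ageing
print(len(train_set), "train /", len(test_set), "test")
```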

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much about the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model: while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users uploading what could be trillions of pieces of content daily — even a seven per cent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather he suggests it could be used in conjunction with other approaches, such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says, it could do away with the need for independent third party fact-checking organizations altogether, because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up, if only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)
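To make those use-cases concrete, here’s an illustrative sketch — with entirely invented thresholds, not anything Fabula or any platform has published — of how a platform might map a truth-risk score to moderation actions:

```python
# Illustrative only: mapping a fake-news score in [0, 1] to the use-cases
# described above. Thresholds are invented for the sake of the example.
def moderation_action(score: float) -> str:
    """Return a moderation action for content given its truth-risk score."""
    if score >= 0.95:
        return "block"       # fully automated filter: only at very high confidence
    if score >= 0.80:
        return "downweight"  # credibility ranking: demote dubious stories in feeds
    if score >= 0.60:
        return "flag"        # intermediate screening: route to human reviewers
    return "allow"

for s in (0.99, 0.85, 0.65, 0.30):
    print(s, "->", moderation_action(s))
```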

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesise that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with [high probability] that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t comment on any discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles, bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests such bubbles do exist — albeit not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests, so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it useable by multiple people from different businesses,” says Bronstein. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”