Zuckerberg ditches annual challenges, but needs cynics to fix 2030

Mark Zuckerberg won’t be spending 2020 focused on wearing ties, learning Mandarin, or just fixing Facebook. “Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look like in 2030,” he wrote today on Facebook. As you might have guessed, though, Zuckerberg’s vision for an improved planet involves a lot more of Facebook’s family of apps.

His biggest proclamations in today’s note include:

  • AR – Phones will remain the primary computing platform for most of the decade, but augmented reality could get devices out from between us so we can be present together — Facebook is building AR glasses
  • VR – Better virtual reality technology could address the housing crisis by letting people work from anywhere — Facebook is building Oculus
  • Privacy – The internet has created a global community where people find it hard to establish themselves as unique, so smaller online groups could make people feel special again – Facebook is building more private groups and messaging options
  • Regulation – The big questions facing technology are too thorny for private companies to address by themselves, and governments must step in around elections, content moderation, data portability, and privacy — Facebook is trying to self-regulate on these fronts and others to deter overly onerous lawmaking


These are all reasonable predictions and suggestions. However, Zuckerberg’s post does little to address how the broadening of Facebook’s services in the 2010s also contributed to a lot of the problems he presents.

  • Isolation – Constant passive feed scrolling on Facebook and Instagram has created a way to seem like you’re being social without having true back-and-forth interaction with friends
  • Gentrification – Facebook’s shuttle-riding employees have driven up rents in cities around the world, especially in the Bay Area
  • Envy – Facebook’s algorithms can make anyone without a glamorous, Instagram-worthy life look less important, while hackers can steal accounts and its moderation systems can accidentally suspend profiles with little recourse for most users
  • Negligence – The growth-first mentality led Facebook’s policies and safety work to lag behind its impact, creating the kind of democracy, content, anti-competition, and privacy questions it’s now asking the government to answer for it

Noticeably absent from Zuckerberg’s post are explicit mentions of some of Facebook’s more controversial products and initiatives. He writes about “decentralizing opportunity” by giving small businesses commerce tools, but never mentions cryptocurrency, blockchain, or Libra directly. Instead he seems to suggest that Instagram storefronts, Messenger customer support, and WhatsApp remittances might be sufficient. He also largely leaves out Portal, Facebook’s smart screen that could help distant families stay closer, but that some see as a surveillance and data collection tool.

I’m glad Zuckerberg is taking his role as a public figure and the steward of one of humanity’s fundamental utilities more seriously. His willingness to even think about some of these long-term issues instead of just quarterly profits is important. Optimism is necessary to create what doesn’t exist.

Still, if Zuckerberg wants 2030 to look better for the world, and for the world to look more kindly on Facebook, he may need to hire more skeptics and cynics who see a dystopic future instead. Their foresight on where societal problems could arise from Facebook’s products could help temper Zuckerberg’s team of idealists to create a company that balances the potential of the future with the risks to the present.


Facebook won’t ban political ads, prefers to keep screwing democracy

It’s 2020 — a key election year in the US — and Facebook is doubling down on its policy of letting people pay it to fuck around with democracy.

Despite trenchant criticism — including from US lawmakers accusing Facebook’s CEO to his face of damaging American democracy — the company is digging in, announcing as much today by reiterating its defence of continuing to accept money to run microtargeted political ads.

Instead of banning political ads, Facebook is trumpeting a few tweaks to the information it lets users see about political ads — claiming it’s boosting “transparency” and “controls” while leaving its users vulnerable to default settings that offer neither.

Political ads running on Facebook can be targeted to individuals’ preferences as a result of the company’s pervasive tracking and profiling of Internet users. And ethical concerns about microtargeting led the UK’s data protection watchdog to call in 2018 for a pause on the use of digital ad tools like Facebook by political campaigns — warning of grave risks to democracy.

Facebook isn’t for pausing political microtargeting, though, even as various elements of its data-gathering activities are subject to privacy and consent complaints, regulatory scrutiny and legal challenges in Europe under regional data protection legislation.

Instead, the company made it clear last fall that it won’t fact-check political ads, nor block political messages that violate its speech policies — thereby giving politicians carte blanche to run hateful lies, if they so choose.

Facebook’s algorithms also demonstrably select for maximum eyeball engagement, making it simply the ‘smart choice’ for the modern digitally campaigning politician to run outrageous BS on Facebook — as longtime Facebook exec Andrew Bosworth recently pointed out in an internal posting that leaked in full to the NYT.

Facebook founder Mark Zuckerberg’s defence of his social network’s political ads policy boils down to repeatedly claiming ‘it’s all free speech man’ (we paraphrase).

This is an entirely nuance-free argument that comedian Sacha Baron Cohen expertly demolished last year, pointing out that: “Under this twisted logic if Facebook were around in the 1930s it would have allowed Hitler to post 30-second ads on his solution to the ‘Jewish problem.’”

Facebook responded to the take-down with a denial that hate speech exists on its platform since it has a policy against it — per its typical crisis PR playbook. And it’s more of the same selectively self-serving arguments being dispensed by Facebook today.

In a blog post attributed to its director of product management, Rob Leathern, Facebook expends more than 1,000 words on why it’s still not banning political ads (a ban would be bad for advertisers wanting to reach “key audiences”, is the non-specific claim). The post makes a diversionary call for regulators to set ad standards, thereby passing the buck on ‘democratic accountability’ to lawmakers (whose electability might very well depend on how many Facebook ads they run…). And it spins cosmetic, made-for-PR tweaks to Facebook’s ad settings — and to what’s displayed in an ad archive most Facebook users will never have heard of — as “expanded transparency” and “more control”.

In fact these tweaks do nothing to reform the fundamental problem of damaging defaults.

The onus remains on Facebook users to do the leg work on understanding what its platform is pushing at their eyeballs and why.

Meanwhile, the ‘extra’ info now being drip-fed into the Ad Library remains highly fuzzy. (“We are adding ranges for Potential Reach, which is the estimated target audience size for each political, electoral or social issue ad so you can see how many people an advertiser wanted to reach with every ad,” as Facebook writes of one tweak.)

The new controls similarly require users to delve into complex settings menus in order to avail themselves of inherently incremental limits — such as an option that will let people opt into seeing “fewer” political and social issue ads. (Fewer is naturally relative, ergo the scale of the reduction remains entirely within Facebook’s control — so it’s more meaningless ‘control theatre’ from the lord of dark pattern design. Why can’t people switch off political and issue ads entirely?)

Another incremental setting lets users “stop seeing ads based on an advertiser’s Custom Audience from a list”.

But just imagine trying to explain WTF that means to your parents or grandparents — let alone an average Internet user actually being able to track down the ‘control’ and exercise any meaningful agency over the political junk ads they’re being exposed to on Facebook.

It is, to quote Baron Cohen, “bullshit”.

Nor are outsiders the only ones calling out Zuckerberg on his BS and “twisted logic”: A number of Facebook’s own employees warned in an open letter last year that allowing politicians to lie in Facebook ads essentially weaponizes the platform.

They also argued that the platform’s advanced targeting and behavioral tracking tools make it “hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech” — accusing the company’s leadership of making disingenuous arguments in defence of a toxic, anti-democratic policy. 

Nothing in what Facebook has announced today resets the anti-democratic asymmetry inherent in the platform’s relationship to its users.

Facebook users — and democratic societies — remain, by default, preyed upon by self-interested political interests thanks to Facebook’s policies which are dressed up in a self-interested misappropriation of ‘free speech’ as a cloak for its unfettered exploitation of individual attention as fuel for a propaganda-as-service business.

Yet other policy positions are available.

Twitter announced a total ban on political ads last year — and while the move doesn’t resolve wider disinformation issues attached to its platform, the decision to bar political ads has been widely lauded as a positive, standard-setting example.

Google also followed suit by announcing a ban on “demonstrably false claims” in political ads. It also put limits on the targeting terms that can be used for political advertising buys that appear in search, on display ads and on YouTube.

Still, Facebook prefers to exploit “the absence of regulation”, as its blog post puts it, rather than do the right thing: it keeps sticking two fingers up at democratic accountability — because not applying limits on behavioral advertising best serves its business interests. Screw democracy.

“We have based [our policies] on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public,” Facebook writes, ignoring the fact that some of its own staff already pointed out the sketchy hypocrisy of trying to claim that complex ad targeting tools and techniques are open to public scrutiny.

Will online privacy make a comeback in 2020?

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.

The debate now is largely around how to regulate platforms, not whether it needs to happen.

The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health: a level of protection afforded to each of us helps inoculate democratic societies from manipulation by vested and vicious interests.

The fact that human rights are being systematically abused at population-scale because of the pervasive profiling of Internet users — a surveillance business that’s dominated in the West by tech giants Facebook and Google, and the adtech and data broker industry which works to feed them — was the subject of an Amnesty International report in November 2019 that urges legislators to take a human rights-based approach to setting rules for Internet companies.

“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.

Democracy disrupted

The dystopian outgrowth of surveillance capitalism was certainly in awful evidence in 2019, with elections around the world attacked at cheap scale by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.

Platform algorithms are also still encouraging Internet eyeballs towards polarized and extremist views by feeding a radicalized, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. So what tweaks there have been still look like fiddling round the edges of an existential problem.

Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but whose algorithms often seem to deliberately choose to amplify — the technology itself being complicit in whipping up violence against minorities. It’s social division as a profit-turning service.

The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.

From the Trump tweet-bomb we now see full-blown digital disops underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the 2019 winter General Election, which featured doctored videos seeded to social media and keyword targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.

Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.

No wonder, then, that repeat calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears. The political parties all have their hands in the voter data cookie-jar. Yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.

So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.

Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.

Adtech under attack

Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.

But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

Alternative online ad models (contextual targeting) are proven and profitable — just ask search engine DuckDuckGo. And the ad industry gospel that only behavioral targeting will do now has academic critics, who suggest it offers far less uplift than claimed, even as — in Europe — scores of data protection complaints underline the high individual cost of maintaining the status quo.
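
To make the contrast concrete, here’s a minimal illustrative sketch (in Python, with invented ads and data, not any real ad platform’s API) of the two models: contextual targeting matches an ad to the page being read, while behavioral targeting matches it to a profile compiled by tracking the user.

```python
# Illustrative sketch only: invented ads and data, not any real ad platform's API.

ADS = [
    {"id": "ad-1", "keywords": {"mortgage", "housing"}, "segments": {"recently_moved"}},
    {"id": "ad-2", "keywords": {"running", "sneakers"}, "segments": {"fitness_enthusiast"}},
]

def contextual_pick(page_keywords):
    """Contextual targeting: match ads to the content of the page being read.
    Requires no personal data about the reader."""
    return [ad["id"] for ad in ADS if ad["keywords"] & page_keywords]

def behavioral_pick(profile_segments):
    """Behavioral targeting: match ads to a profile built by tracking the user
    across sites; the model privacy regulators are scrutinizing."""
    return [ad["id"] for ad in ADS if ad["segments"] & profile_segments]

# A reader on an article about home loans:
print(contextual_pick({"housing", "loans"}))    # ['ad-1'], from page context alone
# The same reader, as seen through a tracking profile:
print(behavioral_pick({"fitness_enthusiast"}))  # ['ad-2'], from surveillance-derived data
```

The point of the contrast is that the first function needs nothing about you beyond the page you are currently looking at, which is why a contextual business can turn a profit without any tracking apparatus.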

Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).

Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.

This year, it might be too much to hope that regulators will get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.

Or, tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.

European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.

Among the technologies under investigation in the region is real-time bidding: a system that powers a large chunk of programmatic digital advertising.

The complaint is that it breaches the bloc’s General Data Protection Regulation (GDPR) because it is inherently insecure to broadcast granular personal data to the scores of entities involved in the bidding chain.
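
To see why critics call the broadcast ‘inherently insecure’, here is a simplified sketch of the kind of data a single bid request can carry, loosely modeled on the industry’s OpenRTB format; all field values below are hypothetical.

```python
import json

# Loosely modeled on an OpenRTB-style bid request; every value here is hypothetical.
# In real-time bidding an object like this can be broadcast to dozens or hundreds
# of companies in the milliseconds before a web page finishes loading.
bid_request = {
    "id": "req-8839",
    "site": {"page": "https://example.com/articles/health-anxiety"},  # what you are reading
    "device": {
        "ip": "203.0.113.7",                  # network location
        "geo": {"lat": 51.50, "lon": -0.12},  # approximate physical location
        "ua": "Mozilla/5.0 (example)",        # browser/device fingerprint material
    },
    "user": {
        "id": "a1b2c3-exchange-uid",          # pseudonymous but persistent identifier
        "data": [{"segment": ["interested_in_diabetes_treatment"]}],  # inferred traits
    },
}

# Every entity that receives the broadcast can log it; the GDPR complaints argue
# there is no meaningful control over what happens to the data once it is sent.
print(json.dumps(bid_request, indent=2))
```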

A recent event held by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though critics say it does not go far enough. Nothing short of removing personal data entirely will do in their view, which sums to ads that are contextually (not micro)targeted.

Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.

As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.

In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.

Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third party cookie dead — asking what’s next?

Alternatives and workarounds are already springing up, with more to come (such as stuffing more data into first party cookies). But the notion of tracking by background default is under attack, if not quite yet coming unstuck.

Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.

Year of the GDPR banhammer?

2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.

Facebook bans deceptive deepfakes and some misleadingly modified media

Facebook wants to be the arbiter of truth after all. At least when it comes to intentionally misleading deepfakes and heavily manipulated and/or synthesized media content, such as AI-generated photorealistic human faces that look like real people but aren’t.

In a policy update announced late yesterday, the social network’s VP of global policy management, Monika Bickert, writes that it will take a stricter line on manipulated media content from here on in — removing content that’s been edited or synthesized “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”.

However, edits for quality, or cuts and splices to videos that simply curtail or change the order of words, are not covered by the ban.

Which means that disingenuous doctoring — such as this example from the recent UK General Election (where campaign staff for one political party edited a video of a politician from a rival party who was being asked a question about Brexit to make it look like he was lost for words when in fact he wasn’t) — will go entirely untouched by the new ‘tougher’ policy. Ergo there’s little to trouble Internet-savvy political ‘truth’ spinners here. The disingenuous digital campaigning can go on.

Instead of grappling with that sort of subtle political fakery, Facebook is focusing on quick PR wins — around the most obviously inauthentic stuff where it won’t risk accusations of partisan bias if it pulls bogus content.

Hence the new policy bans deepfake content that involves the use of AI technologies to “merge, replace or superimpose content onto a video, making it appear to be authentic” — which looks as if it will capture the crudest stuff, such as revenge deepfake porn which superimposes a real person’s face onto an adult performer’s body (albeit nudity is already banned on Facebook’s platform).

It’s not a blanket ban on deepfakes either, though — with some big carve outs for “parody or satire”.

So it’s a bit of an open question whether this deepfake video of Mark Zuckerberg, which went viral last summer — seemingly showing the Facebook founder speaking like a megalomaniac — would stay up or not under the new policy. The video’s creators, a pair of artists, described the work as satire so such stuff should survive the ban. (Facebook did also leave it up at the time.)

But, in future, deepfake creators are likely to further push the line to see what they can get away with under the new policy.

The social network’s controversial policy of letting politicians lie in ads also means it could, technically, still give pure political deepfakes a pass — i.e. if a political advertiser was paying it to run purely bogus content as an ad. Though it would be a pretty bold politician to try that.

More likely there’s more mileage for political campaigns and opinion influencers to keep on with more subtle manipulations. Such as the doctored video of House speaker Nancy Pelosi that went viral on Facebook last year, which had slowed down audio that made her sound drunk or ill. The Washington Post suggests that video — while clearly potentially misleading — still wouldn’t qualify to be taken down under Facebook’s new ‘tougher’ manipulated media policy.

Bickert’s blog post stipulates that manipulated content which doesn’t meet Facebook’s new standard for removal may still be reviewed by the independent third party fact-checkers Facebook relies upon for the lion’s share of ‘truth sifting’ on its platform — and who may still rate such content as ‘false’ or ‘partly false’. But she emphasizes it will continue to allow this type of bogus content to circulate (while potentially reducing its distribution), claiming such labelled fakes provide helpful context.

So Facebook’s updated position on manipulated media sums to ‘no to malicious deepfakes but spindoctors please carry on’.

“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” Bickert writes, claiming: “This approach is critical to our strategy and one we heard specifically from our conversations with experts.

“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”

Last month Facebook announced it had unearthed a network of more than 900 fake accounts that had been spreading pro-Trump messaging — some of which had used false profile photos generated by AI.

The dystopian development provides another motivation for the tech giant to ban ‘pure’ AI fakes, given the technology risks supercharging its fake accounts problem. (And, well, that could be bad for business.)

“Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior,” suggests Bickert, arguing that: “Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts.”

While still relatively nascent as a technology, deepfakes have shown themselves to be catnip to the media which loves the spectacle they create. As a result, the tech has landed unusually quickly on legislators’ radars as a disinformation risk — California implemented a ban on political deepfakes around elections this fall, for example — so Facebook is likely hoping to score some quick and easy political points by moving in step with legislators even as it applies its own version of a ban.

Bickert’s blog post also fishes for further points, noting Facebook’s involvement in a Deep Fake Detection Challenge which was announced last fall — “to produce more research and open source tools to detect deepfakes”.

The post also says Facebook has been working with news agency Reuters to offer free online training courses to help journalists identify manipulated visuals.

“As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact,” she adds.

Libra’s critics are missing the forest for the trees

It would be an understatement to say the last few months have been rocky for Libra, Facebook’s proposed stablecoin.

Since its announcement in June, eBay, Mastercard and other members of the cryptocurrency’s elite consortium have jumped ship (many due to direct pressure from legislators), a congressional hearing on Libra turned into an evisceration of Facebook’s data and privacy practices, Federal Reserve Governor Lael Brainard assailed the project’s lack of controls and the Chinese government announced its own competitive digital currency.

Critics, though well-intentioned, are missing the forest for the trees.

In spite of Libra’s well-cataloged risks and unanswered questions, there is a massive opportunity in plain sight for the global financial system; it would be a tragedy to let that opportunity be destroyed on the basis of Facebook’s reputation or Libra’s haphazard go-to-market. 

Governments should heed the lesson of the U.S.-Soviet space race of the 1970s and use the idea behind Libra, if not the project itself, in “coopetition” to build a better, more inclusive global financial architecture. 

A few key points to begin: first, Facebook is probably not the right actor to spearhead this initiative.

Mark Zuckerberg promises that Facebook will only be one board member in a governing consortium and that the project will comply with U.S. regulatory demands and privacy standards. But just as the company reneged on its promise not to integrate the encrypted WhatsApp into Facebook’s platform, it’s easy to picture Facebook pushing through standards that benefit itself at consumer expense. While Facebook would be a great platform to distribute Libra, its track record should make constituents uneasy about giving it any control.

Second, global payment systems are outdated and slow, and many businesses have been set up to extract rents from that fact. This burden disproportionately falls on the shoulders of poor consumers. People living paycheck-to-paycheck are forced into high-interest loans to smooth their cash flow due to slow settlement speeds. Immigrants sending money home pay up to 17 percent to move money across borders (as much as $34 on a $200 remittance), costs that take a sizable bite out of some countries’ GDPs. In a ubiquitous currency regime, however, foreign exchange fees would vanish entirely.

Twitter’s political ads ban is a distraction from the real problem with platforms

Sometimes it feels as if Internet platforms are turning everything upside down, from politics to publishing, culture to commerce, and of course swapping truth for lies.

This week’s bizarro reversal was the vista of Twitter CEO Jack Dorsey, a tech CEO famed for being entirely behind the moral curve of understanding what his product is platforming (i.e. nazis), delivering an impromptu ‘tweet storm’ on political speech ethics.

Actually he was schooling Facebook’s Mark Zuckerberg — another techbro renowned for his special disconnect with the real world, despite running a massive free propaganda empire with vast power to influence other people’s lives — in taking a stand for the good of democracy and society.

So not exactly a full reverse then.

In short, Twitter has said it will no longer accept political ads, period.

Whereas Facebook recently announced it will no longer fact-check political ads. Aka: Lies are fine, so long as you’re paying Facebook to spread them.

You could argue there’s a certain surface clarity to Facebook’s position — i.e. it sums to ‘when it comes to politics we just won’t have any ethics’. Presumably with the hoped for sequitur being ‘so you can’t accuse us of bias’.

Though that’s actually a non sequitur; by not applying any ethical standards around political campaigns Facebook is providing succour to those with the least ethics and the basest standards. So its position does actually favor the ‘truth-lite’, to put it politely. (You can decide which political side that might advantage.)

Twitter’s position also has surface clarity: A total ban! Political and issue ads both go into the delete bin. But as my colleague Devin Coldewey quickly pointed out, it’s likely to get rather more fuzzy around the edges as the company comes to define exactly what is (and isn’t) a ‘political ad’ — and what its few “exceptions” might be.

Indeed, Twitter’s definitions are already raising eyebrows. For example it has apparently decided climate change is a ‘political issue’ — and will therefore be banning ads about science. While, presumably, remaining open to taking money from big oil to promote their climate-polluting brands… So yeah, messy.

There will clearly be attempts to stress test and circumvent the lines Twitter is setting. The policy may sound simple but it involves all sorts of judgements that expose the company’s political calculations and leave it open to charges of bias and/or moral failure.

Still, setting rules is — or should be — the easy and adult thing to do when it comes to content standards; enforcement is the real sweating toil for these platforms.

Which is also, presumably, why Facebook has decided to experiment with not having any rules around political ads — in the (forlorn) hope of avoiding being forced into the role of political speech policeman.

If that’s the strategy it’s already looking spectacularly dumb and self-defeating. The company has just set itself up for an ongoing PR nightmare where it is indeed forced to police intentionally policy-provoking ads from its own back-foot — having put itself in the position of ‘wilfully corrupt cop’. Slow hand claps all round.

Albeit, it can at least console itself it’s monetizing its own ethics bypass.

Twitter’s opposing policy on political ads also isn’t immune from criticism, as we’ve noted.

Indeed, it’s already facing accusations that a total ban is biased against new candidates who start with a lower public profile. Even if the energy of that argument would be better spent advocating for wide-ranging reform of campaign financing, including hard limits on election spending. If you really want to reboot politics by levelling the playing field between candidates that’s how to do it.

Also essential: Regulations capable of enforcing controls on dark money to protect democracies from being bought and cooked from the inside via the invisible seeding of propaganda that misappropriates the reach and data of Internet platforms to pass off lies as populist truth, cloaking them in the shape-shifting blur of microtargeted hyperconnectivity.

Sketchy interests buying cheap influence from data-rich billionaires, free from accountability or democratic scrutiny, is our new warped ‘normal’. But it shouldn’t be.

There’s another issue being papered over here, too. Twitter banning political ads is really a distracting detail when you consider that it’s not a major platform for running political ads anyway.

During the 2018 US midterms the category generated less than $3M for the company.

And, secondly, anything posted organically as a tweet to Twitter can act as a political call to arms.

It’s these outrageous ‘organic’ tweets where the real political action is on Twitter’s platform. (Hi Trump.)

Including inauthentically ‘organic’ tweets which aren’t a person’s genuinely held opinion but a planted (and often paid for) fake. Call it ‘going native’ advertising; faux tweets intended to pass off lies as truth, inflated and amplified by bot armies (fake accounts) operating in plain sight (often gaming Twitter’s trending topics) as a parallel ‘unofficial’ advertising infrastructure whose mission is to generate attention-grabbing pantomimes of public opinion to try and sway the real thing.

In short: Propaganda.

Who needs to pay to run a political ad on Twitter when you can get a bot network to do the boosterism for you?

Let’s not forget Dorsey is also the tech CEO famed for not applying his platform’s rules of conduct to the tweets of certain high profile politicians. (Er, Trump again, basically.)

So by saying Twitter is banning political ads yet continuing to apply a double standard to world leaders’ tweets — most obviously by allowing the US president to bully, abuse and threaten at will in order to further his populist rightwing political agenda — the company is trying to have its cake and eat it.

More recently Twitter has evolved its policy slightly, saying it will apply some limits on the reach of rule-breaking world leader tweets. But it continues to run two sets of rules.

To Dorsey’s credit he does foreground this tension in his tweet storm — where he writes [emphasis ours]:

Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.

These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings. Trying to fix both means fixing neither well, and harms our credibility.

This is good stuff from Dorsey. Surprisingly good, given his and Twitter’s long years of free speech fundamentalism — when the company gained a reputation for being wilfully blind and deaf to the fact that for free expression to flourish online it needs a protective shield of civic limits. Otherwise ‘freedom to amplify any awful thing’ becomes a speech chiller that disproportionately harms minorities.

Aka freedom of speech is not the same as freedom of reach, as Dorsey now notes.

Even with Twitter making some disappointing choices in how it defines political issues, for the purposes of this ad ban, the contrast with Facebook and Zuckerberg — still twisting and spinning in the same hot air; trying to justify incoherent platform policies that sell out democracy for a binary ideology which his own company can’t even stick to — looks stark.

The timing of Dorsey’s tweet-storm, during Facebook’s earnings call, was clearly intended to make that point.

“Zuckerberg wants us to believe that one must be for or against free speech with no nuance, complexity or cultural specificity, despite running a company that’s drowning in complexity,” writes cultural historian, Siva Vaidhyanathan, confronting Facebook’s moral vacuousness in a recent Guardian article responding to another Zuckerberg ‘manifesto’ on free speech. “He wants our discussions to be as abstract and idealistic as possible. He wants us not to look too closely at Facebook itself.”

Facebook’s position on speech only stands up in the abstract, just as its ad-targeting business can only run free of moral outrage in unregulated obscurity, where the baked-in biases — algorithmic and user generated — are safely hidden from view so people can’t join the dots on how they’re being damaged.

We shouldn’t be surprised at how quickly the scandal-prone company is now being called on its ideological BS. We have a savvier political class as a result of the platform-scale disinformation and global data scandals of the past few years. People who have seen and experienced what Facebook’s policies translate to in real world practice. Like compromised elections and community violence.

With lawmakers turning their attention to platform giants, there is a genuine possibility of meaningful regulation coming down the pipe for the antisocial media business.

Not least because Facebook’s self regulation has always been another piece of crisis PR, designed to preempt and head off the real thing. It’s a cynical attempt to maintain its profitable grip on our attention. The company has never been committed to making the kind of systemic change necessary to fix its toxic speech issues.

The problem is, ultimately, toxicity and division drives engagement, captures attention and makes Facebook a lot of money.

Twitter can claim a little distance from that business model not only because it’s considerably less successful than Facebook at generating money by monopolizing attention, but also because it provides greater leeway for its users to build and follow their own interest networks, free from algorithmic interference (though it does do algorithms too).

It has also been on a self-proclaimed reform path for some time, most recently saying it wants to be responsible for promoting “conversational health” on its platform. No one would say it’s there yet, but perhaps we’re finally getting to see some action. Even if banning political ads is mostly a quick PR win for Twitter.

The really hard work continues, though. Namely rooting out bot armies before their malicious propaganda can pollute the public sphere. Twitter hasn’t said it’s close to being able to fix that.

Facebook is also still failing to stem the tide of ‘organic’ politicized fake content on its platform. Fakes that profit at our democratic expense by spreading hate and lies.

For this type of content Facebook offers no searchable archive (as it now does for paid ads which it defines as political) — thereby providing ongoing cover for dark money to do its manipulative hack-job on democracy by free-posting via groups and pages.

Plus, even where Facebook claims to be transparently raising the curtain on paid political influence it’s abjectly failing to do so. Its political ads API is still being blasted by research academics as not fit for purpose. Even as the company policy cranks up pressure on external fact-checkers by giving politicians the green light to run ads that lie.

It has also been accused of applying a biased standard when it comes to weeding out “coordinated inauthentic behavior”, as Facebook euphemistically calls the networks of fake accounts set up to amplify and juice reach — when the propaganda in question is coming from within the US and leans toward the political right.

 

Facebook denies this, claiming for example that a network of pages on its platform reported to be exclusively boosting content from US conservative news site The Daily Wire are “real pages run by real people in the U.S., and they don’t violate our policies”. (It didn’t offer us any detail on how it reached that conclusion.)

A company spokesperson also said: “We’re working on more transparency so that in the future people have more information about Pages like these on Facebook.”

So it’s still promising ‘more transparency’ — rather than actually being transparent. And it remains the sole judge interpreting and applying policies that aren’t at all legally binding; sham regulation, then.

Moreover, while Facebook has at times issued bans on toxic content from certain domestic hate speech preachers, such as banning some of InfoWars’ Alex Jones’ pages, it’s failed to stop the self-same hate respawning via new pages, or indeed the same hateful individuals maintaining other accounts on different Facebook-owned social properties. Inconsistency of policy enforcement is in Facebook’s DNA.

Set against all that Dorsey’s decision to take a stance against political ads looks positively statesmanlike.

It is also, at a fundamental level, obviously just the right thing to do. Buying a greater share of attention than you’ve earned politically is regressive because it favors those with the deepest pockets. Though of course Twitter’s stance won’t fix the rest of a broken system where money continues to pour in and pollute politics.

We also don’t know the fine-grained detail of how Twitter’s algorithms amplify political speech when it’s packaged in organic tweet form — whether its algorithmic levers are more likely to be triggered into boosting political tweets that inflame and incite, or those that inform and seek to unite.

As I say, the whole of Twitter’s platform can sum to political advertising. And the company does apply algorithms to surface or suppress tweets based on its proprietary (and commercial) determination of ‘engagement quality’. So its entire business is involved in shaping how visible (or otherwise) tweeted speech is.

That very obviously includes plenty of political speech. Not for nothing is Twitter Trump’s platform of choice.

Nothing about its ban on political ads changes all that. So, as ever, where social media self-regulation is concerned, what we are being given is — at best — just fiddling around the edges.

A cynical eye might say Twitter’s ban is intended to distract attention from more structural problems baked into these attention-harvesting Internet platforms.

The toxic political discourse problem that democracies and societies around the world are being forced to grapple with is a consequence of how Internet platforms distribute content and shape public discussion. So what’s really key is how these companies use our information to program what we each get to see.

The fact that we’re talking about Twitter’s political ad ban risks distracting from the “root problems” Dorsey referenced in passing. (Though he would probably offer a different definition of their cause. In the tweet storm he just talks about “working hard to stop people from gaming our systems to spread misleading info”.)

Facebook’s public diagnosis of the same problem is always extremely basic and blame-shifting. It just says some humans are bad, ergo some bad stuff will be platformed by Facebook — reflecting the issue back at humanity.

Here’s an alternative take: The core issue underpinning all these problems around how Internet platforms spread toxic propaganda is the underlying fact of taking people’s data in order to manipulate our attention.

This business of microtargeting — or behavioral advertising, as it’s also called — turns everyone into a target for some piece of propaganda or other.

It’s a practice that sucks regardless of whether it’s being done to you by Donald Trump or by Disney. Because it’s asymmetrical. It’s disproportionate. It’s exploitative. And it’s inherently anti-democratic.

It also incentivizes a pervasive, industrial-scale stockpiling of personal data that’s naturally hostile to privacy, terrible for security and gobbles huge amounts of energy and computing resource. So it sucks from an environmental perspective too.

And it does it all for the very basest of purposes. This is platforms selling you out so others can sell you stuff. Be it soap or political opinions.

Zuckerberg’s label of choice for this process — “relevant ads” — is just the slick lie told by a billionaire to grease the pipes that suck out the data required to sell our attention down the river.

Microtargeting is both awful for the individual (meaning creepy ads; loss of privacy; risk of bias and data misuse) and terrible for society for all the same reasons — as well as creating grave, society-level risks, such as election interference and the undermining of hard-won democratic institutions by hostile forces.

Individual privacy is a common good, akin to public health. Inoculation — against disease or indeed disinformation — helps protect the whole of us from damaging contagion.

To be clear, microtargeting is also not only something that happens when platforms are paid money to target ads. Platforms are doing this all the time; applying a weaponizing layer to customize everything they handle.

It’s how they distribute and program the masses of information users freely upload, creating maximally engaging order out of the daily human chaos they’ve tasked themselves with turning into a compelling and personalized narrative — without paying a massive army of human editors to do the job.

Facebook’s News Feed relies on the same data-driven principles as behavioral ads do to grab and hold attention. As does Twitter’s ‘Top Tweets’ algorithmically ranked view.
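
The general principle is easy to sketch. The toy example below uses invented weights and signals (not Facebook’s or Twitter’s actual ranking systems, which are proprietary) to show how ordering a feed purely by predicted engagement naturally floats provocative content to the top.

```python
# Toy sketch of engagement-ranked feed ordering. Signals and weights are invented
# for illustration; real platform ranking systems are proprietary and far more complex.

posts = [
    {"text": "Nice sunset from my walk", "pred_clicks": 0.02, "pred_comments": 0.01},
    {"text": "You won't BELIEVE what this politician said", "pred_clicks": 0.15, "pred_comments": 0.09},
]

def engagement_score(post):
    # Rank purely by predicted reactions: whatever provokes the strongest
    # response, outrage included, floats to the top.
    return 3.0 * post["pred_clicks"] + 5.0 * post["pred_comments"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post["text"])

# The provocative post ranks first: engagement optimization, not editorial
# judgement, decides what gets seen.
```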

This is programmed attention-manipulation at vast scale, repackaged as a ‘social’ service. One which uses what the platforms learn by spying on Internet users as divisive glue to bind our individual attention, even if it means setting some of us against one another.

That’s why you can publish a Facebook post that mentions a particular political issue and — literally within seconds — attract a violently expressed opposing view from a Facebook ‘friend’ you haven’t spoken to in years. The platform can deliver that content ‘gut punch’ because it has a god-like view of everyone via the prism of their data. Data that powers its algorithms to plug content into “relevant” eyeballs, ranked by their potential to make engagement sparks fly.

It goes without saying that if a real friendship group contained such a game-playing stalker — who had bugged everyone’s phones to snoop and keep tabs on them, and used what they learnt to play friends off against each other — no one would imagine it bringing the group closer together. Yet that’s how Facebook treats its captive eyeballs.

That awkward silence you could hear as certain hard-hitting questions struck Zuckerberg during his most recent turn in the House might just be the penny dropping.

It finally feels as if lawmakers are getting close to an understanding of the real “root problem” embedded in these content-for-data sociotechnical platforms.

Platforms that invite us to gaze into them in order that they can get intimate with us forever — using what they learn from spying to pry further and exploit faster.

So while banning political ads sounds nice it’s just a distraction. What we really need to shatter the black mirror platforms are holding against society, in which they get to view us from all angles while preventing us from seeing what they’re doing, is to bring down a comprehensive privacy screen. No targeting against personal data.

Let them show us content and ads, sure. They can target this stuff contextually, based on a few generic pieces of information. They can even ask us to specify whether we’d like to see ads about housing or consumer packaged goods today. We can negotiate the rules. Everything else — what we do on or off the platform, who we talk to, what we look at, where we go, what we say — must remain strictly off limits.

Zuckerberg defends political ads that will be 0.5% of 2020 revenue

As Jack Dorsey announced that his company Twitter would drop all political ads, Facebook CEO Mark Zuckerberg doubled down on his policy of refusing to fact-check politicians’ ads. “At times of social tension there has often been an urge to pull back on free expression . . . We will be best served over the long term by resisting this urge and defending free expression.”

Still, Zuckerberg failed to delineate between freedom of expression and the freedom of paid amplification of that expression, which inherently favors the rich.

During today’s Q3 2019 earnings call where Facebook beat expectations and grew monthly users 2% to 2.45 billion, Zuckerberg spent his time defending the social network’s lenient political ad policy. You can read his full prepared statement here.

One clear objective was to dispel the idea that Facebook was motivated by greed to keep these ads. “We estimate these ads from politicians will be less than 0.5% of our revenue next year,” Zuckerberg explained. For reference, Facebook earned $66 billion in the 12 months ending Q3 2019, so it might earn around $330 million to $400 million from political ads next year. Unfortunately, it’s unclear whether Zuckerberg meant that 0.5% covers all political ads, or just ads from politicians without counting issue ads and PACs.
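
As a back-of-the-envelope check on that range (the assumption that the 0.5% applies to total revenue, and the 2020 growth rate used for the high end, are ours, not Facebook’s guidance):

```python
# Back-of-the-envelope math behind the ~$330M-$400M estimate. The 2020 growth
# assumption is ours for illustration, not Facebook's guidance.
trailing_revenue = 66e9          # revenue for the 12 months ending Q3 2019
politician_ad_share = 0.005      # "less than 0.5%", per Zuckerberg

low = trailing_revenue * politician_ad_share
print(f"${low / 1e6:.0f}M")      # $330M on flat revenue

# If revenue grows roughly 20% in 2020 (hypothetical), the same share implies:
high = trailing_revenue * 1.2 * politician_ad_share
print(f"${high / 1e6:.0f}M")     # $396M, i.e. roughly $400M
```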

Zuckerberg also noted that Facebook removed 50 million hours per day of viral video watching from its platform to support well-being — a change that hurt ad viewership and the company’s share price — arguing that this shows Facebook clearly doesn’t act solely in pursuit of profit.


Facebook’s CEO also tried to bat down the theory that Facebook is allowing misinformation in political ads to cater to conservatives or avoid calls of bias from them. “Some people say that this is just all a cynical political calculation and that we’re acting in a way that we don’t really believe because we’re just trying to appease conservatives,” he said, responding that “frankly, if our goal was that we’re trying to make either side happy then we’re not doing a very good job because I’m pretty sure everyone is frustrated.”

Instead of banning political ads, Zuckerberg voiced support for increasing transparency about how ads look, how much is spent on them, and where they’re run. “I believe that the better approach is to work to increase transparency. Ads on Facebook are already more transparent than anywhere else. We have a political ads archive so anyone can scrutinize every ad that’s run.” 

He mentioned that political ads are run by “Google, YouTube, and most internet platforms”, seeming to stumble for a second as he was likely prepared to cite Twitter too until it announced it would drop all political ads an hour earlier. He omitted that Pinterest and TikTok have also banned political ads.

It doesn’t help that hundreds of Facebook’s own employees have called on their CEO to change the policy. He concluded that no one could accuse Facebook of not deeply thinking through the question and its downstream ramifications. Zuckerberg did leave himself an out if he chooses to change the policy, though. “I’ve considered whether we should not [sell political ads] in the past, and I’ll continue to do so.”

Dorsey had tweeted that “We’ve made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought.” Democratic Representative Alexandria Ocasio-Cortez expressed support for Twitter’s move, while Trump campaign manager Brad Parscale called it “a very dumb decision.”

Twitter’s CEO took some clear swipes at Zuckerberg, countering his common arguments for allowing misinformation in politicians’ ads. “Some might argue our actions today could favor incumbents. But we have witnessed many social movements reach massive scale without any political advertising. I trust this will only grow.” Given President Trump had outspent all Democratic candidates on Facebook ads as of March of this year, it’s clear that deep-pocketed incumbents could benefit from Facebook’s policy.


Trump continues to massively outspend Democratic rivals on Facebook ads. Via NYT

Miming Facebook’s position, Dorsey tweeted “It’s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well…they can say whatever they want!’”

Twitter doesn’t earn much from political ads, citing only $3 million in revenue from the 2018 mid-term elections, or roughly 0.1% of its $3 billion in total 2018 revenue. That means there will be no major windfall for Facebook from Twitter dropping political ads. But now all eyes will be on Facebook and Google/YouTube. If Sundar Pichai and Susan Wojcicki move in line with Dorsey, it could make Zuckerberg even more vulnerable to criticism.

$330 million might not be a big incentive for Facebook or Zuckerberg, but it still sounds like a lot of money to earn from ads that potentially lie to voters. I respect Facebook’s lenient policy when it comes to speech organically posted to users’, organizations’, or politicians’ own accounts. But relying on the candidates, the press, and the public to police speech is dangerously idealistic. We’ve seen how candidates will do anything to win, how the partisan press will ignore the truth to support their team, and how the public isn’t educated or engaged enough to consistently recognize what’s false.

Zuckerberg’s greatest mistakes have come from overestimating humanity. Unfortunately, not everyone wants to bring the world closer together. Without safeguards, Facebook’s tools can help tear it apart. It’s time for Facebook and Zuckerberg to recognize the difference between free expression and paid expression.

Facebook shares rise on strong Q3, users up 2% to 2.45B

Despite ongoing public relations crises, Facebook kept growing in Q3 2019, demonstrating that media backlash does not necessarily equate to poor business performance.

Facebook reached 2.45 billion monthly users, up 1.65%, from 2.41 billion in Q2 2019 when it grew 1.6%, and it now has 1.62 billion daily active users, up 2% from 1.587 billion last quarter when it grew 1.6%. Facebook scored $17.652 billion of revenue, up 29% year-over-year, with $2.12 in earnings per share.

Chart: Facebook daily active users, Q3 2019.

Facebook’s earnings beat expectations compared to Refinitiv’s consensus estimates of $17.37 billion in revenue and $1.91 earnings per share. Facebook’s quarter was mixed compared to Bloomberg’s consensus estimate of $2.28 EPS. Facebook earned $6 billion in profit after racking up only $2.6 billion last quarter due to charges from its FTC settlement.

Facebook shares rose 5.18% in after-hours trading, to $198.01 after earnings were announced, following a day where it closed down 0.56% at $188.25.

Notably, Facebook gained 2 million users in each of its core U.S. & Canada and Europe markets that drive its business, after quarters of shrinkage, no growth or weak growth there in the past two years. Average revenue per user grew healthily across all markets, boding well for Facebook’s ability to monetize the developing world where the bulk of user growth currently comes from.

Facebook says 2.2 billion users access Facebook, Instagram, WhatsApp or Messenger every day, and 2.8 billion use one of the family of apps each month. That’s up from 2.1 billion and 2.7 billion last quarter. Facebook has managed to stay sticky even as it faces increased competition from a revived Snapchat and, more recently, TikTok. However, those rivals might more heavily weigh on Instagram, for which Facebook doesn’t routinely disclose user stats.

Chart: Facebook average revenue per user, Q3 2019.

Zuckerberg defends political ads policy

Facebook’s earnings announcement was somewhat overshadowed by Twitter CEO Jack Dorsey announcing that his company would ban all political ads — something TechCrunch previously recommended social networks do. That move flies in the face of Facebook CEO Mark Zuckerberg’s staunch support for allowing politicians to spread misinformation without fact-checks via Facebook ads. This should put additional pressure on Facebook to rethink its policy.

Zuckerberg doubled down on the policy: “I believe that the better approach is to work to increase transparency. Ads on Facebook are already more transparent than anywhere else,” he said. Attempting to dispel the notion that the policy is driven by greed, he noted Facebook expects political ads to make up “less than 0.5% of our revenue next year.” Because people will disagree and the issue will keep coming up, Zuckerberg admitted it’s going to be “a very tough year.”

Facebook also announced that lead independent board member Susan D. Desmond-Hellmann has resigned to focus on health issues.

Earnings call highlights

Facebook expects revenue deceleration to be pronounced in Q4. But CFO David Wehner provided some hope, saying “we would expect our revenue growth deceleration in 2020 versus the Q4 rate to be much less pronounced.” That led Facebook’s share price to spike from around $191 to around $198.

However, Facebook will maintain its aggressive hiring to moderate content. While the company has touted how artificial intelligence would increasingly help, Zuckerberg said that hiring would continue because “There’s just so much content. We do need a lot of people.”

Regarding Libra’s regulatory pushback, Zuckerberg explained that Facebook is already diversified in commerce should the project not work out, citing WhatsApp Payments, Facebook Marketplace and Instagram shopping.

On antitrust concerns, Zuckerberg reminded analysts that Instagram’s success wasn’t assured when Facebook acquired it, and that it has survived heavy competition thanks to Facebook’s contributions. In a new talking point we’re likely to hear more of, Zuckerberg noted that other competitors have used success in one vertical to push into others, saying “Apple and Google built cameras and private photo sharing and photo management directly into their operating systems.”

Scandals continue, but so does growth

Overall, it was another rough quarter for Facebook’s public perception as it dealt with outages and struggled to get buy-in from regulators for its Libra cryptocurrency project. Former co-founder Chris Hughes (who I’ll be leading a talk with at SXSW) campaigned for the social network to be broken up — a position echoed by Elizabeth Warren and other presidential candidates.

The company did spin up some new revenue sources, including taking a 30% cut of fan patronage subscriptions to content creators. It’s also trying to sell video subscriptions for publishers, and it upped the price of its Workplace collaboration suite. But gains were likely offset as the company continued to rapidly hire to address abusive content on its platform, which saw headcount grow 28% year-over-year, to 43,000. There are still problems with how it treats content moderators, and Facebook has had to repeatedly remove coordinated misinformation campaigns from abroad. Appearing concerned about its waning brand, Facebook moved to add “from Facebook” to the names of Instagram and WhatsApp.

It escaped with just a $5 billion fine as part of its FTC settlement that some consider a slap on the wrist, especially since it won’t have to significantly alter its business model. But the company will have to continue to invest and divert product resources to meet its new privacy, security and transparency requirements. These could slow its response to a growing threat: Chinese tech giant ByteDance’s TikTok.

Facebook is failing to prevent another human rights tragedy playing out on its platform, report warns

A report by campaign group Avaaz examining how Facebook’s platform is being used to spread hate speech in the Assam region of North East India suggests the company is once again failing to prevent its platform from being turned into a weapon to fuel ethnic violence.

Assam has a long-standing Muslim minority population but ethnic minorities in the state look increasingly vulnerable after India’s Hindu nationalist government pushed forward with a National Register of Citizens (NRC), which has resulted in the exclusion from that list of nearly 1.9 million people — mostly Muslims — putting them at risk of statelessness.

In July the United Nations expressed grave concern over the NRC process, saying there’s a risk of arbitrary expulsion and detention, with those excluded being referred to Foreigners’ Tribunals where they have to prove they are not “irregular”.

At the same time, the UN warned of the rise of hate speech in Assam being spread via social media — saying this is contributing to increasing instability and uncertainty for millions in the region. “This process may exacerbate the xenophobic climate while fuelling religious intolerance and discrimination in the country,” it wrote.

There’s an awful sense of déjà vu about these warnings. In March 2018 the UN criticized Facebook for failing to prevent its platform being used to fuel ethnic violence against the Rohingya people in neighboring Myanmar — saying the service had played a “determining role” in that crisis.

Facebook’s response to devastating criticism from the UN looks like wafer-thin crisis PR to paper over the ethical cracks in its ad business, given the same sorts of alarm bells are being sounded again, just over a year later. (If we measure the company by the lofty goals it attached to a director of human rights policy job last year — when Facebook wrote that the responsibilities included “conflict prevention” and “peace-building” — it’s surely been an abject failure.)

Avaaz’s report on hate speech in Assam takes direct aim at Facebook’s platform, saying it’s being used as a conduit for whipping up anti-Muslim hatred.

In the report, entitled Megaphone for Hate: Disinformation and Hate Speech on Facebook During Assam’s Citizenship Count, the group says it analysed 800 Facebook posts and comments relating to Assam and the NRC, using keywords from the immigration discourse in Assamese, assessing them against the three tiers of prohibited hate speech set out in Facebook’s Community Standards.

Avaaz found that at least 26.5% of the posts and comments constituted hate speech. These posts had been shared on Facebook more than 99,650 times — adding up to at least 5.4 million views for violent hate speech targeting religious and ethnic minorities, according to its analysis.

Bengali Muslims are a particular target on Facebook in Assam, per the report, which found comments referring to them as “criminals,” “rapists,” “terrorists,” “pigs,” and “dogs”, among other dehumanizing terms.

Further disturbing comments included calls for people to “poison” daughters and to legalise female foeticide, as well as several posts urging that “Indian” women be protected from “rape-obsessed foreigners”.

Avaaz suggests its findings are just a drop in the ocean of hate speech that it says is drowning Assam via Facebook and other social media. But it accuses Facebook directly of failing to provide adequate human resources to police hate speech spread on its dominant platform.

Commenting in a statement, Alaphia Zoyab, senior campaigner, said: “Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam, many of whom could be made stateless within months. Despite the clear and present danger faced by these people, Facebook is refusing to dedicate the resources required to keep them safe. Through its inaction, Facebook is complicit in the persecution of some of the world’s most vulnerable people.”

Its key complaint is that Facebook relies on AI to detect hate speech that hasn’t been reported to it by human users, dedicating its limited pool of human content moderators to reviewing pre-flagged content rather than proactively seeking out violations.

Facebook founder Mark Zuckerberg has previously said AI has a very long way to go to reliably detect hate speech. Indeed, he’s suggested it may never be able to do that.

In April 2018 he told US lawmakers it might take five to ten years to develop “AI tools that can get into some of the linguistic nuances of different types of content to be more accurate, to be flagging things to our systems”, while admitting: “Today we’re just not there on that.”

That amounts to an admission that in regions such as Assam — where inter-ethnic tensions are being whipped up in a politically charged atmosphere that’s also encouraging violence — Facebook is essentially asleep on the job. The job of enforcing its own ‘Community Standards’ and preventing its platform being weaponized to amplify hate and harass the vulnerable, to be clear.

Avaaz says it flagged directly to Facebook 213 of “the clearest examples” of hate speech it found — including posts from an elected official and pages of a member of an Assamese rebel group banned by the Indian government. The company removed 96 of these posts following its report.

It argues there are similarities between the type of hate speech being directed at ethnic minorities in Assam via Facebook and that which targeted Rohingya people in Myanmar, also on Facebook, while noting that the context is different. It also found hateful content on Facebook targeting Rohingya people in India.

It is calling on Facebook to do more to protect vulnerable minorities in Assam, arguing it should not rely solely on automated tools for detecting hate speech — and should instead apply a “human-led ‘zero tolerance’ policy” against hate speech, starting by beefing up moderators’ expertise in local languages.

It also recommends Facebook launch an early warning system within its Strategic Response team, again based on human content moderation — and do so for all regions where the UN has warned of the rise of hate speech on social media.

“This system should act preventatively to avert human rights crises, not just reactively to respond to offline harm that has already occurred,” it writes.

Other recommendations include that Facebook correct the record on false news and disinformation by notifying and providing corrections from fact-checkers to every user who has seen content deemed false or purposefully misleading, including when the disinformation came from a politician; that it be transparent about all page and post takedowns by publishing its rationale in the Facebook Newsroom, giving the issue of hate speech prominence and publicity proportionate to the size of the problem on Facebook; and that it agree to an independent audit of hate speech and human rights on its platform in India.

“Facebook has signed up to comply with the UN Guiding Principles on Business and Human Rights,” Avaaz notes, “which require it to conduct human rights due diligence such as identifying its impact on vulnerable groups like women, children, linguistic, ethnic and religious minorities and others, particularly when deploying AI tools to identify hate speech, and take steps to subsequently avoid or mitigate such harm.”

We reached out to Facebook with a series of questions about Avaaz’s report and also how it has progressed its approach to policing inter-ethnic hate speech since the Myanmar crisis — including asking for details of the number of people it employs to monitor content in the region.

Facebook did not provide responses to our specific questions. It just said it does have content reviewers who are Assamese and who review content in the language, as well as reviewers who have knowledge of the majority of official languages in India, including Assamese, Hindi, Tamil, Telugu, Kannada, Punjabi, Urdu, Bengali and Marathi.

In 2017 India overtook the US as the country with the largest “potential audience” for Facebook ads, at 241M active users, per figures the company reports to advertisers.

Facebook also sent us this statement, attributed to a spokesperson:

We want Facebook to be a safe place for all people to connect and express themselves, and we seek to protect the rights of minorities and marginalized communities around the world, including in India. We have clear rules against hate speech, which we define as attacks against people on the basis of things like caste, nationality, ethnicity and religion, and which reflect input we received from experts in India. We take this extremely seriously and remove content that violates these policies as soon as we become aware of it. To do this we have invested in dedicated content reviewers, who have local language expertise and an understanding of India’s longstanding historical and social tensions. We’ve also made significant progress in proactively detecting hate speech on our services, which helps us get to potentially harmful content faster.

But these tools aren’t perfect yet, and reports from our community are still extremely important. That’s why we’re so grateful to Avaaz for sharing their findings with us. We have carefully reviewed the content they’ve flagged, and removed everything that violated our policies. We will continue to work to prevent the spread of hate speech on our services, both in India and around the world.

Facebook did not tell us exactly how many people it employs to police content for an Indian state with a population of more than 30 million people.

Globally the company maintains it has around 35,000 people working on trust and safety, less than half of whom (~15,000) are dedicated content reviewers. But with such a tiny content reviewer workforce for a global platform with 2.2BN+ users posting night and day all around the world, there’s no plausible way for it to stay on top of its hate speech problem.

Certainly not in every market it operates in. Which is why Facebook leans so heavily on AI — shrinking the cost to its business but piling content-related risk onto everyone else.

Facebook claims its automated tools for detecting hate speech have got better, saying that in Q1 this year it increased the proactive detection rate for hate speech to 65.4% — up from 58.8% in Q4 2017 and 38% in Q2 2017.

However, it also says it removed just 4 million pieces of hate speech globally in Q1, a figure that sounds incredibly tiny relative to the size of Facebook’s platform and the volume of content generated daily by its millions and millions of active users.

Without tools for independent researchers to query the substance and spread of content on Facebook’s platform it’s simply not possible to know how many pieces of hate speech are going undetected. But — to be clear — this unregulated company still gets to mark its own homework. 

In just one example of how Facebook is able to shrink perception of the volume of problematic content it’s fencing: of the 213 pieces of content related to Assam and the NRC that Avaaz judged to be hate speech and reported to Facebook, the company removed less than half (96).

Yet Facebook also told us it takes down all content that violates its community standards — suggesting it is applying a far more diluted definition of hate speech than Avaaz. That’s unsurprising for a US company whose nascent crisis-PR content review board’s charter includes the phrase “free expression is paramount”. But for a company that also claims to want to prevent conflict and build peace, it’s rather conflicted, to say the least.

As things stand, Facebook’s self-reported hate speech performance metrics are meaningless. It’s impossible for anyone outside the company to quantify or benchmark platform data, because no one except Facebook has the full picture — and it’s not opening its platform to ethical audit. Meanwhile, the impacts of harmful, hateful stuff spread on Facebook continue to bleed out and damage lives around the world.

Facebook staff demand Zuckerberg limit lies in political ads

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods, or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by the New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in a TechCrunch opinion piece on October 13th calling on Facebook to ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, they should be fact checked and/or use an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.

Facebook CEO Mark Zuckerberg testified before the House Financial Services Committee on Wednesday, October 23, 2019, in Washington, D.C. (Photo by Aurora Samperio/NurPhoto via Getty Images)

Over 250 of Facebook’s 35,000 staffers have signed the letter, which declares: “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.

Facebook’s response to the letter was: “We remain committed to not censoring political speech, and will continue exploring additional steps we can take to bring increased transparency to political ads.” But that straw-mans the letter’s request. Employees aren’t asking for politicians to be kicked off Facebook or to have their posts and ads deleted. They’re asking for warning labels and limits on paid reach. That’s not censorship.

Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers, including Representative Alexandria Ocasio-Cortez (D-NY), who left him tongue-tied during congressional testimony when she asked exactly what kinds of misinformation were allowed in ads.

But then on Friday, Facebook blocked an ad designed to test its limits by claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates until it has the resources to scan more. Ads fact-checked as false should receive an interstitial warning blocking their content, rather than just a “false” label. That could be paired with giving political ads a bigger disclaimer that doesn’t make them look more prominent overall, and with allowing targeting only by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could level the playing field, and broad silence periods, especially during voting periods, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which will act as a supreme court for moderation decisions and policies.

Zuckerberg’s core argument for the policy is that over time, history bends toward more speech, not censorship. But that succumbs to the utopian fallacy that technology evenly advantages the honest and the dishonest. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and Facebook’s own employees have put forward, there are progressive approaches to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via the New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.