Libra’s critics are missing the forest for the trees

It would be an understatement to say the last few months have been rocky for Libra, Facebook’s proposed stablecoin.

Since its announcement in June, eBay, Mastercard and other members of the cryptocurrency’s elite consortium have jumped ship (many due to direct pressure from legislators), a congressional hearing on Libra turned into an evisceration of Facebook’s data and privacy practices, Federal Reserve Governor Lael Brainard assailed the project’s lack of controls and the Chinese government announced its own competitive digital currency.

Critics, though well-intentioned, are missing the forest for the trees.

In spite of Libra’s well-cataloged risks and unanswered questions, there is a massive opportunity in plain sight for the global financial system; it would be a tragedy to let that opportunity be destroyed on the basis of Facebook’s reputation or Libra’s haphazard go-to-market. 

Governments should heed the lesson of the U.S.-Soviet space race of the 1970s and use the idea behind Libra, if not the project itself, in “coopetition” to build a better, more inclusive global financial architecture. 

A few key points to begin: first, Facebook is probably not the right actor to spearhead this initiative.

Mark Zuckerberg promises that Facebook will only be one board member in a governing consortium and that the project will comply with U.S. regulatory demands and privacy standards. But just as the company reneged on its promise not to integrate the encrypted WhatsApp into Facebook’s platform, it’s easy to picture Facebook pushing through standards that benefit itself at consumer expense. While Facebook would be a great platform to distribute Libra, its track record should make constituents uneasy about giving it any control.

Second, global payment systems are outdated and slow, and many businesses have been set up to extract rents from that fact. This burden falls disproportionately on the shoulders of poor consumers. People living paycheck-to-paycheck are forced into high-interest loans to smooth their cash flow due to slow settlement speeds. Immigrants sending money home pay up to 17 percent to move money across borders, costs that take a sizable bite out of some countries’ GDPs. In a ubiquitous currency regime, however, foreign exchange fees would vanish entirely.

Twitter’s political ads ban is a distraction from the real problem with platforms

Sometimes it feels as if Internet platforms are turning everything upside down, from politics to publishing, culture to commerce, and of course swapping truth for lies.

This week’s bizarro reversal was the sight of Twitter CEO Jack Dorsey, a tech CEO famed for being entirely behind the moral curve in understanding what his product is platforming (i.e. Nazis), delivering an impromptu ‘tweet storm’ on political speech ethics.

Actually he was schooling Facebook’s Mark Zuckerberg — another techbro renowned for his special disconnect with the real world, despite running a massive free propaganda empire with vast power to influence other people’s lives — in taking a stand for the good of democracy and society.

So not exactly a full reverse then.

In short, Twitter has said it will no longer accept political ads, period.

Whereas Facebook recently announced it will no longer fact-check political ads. Aka: Lies are fine, so long as you’re paying Facebook to spread them.

You could argue there’s a certain surface clarity to Facebook’s position — i.e. it sums to ‘when it comes to politics we just won’t have any ethics’. Presumably with the hoped-for sequitur being ‘so you can’t accuse us of bias’.

Though that’s actually a non sequitur; by not applying any ethical standards around political campaigns Facebook is providing succour to those with the least ethics and the basest standards. So its position does actually favor the ‘truth-lite’, to put it politely. (You can decide which political side that might advantage.)

Twitter’s position also has surface clarity: A total ban! Political and issue ads both into the delete bin. But as my colleague Devin Coldewey quickly pointed out, it’s likely to get rather more fuzzy around the edges as the company comes to define exactly what is (and isn’t) a ‘political ad’ — and what its few “exceptions” might be.

Indeed, Twitter’s definitions are already raising eyebrows. For example it has apparently decided climate change is a ‘political issue’ — and will therefore be banning ads about science. While, presumably, remaining open to taking money from big oil to promote their climate-polluting brands… So yeah, messy.

There will clearly be attempts to stress test and circumvent the lines Twitter is setting. The policy may sound simple but it involves all sorts of judgements that expose the company’s political calculations and leave it open to charges of bias and/or moral failure.

Still, setting rules is — or should be — the easy and adult thing to do when it comes to content standards; enforcement is the real sweating toil for these platforms.

Which is also, presumably, why Facebook has decided to experiment with not having any rules around political ads — in the (forlorn) hope of avoiding being forced into the role of political speech policeman.

If that’s the strategy it’s already looking spectacularly dumb and self-defeating. The company has just set itself up for an ongoing PR nightmare where it is indeed forced to police intentionally policy-provoking ads from its own back-foot — having put itself in the position of ‘wilfully corrupt cop’. Slow hand claps all round.

Albeit, it can at least console itself it’s monetizing its own ethics bypass.

Twitter’s opposing policy on political ads also isn’t immune from criticism, as we’ve noted.

Indeed, it’s already facing accusations that a total ban is biased against new candidates who start with a lower public profile. Even if the energy of that argument would be better spent advocating for wide-ranging reform of campaign financing, including hard limits on election spending. If you really want to reboot politics by levelling the playing field between candidates that’s how to do it.

Also essential: Regulations capable of enforcing controls on dark money to protect democracies from being bought and cooked from the inside via the invisible seeding of propaganda that misappropriates the reach and data of Internet platforms to pass off lies as populist truth, cloaking them in the shape-shifting blur of microtargeted hyperconnectivity.

Sketchy interests buying cheap influence from data-rich billionaires, free from accountability or democratic scrutiny, is our new warped ‘normal’. But it shouldn’t be.

There’s another issue being papered over here, too. Twitter banning political ads is really a distracting detail when you consider that it’s not a major platform for running political ads anyway.

During the 2018 US midterms the category generated less than $3M for the company.

And, secondly, anything posted organically as a tweet to Twitter can act as a political call to arms.

It’s these outrageous ‘organic’ tweets where the real political action is on Twitter’s platform. (Hi Trump.)

Including inauthentically ‘organic’ tweets which aren’t a person’s genuinely held opinion but a planted (and often paid for) fake. Call it ‘going native’ advertising; faux tweets intended to pass off lies as truth, inflated and amplified by bot armies (fake accounts) operating in plain sight (often gaming Twitter’s trending topics) as a parallel ‘unofficial’ advertising infrastructure whose mission is to generate attention-grabbing pantomimes of public opinion to try and sway the real thing.

In short: Propaganda.

Who needs to pay to run a political ad on Twitter when you can get a bot network to do the boosterism for you?

Let’s not forget Dorsey is also the tech CEO famed for not applying his platform’s rules of conduct to the tweets of certain high profile politicians. (Er, Trump again, basically.)

So by saying Twitter is banning political ads yet continuing to apply a double standard to world leaders’ tweets — most obviously by allowing the US president to bully, abuse and threaten at will in order to further his populist rightwing political agenda — the company is trying to have its cake and eat it.

More recently Twitter has evolved its policy slightly, saying it will apply some limits on the reach of rule-breaking world leader tweets. But it continues to run two sets of rules.

To Dorsey’s credit he does foreground this tension in his tweet storm — where he writes [emphasis ours]:

Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.

These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings. Trying to fix both means fixing neither well, and harms our credibility.

This is good stuff from Dorsey. Surprisingly good, given his and Twitter’s long years of free speech fundamentalism — when the company gained a reputation for being wilfully blind and deaf to the fact that for free expression to flourish online it needs a protective shield of civic limits. Otherwise ‘freedom to amplify any awful thing’ becomes a speech chiller that disproportionately harms minorities.

Aka freedom of speech is not the same as freedom of reach, as Dorsey now notes.

Even with Twitter making some disappointing choices in how it defines political issues, for the purposes of this ad ban, the contrast with Facebook and Zuckerberg — still twisting and spinning in the same hot air; trying to justify incoherent platform policies that sell out democracy for a binary ideology which his own company can’t even stick to — looks stark.

The timing of Dorsey’s tweet-storm, during Facebook’s earnings call, was clearly intended to make that point.

“Zuckerberg wants us to believe that one must be for or against free speech with no nuance, complexity or cultural specificity, despite running a company that’s drowning in complexity,” writes cultural historian Siva Vaidhyanathan, confronting Facebook’s moral vacuousness in a recent Guardian article responding to another Zuckerberg ‘manifesto’ on free speech. “He wants our discussions to be as abstract and idealistic as possible. He wants us not to look too closely at Facebook itself.”

Facebook’s position on speech does only stand up in the abstract. Just as its ad-targeting business can only run free of moral outrage in unregulated obscurity, where the baked in biases — algorithmic and user generated — are safely hidden from view so people can’t join the dots on how they’re being damaged.

We shouldn’t be surprised at how quickly the scandal-prone company is now being called on its ideological BS. We have a savvier political class as a result of the platform-scale disinformation and global data scandals of the past few years. People who have seen and experienced what Facebook’s policies translate to in real world practice. Like compromised elections and community violence.

With lawmakers like these turning their attention on platform giants there is a genuine possibility of meaningful regulation coming down the pipe for the antisocial media business.

Not least because Facebook’s self regulation has always been another piece of crisis PR, designed to preempt and head off the real thing. It’s a cynical attempt to maintain its profitable grip on our attention. The company has never been committed to making the kind of systemic change necessary to fix its toxic speech issues.

The problem is, ultimately, toxicity and division drive engagement, capture attention and make Facebook a lot of money.

Twitter can claim a little distance from that business model not only because it’s considerably less successful than Facebook at generating money by monopolizing attention, but also because it provides greater leeway for its users to build and follow their own interest networks, free from algorithmic interference (though it does do algorithms too).

It has also been on a self-proclaimed reform path for some time. Most recently saying it wants to be responsible for promoting “conversational health” on its platform. No one would say it’s there yet but perhaps we’re finally getting to see some action. Even if banning political ads is mostly a quick PR win for Twitter.

The really hard work continues, though. Namely rooting out bot armies before their malicious propaganda can pollute the public sphere. Twitter hasn’t said it’s close to being able to fix that.

Facebook is also still failing to stem the tide of ‘organic’ politicized fake content on its platform. Fakes that profit at our democratic expense by spreading hate and lies.

For this type of content Facebook offers no searchable archive (as it now does for paid ads which it defines as political) — thereby providing ongoing cover for dark money to do its manipulative hack-job on democracy by free-posting via groups and pages.

Plus, even where Facebook claims to be transparently raising the curtain on paid political influence it’s abjectly failing to do so. Its political ads API is still being blasted by research academics as not fit for purpose. Even as the company policy cranks up pressure on external fact-checkers by giving politicians the green light to run ads that lie.

It has also been accused of applying a biased standard when it comes to weeding out “coordinated inauthentic behavior”, as Facebook euphemistically calls the networks of fake accounts set up to amplify and juice reach — when the propaganda in question is coming from within the US and leans toward the political right.

 

Facebook denies this, claiming for example that a network of pages on its platform reported to be exclusively boosting content from US conservative news site The Daily Wire are “real pages run by real people in the U.S., and they don’t violate our policies”. (It didn’t offer us any detail on how it reached that conclusion.)

A company spokesperson also said: “We’re working on more transparency so that in the future people have more information about Pages like these on Facebook.”

So it’s still promising ‘more transparency’ — rather than actually being transparent. And it remains the sole judge interpreting and applying policies that aren’t at all legally binding; so sham regulation then. 

Moreover, while Facebook has at times issued bans on toxic content from certain domestic hate speech preachers, such as banning some of InfoWars’ Alex Jones’ pages, it’s failed to stop the self-same hate respawning via new pages. Or indeed the same hateful individuals maintaining other accounts on different Facebook-owned social properties. Inconsistency of policy enforcement is in Facebook’s DNA.

Set against all that Dorsey’s decision to take a stance against political ads looks positively statesmanlike.

It is also, at a fundamental level, obviously just the right thing to do. Buying a greater share of attention than you’ve earned politically is regressive because it favors those with the deepest pockets. Though of course Twitter’s stance won’t fix the rest of a broken system where money continues to pour in and pollute politics.

We also don’t know the fine-grained detail of how Twitter’s algorithms amplify political speech when it’s packaged in organic tweet form. So whether its algorithmic levers are more likely to be triggered into boosting political tweets that inflame and incite, or those that inform and seek to unite.

As I say, the whole of Twitter’s platform can sum to political advertising. And the company does apply algorithms to surface or suppress tweets based on its proprietary (and commercial) determination of ‘engagement quality’. So its entire business is involved in shaping how visible (or otherwise) tweeted speech is.

That very obviously includes plenty of political speech. Not for nothing is Twitter Trump’s platform of choice.

Nothing about its ban on political ads changes all that. So, as ever, where social media self-regulation is concerned, what we are being given is — at best — just fiddling around the edges.

A cynical eye might say Twitter’s ban is intended to distract attention from more structural problems baked into these attention-harvesting Internet platforms.

The toxic political discourse problem that democracies and societies around the world are being forced to grapple with is a consequence of how Internet platforms distribute content and shape public discussion. So what’s really key is how these companies use our information to program what we each get to see.

The fact that we’re talking about Twitter’s political ad ban risks distracting from the “root problems” Dorsey referenced in passing. (Though he would probably offer a different definition of their cause. In the tweet storm he just talks about “working hard to stop people from gaming our systems to spread misleading info”.)

Facebook’s public diagnosis of the same problem is always extremely basic and blame-shifting. It just says some humans are bad, ergo some bad stuff will be platformed by Facebook — reflecting the issue back at humanity.

Here’s an alternative take: The core issue underpinning all these problems around how Internet platforms spread toxic propaganda is the underlying fact of taking people’s data in order to manipulate our attention.

This business of microtargeting — or behavioral advertising, as it’s also called — turns everyone into a target for some piece of propaganda or other.

It’s a practice that sucks regardless of whether it’s being done to you by Donald Trump or by Disney. Because it’s asymmetrical. It’s disproportionate. It’s exploitative. And it’s inherently anti-democratic.

It also incentivizes a pervasive, industrial-scale stockpiling of personal data that’s naturally hostile to privacy, terrible for security and gobbles huge amounts of energy and computing resource. So it sucks from an environmental perspective too.

And it does it all for the very basest of purposes. This is platforms selling you out so others can sell you stuff. Be it soap or political opinions.

Zuckerberg’s label of choice for this process — “relevant ads” — is just the slick lie told by a billionaire to grease the pipes that suck out the data required to sell our attention down the river.

Microtargeting is both awful for the individual (meaning creepy ads; loss of privacy; risk of bias and data misuse) and terrible for society for all the same reasons — as well as grave, society-level risks, such as election interference and the undermining of hard-won democratic institutions by hostile forces.

Individual privacy is a common good, akin to public health. Inoculation — against disease or indeed disinformation — helps protect the whole of us from damaging contagion.

To be clear, microtargeting is also not only something that happens when platforms are paid money to target ads. Platforms are doing this all the time; applying a weaponizing layer to customize everything they handle.

It’s how they distribute and program the masses of information users freely upload, creating maximally engaging order out of the daily human chaos they’ve tasked themselves with turning into a compelling and personalized narrative — without paying a massive army of human editors to do the job.

Facebook’s News Feed relies on the same data-driven principles as behavioral ads do to grab and hold attention. As does Twitter’s ‘Top Tweets’ algorithmically ranked view.

This is programmed attention-manipulation at vast scale, repackaged as a ‘social’ service. One which uses what the platforms learn by spying on Internet users as divisive glue to bind our individual attention, even if it means setting some of us against one another.

That’s why you can publish a Facebook post that mentions a particular political issue and — literally within seconds — attract a violently expressed opposing view from a Facebook ‘friend’ you haven’t spoken to in years. The platform can deliver that content ‘gut punch’ because it has a god-like view of everyone via the prism of their data. Data that powers its algorithms to plug content into “relevant” eyeballs, ranked by highest potential for engagement sparks to fly.

It goes without saying that if a real friendship group contained such a game-playing stalker — who had bugged everyone’s phones to snoop and keep tabs on them, and used what they learnt to play friends off against each other — no one would imagine it bringing the group closer together. Yet that’s how Facebook treats its captive eyeballs.

That awkward silence you could hear as certain hard-hitting questions struck Zuckerberg during his most recent turn in the House might just be the penny dropping.

It finally feels as if lawmakers are getting close to an understanding of the real “root problem” embedded in these content-for-data sociotechnical platforms.

Platforms that invite us to gaze into them in order that they can get intimate with us forever — using what they learn from spying to pry further and exploit faster.

So while banning political ads sounds nice it’s just a distraction. What we really need to shatter the black mirror platforms are holding against society, in which they get to view us from all angles while preventing us from seeing what they’re doing, is to bring down a comprehensive privacy screen. No targeting against personal data.

Let them show us content and ads, sure. They can target this stuff contextually based on a few generic pieces of information. They can even ask us to specify if we’d like to see ads about housing today, or consumer packaged goods. We can negotiate the rules. Everything else — what we do on or off the platform, who we talk to, what we look at, where we go, what we say — must remain strictly off limits.

Zuckerberg defends political ads that will be 0.5% of 2020 revenue

As Jack Dorsey announced his company Twitter would drop all political ads, Facebook CEO Zuckerberg doubled down on his policy of refusing to fact-check politicians’ ads. “At times of social tension there has often been an urge to pull back on free expression . . . We will be best served over the long term by resisting this urge and defending free expression.”

Still, Zuckerberg failed to delineate between freedom of expression, and freedom of paid amplification of that expression which inherently favors the rich.

During today’s Q3 2019 earnings call where Facebook beat expectations and grew monthly users 2% to 2.45 billion, Zuckerberg spent his time defending the social network’s lenient political ad policy. You can read his full prepared statement here.

One clear objective was to dispel the idea that Facebook was motivated by greed to keep these ads. Zuckerberg explained “We estimate these ads from politicians will be less than 0.5% of our revenue next year.” For reference, Facebook earned $66 billion in the 12 months ending Q3 2019, so Facebook might earn around $330 million to $400 million in political ads next year. Unfortunately, it’s unclear whether that 0.5% figure covers all political advertising or only ads from politicians themselves, excluding issue ads and PACs.
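The back-of-envelope math behind that $330 million to $400 million range can be sketched as follows. The figures come from the article; the ~20% revenue-growth case is an illustrative assumption of mine, not a company forecast:

```python
# Rough estimate of Facebook's 2020 political-ad revenue from the 0.5% figure.
trailing_revenue = 66e9   # trailing 12-month revenue ending Q3 2019, USD
political_share = 0.005   # "less than 0.5% of our revenue next year"

low = trailing_revenue * political_share           # assumes flat revenue
high = trailing_revenue * 1.2 * political_share    # assumes ~20% growth (my assumption)

print(f"~${low / 1e6:.0f}M to ~${high / 1e6:.0f}M")  # ~$330M to ~$396M
```

Rounding the upper bound gives the ~$400 million ceiling quoted above.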

Zuckerberg also argued that Facebook clearly doesn’t act solely in pursuit of profit: after all, it removed 50 million hours per day of viral video watching from its platform to support well-being, a change that hurt ad viewership and the company’s share price.


Facebook’s CEO also tried to bat down the theory that Facebook is allowing misinformation in political ads to cater to conservatives or avoid calls of bias from them. “Some people say that this is just all a cynical political calculation and that we’re acting in a way that we don’t really believe because we’re just trying to appease conservatives,” he said, responding that “frankly, if our goal was that we’re trying to make either side happy then we’re not doing a very good job because I’m pretty sure everyone is frustrated.”

Instead of banning political ads, Zuckerberg voiced support for increasing transparency about how ads look, how much is spent on them, and where they’re run. “I believe that the better approach is to work to increase transparency. Ads on Facebook are already more transparent than anywhere else. We have a political ads archive so anyone can scrutinize every ad that’s run.” 

He mentioned that political ads are run by “Google, YouTube, and most internet platforms”, seeming to stumble for a second as he was likely prepared to cite Twitter too until it announced it would drop all political ads an hour earlier. He omitted that Pinterest and TikTok have also banned political ads.

It doesn’t help that hundreds of Facebook’s own employees have called on their CEO to change the policy. He concluded that no one could accuse Facebook of not deeply thinking through the question and its downstream ramifications. Zuckerberg did leave himself an out if he chooses to change the policy, though. “I’ve considered whether we should not [sell political ads] in the past, and I’ll continue to do so.”

Dorsey had tweeted that “We’ve made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought.” Democrat Representative Alexandria Ocasio-Cortez expressed support for Twitter’s move while Trump campaign manager Brad Parscale called it “a very dumb decision”.

Twitter’s CEO took some clear swipes at Zuckerberg, countering his common arguments for allowing misinformation in politicians’ ads. “Some might argue our actions today could favor incumbents. But we have witnessed many social movements reach massive scale without any political advertising. I trust this will only grow.” Given President Trump had outspent all Democratic candidates on Facebook ads as of March of this year, it’s clear that deep-pocketed incumbents could benefit from Facebook’s policy.

Trump continues to massively outspend Democratic rivals on Facebook ads. Via NYT

Mimicking Facebook’s position, Dorsey tweeted “It‘s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well…they can say whatever they want!”

Twitter doesn’t earn much from political ads, citing only $3 million in revenue from the 2018 mid-term elections, or roughly 0.1% of its $3 billion in total 2018 revenue. That means there will be no major windfall for Facebook from Twitter dropping political ads. But now all eyes will be on Facebook and Google/YouTube. If Sundar Pichai and Susan Wojcicki move in line with Dorsey, it could make Zuckerberg even more vulnerable to criticism.

$330 million might not be a big incentive for Facebook or Zuckerberg, but it still sounds like a lot of money to earn from ads that potentially lie to voters. I respect Facebook’s lenient policy when it comes to speech organically posted to users’, organizations’, or politicians’ own accounts. But relying on the candidates, press, and public to police speech is dangerously idealistic. We’ve seen how candidates will do anything to win, partisan press will ignore the truth to support their team, and the public aren’t educated or engaged enough to consistently understand what’s false.

Zuckerberg’s greatest mistakes have come from overestimating humanity. Unfortunately, not everyone wants to bring the world closer together. Without safeguards, Facebook’s tools can help tear it apart. It’s time for Facebook and Zuckerberg to recognize the difference between free expression and paid expression.

Facebook shares rise on strong Q3, users up 2% to 2.45B

Despite ongoing public relations crises, Facebook kept growing in Q3 2019, demonstrating that media backlash does not necessarily equate to poor business performance.

Facebook reached 2.45 billion monthly users, up 1.65%, from 2.41 billion in Q2 2019 when it grew 1.6%, and it now has 1.62 billion daily active users, up 2% from 1.587 billion last quarter when it grew 1.6%. Facebook scored $17.652 billion of revenue, up 29% year-over-year, with $2.12 in earnings per share.
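Those quarter-over-quarter percentages check out against the raw user counts. A quick sanity-check, using the figures quoted above (the rounding to two decimal places is mine):

```python
# Verify Facebook's quoted Q3 2019 quarter-over-quarter growth rates.
mau_now, mau_prev = 2.45e9, 2.41e9    # monthly active users, Q3 vs Q2 2019
dau_now, dau_prev = 1.62e9, 1.587e9   # daily active users, Q3 vs Q2 2019

mau_growth = (mau_now / mau_prev - 1) * 100
dau_growth = (dau_now / dau_prev - 1) * 100

print(f"MAU growth: {mau_growth:.2f}%")  # MAU growth: 1.66%
print(f"DAU growth: {dau_growth:.2f}%")  # DAU growth: 2.08%
```

The MAU figure lands at ~1.66%, consistent with the ~1.65% cited, and DAU growth rounds to the quoted 2%.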

Facebook Q3 2019 DAU

Facebook’s earnings beat expectations compared to Refinitiv’s consensus estimates of $17.37 billion in revenue and $1.91 earnings per share. Facebook’s quarter was mixed compared to Bloomberg’s consensus estimate of $2.28 EPS. Facebook earned $6 billion in profit after only racking up $2.6 billion last quarter due to its SEC settlement.

Facebook shares rose 5.18% in after-hours trading, to $198.01 after earnings were announced, following a day where it closed down 0.56% at $188.25.

Notably, Facebook gained 2 million users in each of its core U.S. & Canada and Europe markets that drive its business, after quarters of shrinkage, no growth or weak growth there in the past two years. Average revenue per user grew healthily across all markets, boding well for Facebook’s ability to monetize the developing world where the bulk of user growth currently comes from.

Facebook says 2.2 billion users access Facebook, Instagram, WhatsApp or Messenger every day, and 2.8 billion use one of this family of apps each month. That’s up from 2.1 billion and 2.7 billion last quarter. Facebook has managed to stay sticky even as it faces increased competition from a revived Snapchat, and more recently TikTok. However, those rivals might more heavily weigh on Instagram, for which Facebook doesn’t routinely disclose user stats.

Facebook ARPU Q3 2019

Zuckerberg defends political ads policy

Facebook’s earnings announcement was somewhat overshadowed by Twitter CEO Jack Dorsey announcing that Twitter would ban all political ads — something TechCrunch previously recommended social networks do. That move flies in the face of Facebook CEO Mark Zuckerberg’s staunch support for allowing politicians to spread misinformation without fact-checks via Facebook ads. This should put additional pressure on Facebook to rethink its policy.

Zuckerberg doubled down on the policy: “I believe that the better approach is to work to increase transparency. Ads on Facebook are already more transparent than anywhere else,” he said. Attempting to dispel the notion that the policy is driven by greed, he noted Facebook expects political ads to make up “less than 0.5% of our revenue next year.” Because people will disagree and the issue will keep coming up, Zuckerberg admitted it’s going to be “a very tough year.”

Facebook also announced that lead independent board member Susan D. Desmond-Hellmann has resigned to focus on health issues.

Earnings call highlights

Facebook expects revenue deceleration to be pronounced in Q4. But CFO David Wehner provided some hope, saying “we would expect our revenue growth deceleration in 2020 versus the Q4 rate to be much less pronounced.” That led Facebook’s share price to spike from around $191 to around $198.

However, Facebook will maintain its aggressive hiring to moderate content. While the company has touted how artificial intelligence would increasingly help, Zuckerberg said that hiring would continue because “There’s just so much content. We do need a lot of people.”

Regarding Libra’s regulatory pushback, Zuckerberg explained that Facebook was already diversified in commerce if that doesn’t work out, citing WhatsApp Payments, Facebook Marketplace and Instagram shopping.

On antitrust concerns, Zuckerberg reminded analysts that Instagram’s success wasn’t assured when Facebook acquired it, and it has survived a lot of competition thanks to Facebook’s contributions. In a new talking point we’re likely to hear more of, Zuckerberg noted that other competitors had used their success in one vertical to push others, saying “Apple and Google built cameras and private photo sharing and photo management directly into their operating systems.”

Scandals continue, but so does growth

Overall, it was another rough quarter for Facebook’s public perception as it dealt with outages and struggled to get buy-in from regulators for its Libra cryptocurrency project. Former co-founder Chris Hughes (who I’ll be leading a talk with at SXSW) campaigned for the social network to be broken up — a position echoed by Elizabeth Warren and other presidential candidates.

The company did spin up some new revenue sources, including taking a 30% cut of fan patronage subscriptions to content creators. It’s also trying to sell video subscriptions for publishers, and it upped the price of its Workplace collaboration suite. But gains were likely offset as the company continued to rapidly hire to address abusive content on its platform, which saw headcount grow 28% year-over-year, to 43,000. There are still problems with how it treats content moderators, and Facebook has had to repeatedly remove coordinated misinformation campaigns from abroad. Appearing concerned about its waning brand, Facebook moved to add “from Facebook” to the names of Instagram and WhatsApp.

It escaped with just a $5 billion fine as part of its FTC settlement that some consider a slap on the wrist, especially since it won’t have to significantly alter its business model. But the company will have to continue to invest and divert product resources to meet its new privacy, security and transparency requirements. These could slow its response to a growing threat: Chinese tech giant ByteDance’s TikTok.

Facebook is failing to prevent another human rights tragedy playing out on its platform, report warns

A report by campaign group Avaaz examining how Facebook’s platform is being used to spread hate speech in the Assam region of North East India suggests the company is once again failing to prevent its platform from being turned into a weapon to fuel ethnic violence.

Assam has a long-standing Muslim minority population but ethnic minorities in the state look increasingly vulnerable after India’s Hindu nationalist government pushed forward with a National Register of Citizens (NRC), which has resulted in the exclusion from that list of nearly 1.9 million people — mostly Muslims — putting them at risk of statelessness.

In July the United Nations expressed grave concern over the NRC process, saying there’s a risk of arbitrary expulsion and detention, with those excluded being referred to Foreigners’ Tribunals where they have to prove they are not “irregular”.

At the same time, the UN warned of the rise of hate speech in Assam being spread via social media — saying this is contributing to increasing instability and uncertainty for millions in the region. “This process may exacerbate the xenophobic climate while fuelling religious intolerance and discrimination in the country,” it wrote.

There’s an awful sense of déjà vu about these warnings. In March 2018 the UN criticized Facebook for failing to prevent its platform being used to fuel ethnic violence against the Rohingya people in the neighboring country of Myanmar — saying the service had played a “determining role” in that crisis.

Facebook’s response to devastating criticism from the UN looks like wafer-thin crisis PR to paper over the ethical cracks in its ad business, given the same sorts of alarm bells are being sounded again, just over a year later. (If we measure the company by the lofty goals it attached to a director of human rights policy job last year — when Facebook wrote that the responsibilities included “conflict prevention” and “peace-building” — it’s surely been an abject failure.)

Avaaz’s report on hate speech in Assam takes direct aim at Facebook’s platform, saying it’s being used as a conduit for whipping up anti-Muslim hatred.

In the report, entitled Megaphone for Hate: Disinformation and Hate Speech on Facebook During Assam’s Citizenship Count, the group says it analysed 800 Facebook posts and comments relating to Assam and the NRC, using keywords from the immigration discourse in Assamese, assessing them against the three tiers of prohibited hate speech set out in Facebook’s Community Standards.

Avaaz found that at least 26.5% of the posts and comments constituted hate speech. These posts had been shared on Facebook more than 99,650 times — adding up to at least 5.4 million views for violent hate speech targeting religious and ethnic minorities, according to its analysis.

Bengali Muslims are a particular target on Facebook in Assam, per the report, which found comments referring to them as “criminals,” “rapists,” “terrorists,” “pigs,” and “dogs”, among other dehumanizing terms.

In further disturbing comments there were calls for people to “poison” daughters, and legalise female foeticide, as well as several posts urging “Indian” women to be protected from “rape-obsessed foreigners”.

Avaaz suggests its findings are just a drop in the ocean of hate speech that it says is drowning Assam via Facebook and other social media. But it accuses Facebook directly of failing to provide adequate human resources to police hate speech spread on its dominant platform.

Commenting in a statement, Alaphia Zoyab, senior campaigner, said: “Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam, many of whom could be made stateless within months. Despite the clear and present danger faced by these people, Facebook is refusing to dedicate the resources required to keep them safe. Through its inaction, Facebook is complicit in the persecution of some of the world’s most vulnerable people.”

Its key complaint is that Facebook continues to rely on AI to detect hate speech which has not been reported to it by human users — using its limited pool of human content moderators to review pre-flagged content rather than to proactively detect it.

Facebook founder Mark Zuckerberg has previously said AI has a very long way to go to reliably detect hate speech. Indeed, he’s suggested it may never be able to do that.

In April 2018 he told US lawmakers it might take five to ten years to develop “AI tools that can get into some of the linguistic nuances of different types of content to be more accurate, to be flagging things to our systems”, while admitting: “Today we’re just not there on that.”

That amounts to an admission that in regions such as Assam — where inter-ethnic tensions are being whipped up in a politically charged atmosphere that’s also encouraging violence — Facebook is essentially asleep on the job. The job of enforcing its own ‘Community Standards’ and preventing its platform being weaponized to amplify hate and harass the vulnerable, to be clear.

Avaaz says it flagged 213 of “the clearest examples” of hate speech which it found directly to Facebook — including posts from an elected official and pages of a member of an Assamese rebel group banned by the Indian Government. The company removed 96 of these posts following its report.

It argues there are similarities between the type of hate speech being directed at ethnic minorities in Assam via Facebook and that which targeted Rohingya people in Myanmar, also on Facebook, while noting that the context is different. But it did also find hateful content on Facebook targeting Rohingya people in India.

It is calling on Facebook to do more to protect vulnerable minorities in Assam, arguing it should not rely solely on automated tools for detecting hate speech — and should instead apply a “human-led ‘zero tolerance’ policy” against hate speech, starting by beefing up moderators’ expertise in local languages.

It also recommends Facebook launch an early warning system within its Strategic Response team, again based on human content moderation — and do so for all regions where the UN has warned of the rise of hate speech on social media.

“This system should act preventatively to avert human rights crises, not just reactively to respond to offline harm that has already occurred,” it writes.

Other recommendations include that Facebook should correct the record on false news and disinformation by notifying and providing corrections from fact-checkers to each and every user who has seen content deemed to have been false or purposefully misleading, including if the disinformation came from a politician; that it should be transparent about all page and post takedowns by publishing its rationale on the Facebook Newsroom, so that the issue of hate speech is given prominence and publicity proportionate to the size of the problem on Facebook; and that it should agree to an independent audit of hate speech and human rights on its platform in India.

“Facebook has signed up to comply with the UN Guiding Principles on Business and Human Rights,” Avaaz notes. “Which require it to conduct human rights due diligence such as identifying its impact on vulnerable groups like women, children, linguistic, ethnic and religious minorities and others, particularly when deploying AI tools to identify hate speech, and take steps to subsequently avoid or mitigate such harm.”

We reached out to Facebook with a series of questions about Avaaz’s report and also how it has progressed its approach to policing inter-ethnic hate speech since the Myanmar crisis — including asking for details of the number of people it employs to monitor content in the region.

Facebook did not provide responses to our specific questions. It just said it does have content reviewers who are Assamese and who review content in the language, as well as reviewers who have knowledge of the majority of official languages in India, including Assamese, Hindi, Tamil, Telugu, Kannada, Punjabi, Urdu, Bengali and Marathi.

In 2017 India overtook the US as the country with the largest “potential audience” for Facebook ads, with 241M active users, per figures it reports to advertisers.

Facebook also sent us this statement, attributed to a spokesperson:

We want Facebook to be a safe place for all people to connect and express themselves, and we seek to protect the rights of minorities and marginalized communities around the world, including in India. We have clear rules against hate speech, which we define as attacks against people on the basis of things like caste, nationality, ethnicity and religion, and which reflect input we received from experts in India. We take this extremely seriously and remove content that violates these policies as soon as we become aware of it. To do this we have invested in dedicated content reviewers, who have local language expertise and an understanding of India’s longstanding historical and social tensions. We’ve also made significant progress in proactively detecting hate speech on our services, which helps us get to potentially harmful content faster.

But these tools aren’t perfect yet, and reports from our community are still extremely important. That’s why we’re so grateful to Avaaz for sharing their findings with us. We have carefully reviewed the content they’ve flagged, and removed everything that violated our policies. We will continue to work to prevent the spread of hate speech on our services, both in India and around the world.

Facebook did not tell us exactly how many people it employs to police content for an Indian state with a population of more than 30 million people.

Globally the company maintains it has around 35,000 people working on trust and safety, less than half of whom (~15,000) are dedicated content reviewers. But with such a tiny content reviewer workforce for a global platform with 2.2BN+ users posting night and day all around the world, there’s no plausible way for it to stay on top of its hate speech problem.

Certainly not in every market it operates in. Which is why Facebook leans so heavily on AI — shrinking the cost to its business but piling content-related risk onto everyone else.

Facebook claims its automated tools for detecting hate speech have got better, saying that in Q1 this year it increased the proactive detection rate for hate speech to 65.4% — up from 58.8% in Q4 2017 and 38% in Q2 2017.

However it also says it only removed 4 million pieces of hate speech globally in Q1. Which sounds incredibly tiny vs the size of Facebook’s platform and the volume of content that will be generated daily by its millions and millions of active users.

Without tools for independent researchers to query the substance and spread of content on Facebook’s platform it’s simply not possible to know how many pieces of hate speech are going undetected. But — to be clear — this unregulated company still gets to mark its own homework. 

In just one example of how Facebook is able to shrink perception of the volume of problematic content it’s fencing: of the 213 pieces of content related to Assam and the NRC that Avaaz judged to be hate speech and reported to Facebook, the company removed fewer than half (96).

Yet Facebook also told us it takes down all content that violates its community standards — suggesting it is applying a far more diluted definition of hate speech than Avaaz. Unsurprising for a US company whose nascent crisis PR content review board‘s charter includes the phrase “free expression is paramount”. But for a company that also claims to want to prevent conflict and peace-build it’s rather conflicted, to say the least. 

As things stand, Facebook’s self-reported hate speech performance metrics are meaningless. It’s impossible for anyone outside the company to quantify or benchmark platform data. Because no one except Facebook has the full picture — and it’s not opening its platform for ethical audit. Even as the impacts of harmful, hateful stuff spread on Facebook continue to bleed out and damage lives around the world. 

Facebook staff demand Zuckerberg limit lies in political ads

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods, or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by the New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in a TechCrunch opinion piece on October 13th calling on Facebook to ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, they should be fact checked and/or use an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.


The Facebook CEO, Mark Zuckerberg, testified before the House Financial Services Committee on Wednesday October 23, 2019 Washington, D.C. (Photo by Aurora Samperio/NurPhoto via Getty Images)

Over 250 employees of Facebook’s 35,000 staffers have signed the letter, which declares “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.

Facebook’s response to the letter was “We remain committed to not censoring political speech, and will continue exploring additional steps we can take to bring increased transparency to political ads.” But that straw-mans the letter’s request. Employees aren’t asking for politicians to be kicked off Facebook or have their posts/ads deleted. They’re asking for warning labels and limits on paid reach. That’s not censorship.


Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers including Representative Alexandria Ocasio-Cortez (D-NY). She left him tongue-tied during a congressional testimony when she asked exactly what kinds of misinfo were allowed in ads.

But then on Friday Facebook blocked an ad designed to test its limits by claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates until it has the resources to scan more. Ads fact-checked as false should receive an interstitial warning blocking their content rather than just a “false” label. That could be paired with giving political ads a bigger disclaimer without making them look too prominent overall, and only allowing targeting by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could even the playing field, and broad silence periods, especially during voting periods, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which acts as a supreme court for moderation decisions and policies.


Zuckerberg’s core argument for the policy is that over time, history bends towards more speech, not censorship. But that succumbs to a utopian fallacy that assumes technology evenly advantages the honest and dishonest. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and employees have put forward, there are progressive approaches to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via the New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.

Mark Zuckerberg makes the case for Facebook News

While Facebook CEO Mark Zuckerberg seemed cheerful and even jokey when he took the stage today in front of journalists and media executives (at one point, he described the event as “by far the best thing” he’d done this week), he acknowledged that there are reasons for the news industry to be skeptical.

Facebook, after all, has been one of the main forces creating a difficult economic reality for the industry over the past decade. And there are plenty of people (including our own Josh Constine) who think it would be foolish for publishers to trust the company again.

For one thing, there’s the question of how Facebook’s algorithm prioritizes different types of content, and how changes to the algorithm can be enormously damaging to publishers.

“We can do a better job of working with partners to have more transparency and also lead time about what we see in the pipeline,” Zuckerberg said, adding, “I think stability is a big theme.” So Facebook might be trying something out as an “experiment,” but “if it kind of just causes a spike, it can be hard for your business to plan for that.”

At the same time, Zuckerberg argued that Facebook’s algorithms are “one of the least understood things about what we do.” Specifically, he noted that many people accuse the company of simply optimizing the feed to keep users on the service for as long as possible.

“That’s actually not true,” he said. “For many years now, I’ve prohibited any of our feed teams … from optimizing the systems to encourage the maximum amount of time to be spent. We actually optimize the system for facilitating as many meaningful interactions as possible.”

For example, he said that when Facebook changed the algorithm to prioritize friends and family content over other types of content (like news), it effectively eliminated 50 million hours of viral video viewing each day. After the company reported its subsequent earnings, Facebook had the biggest drop in market capitalization in U.S. history.

Zuckerberg was onstage in New York with News Corp CEO Robert Thomson to discuss the launch of Facebook News, a new tab within the larger Facebook product that’s focused entirely on news. Thomson began the conversation with a simple question: “What took you so long?”

The Facebook CEO took this in stride, responding that the question was “one of the nicest things he could have said — that actually means he thinks we did something good.”

Zuckerberg went on to suggest that the company has had a long interest in supporting journalism (“I just think that every internet platform has a responsibility to try to fund and form partnerships to help news”), but that its efforts were initially focused on the News Feed, where the “fundamental architecture” made it hard to find much room for news stories — particularly when most users are more interested in that content from friends and family.

So Facebook News could serve as a more natural home for this news (to be clear, the company says news content will continue to appear in the main feed as well). Zuckerberg also said that since past experiments have created such “thrash in the ecosystem,” Facebook wanted to make sure it got this right before launching it.

In particular, he said the company needed to show that tabs within Facebook, like Facebook Marketplace and Facebook Watch, could attract a meaningful audience. Zuckerberg acknowledged that the majority of Facebook users aren’t interested in these other tabs, but when you’ve got such an enormous user base, even a small percentage can be meaningful.

“I think we can probably get to maybe 20 or 30 million people [visiting Facebook News] over a few years,” he said. “That by itself would be very meaningful.”

Facebook is also paying some of the publishers who are participating in Facebook News. Zuckerberg described this as “the first time we’re forming long-term, stable relationships and partnerships with a lot of publishers.”

Several journalists asked for more details about how Facebook decided which publishers to pay, and how much to pay them. Zuckerberg said it’s based on a number of factors, like ensuring a wide range of content in Facebook News, including from publishers who hadn’t been publishing much on the site previously. The company also had to compensate publishers who are taking some of their content out from behind their paywalls.

“This is not an exact formula — maybe we’ll get to that over time — but it’s all within a band,” he said.

Zuckerberg was also asked about how Facebook will deal with accuracy and quality, particularly given the recent controversy over its unwillingness to fact check political ads.

He sidestepped the political ads question, arguing that it’s unrelated to the day’s topics, then said, “This is a different kind of thing.” In other words, he argued that the company has much more leeway here to determine what is and isn’t included — both by requiring any participating publishers to abide by Facebook’s publisher guidelines, and by hiring a team of journalists to curate the headlines that show up in the Top Stories section.

“People have a different expectation in a space dedicated to high-quality news than they do in a space where the goal is to make sure everyone can have a voice and can share their opinion,” he said.

As for whether Facebook News will include negative stories about Facebook, Zuckerberg seemed delighted to learn that Bloomberg (mostly) doesn’t cover Bloomberg.

“I didn’t know that was a thing a person could do,” he joked. More seriously, he said, “For better or worse, we’re a prominent part of a lot of the news cycles. I don’t think it would be reasonable to try to have a news tab that didn’t cover the stuff that Facebook is doing. In order to make this a trusted source over time, they have to be covered objectively.”

British parliament presses Facebook on letting politicians lie in ads

In yet another letter seeking to pry accountability from Facebook, the chair of a British parliamentary committee has pressed the company over its decision to adopt a policy on political ads that supports flagrant lying.

In the letter Damian Collins, chair of the DCMS committee, asks the company to explain why it recently took the decision to change its policy regarding political ads — “given the heavy constraint this will place on Facebook’s ability to combat online disinformation in the run-up to elections around the world”.

“The change in policy will absolve Facebook from the responsibility of identifying and tackling the widespread content of bad actors, such as Russia’s Internet Research Agency,” he warns, before going on to cite a recent tweet by the former chief of Facebook’s global efforts around political ads transparency and election integrity, who has claimed that senior management ignored calls from lower down for ads to be scanned for misinformation.

“I also note that Facebook’s former head of global elections integrity ops, Yael Eisenstat, has described how, when she advocated for the scanning of adverts to detect misinformation efforts, she faced opposition from upper management despite engineers’ enthusiasm,” writes Collins.

In a further question, Collins asks what specific proposals Eisenstat’s team made, to what extent Facebook determined them to be feasible, and on what grounds they were not progressed.

He also asks what plans Facebook has to formalize a working relationship with fact-checkers over the long run.

A Facebook spokesperson declined to comment on the DCMS letter, saying the company would respond in due course.

In a naked display of its platform’s power and political muscle, Facebook deployed a former politician to endorse its ‘fake ads are fine’ position last month — when head of global policy and communications, Nick Clegg, who used to be the deputy prime minister of the UK, said: “We do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.”

So, in other words, if you’re a politician you get a green light to run lying ads on Facebook.

Clegg was giving a speech on the company’s plans to prevent interference in the 2020 US presidential election. The only line he said Facebook would be willing to draw was if a politician’s speech “can lead to real world violence and harm”. But from a company that abjectly failed to prevent its platform from being misappropriated to accelerate genocide in Myanmar, that’s the opposite of reassuring.

“At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves,” said Clegg. “We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.”

In truth Facebook roundly fails to protect its platform from outside interference too. Inauthentic behavior and fake content are a ceaseless firefight that Facebook is nowhere close to being on top of, let alone winning. But on political ads it’s not even going to try — giving politicians around the world carte blanche to use outrage-fuelling disinformation and racist dogwhistles as a low-budget, broad-reach campaign strategy.

We’ve seen this before on Facebook of course, during the UK’s Brexit referendum — when scores of dark ads sought to whip up anti-immigrant sentiment and drive a wedge between voters and the European Union.

And indeed Collins’ crusade against Facebook as a conduit for disinformation began in the wake of that 2016 EU referendum.

Since then the company has faced major political scrutiny over how it accelerates disinformation — and has responded by creating a degree of transparency on political ads, launching an archive where this type of advert can be searched. But that appears as far as Facebook is willing to go on tackling the malicious propaganda problem its platform accelerates.

In the US, senator Elizabeth Warren has been duking it out publicly with Facebook on the same point as Collins rather more directly — by running ads on Facebook saying it’s endorsing Trump by supporting his lies.

There’s no sign of Facebook backing down, though. On the contrary: a recent leak from an internal meeting saw founder Mark Zuckerberg attacking Warren as an “existential” threat to the company. Meanwhile, this week, Bloomberg reports that Facebook’s chief executive has been quietly advising a Warren rival for the Democratic nomination, Pete Buttigieg, on campaign hires.

So a company that hires politicians to senior roles, advises high profile politicians on election campaigns, tweaks its policy on political ads after a closed door meeting with the current holder of the office of US president, Donald Trump, and ignores internal calls to robustly police political ads, is rapidly sloughing off any residual claims to be ‘just a technology company’. (Though, really, we knew that already.)

In the letter Collins also presses Facebook on its plan to roll out end-to-end encryption across its messaging app suite, asking why it can’t limit the tech to WhatsApp only — something the UK government has also been pressing it on this month.

He also raises questions about Facebook’s access to metadata — asking whether it will use inferences gleaned from the who, when and where of e2e encrypted comms (even though it can’t access the what) to target users with ads.

Facebook’s self-proclaimed ‘pivot to privacy‘ — when it announced earlier this year a plan to unify its separate messaging platforms onto a single e2e encrypted backend — has been widely interpreted as an attempt to make it harder for antitrust regulators to break up its business empire, as well as a strategy to shirk responsibility for content moderation by shielding itself from much of the substance that flows across its platform while retaining access to richer cross-platform metadata so it can continue to target users with ads…

Daily Crunch: Zuckerberg has thoughts on free speech

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Zuckerberg on Chinese censorship: Is that the internet we want?

The Facebook CEO spoke yesterday at Georgetown University, sharing his thoughts on speech and “how we might address the challenges that more voice and the internet introduce, and the major threats to free expression around the world.”

Among his arguments: China is exporting its social values, political ads are an important part of free expression and the definition of dangerous speech must be kept in check.

2. Atlassian acquires Code Barrel, makers of Automation for Jira

Sydney-based Code Barrel was founded by two of the first engineers who built Jira at Atlassian, Nick Menere and Andreas Knecht. With this acquisition, they are returning to Atlassian after four years in startup land.

3. Swarm gets green light from FCC for its 150-satellite constellation

Swarm Technologies aims to connect smart devices around the world with a low-bandwidth but ever-present network provided by satellites — and it just got approval from the FCC to do so. Apparently the agency is no longer worried that Swarm’s sandwich-sized satellites are too small to be tracked.

4. Nintendo Switch hits another sales milestone

Nintendo’s North American Switch unit sales have already surpassed the lifetime worldwide unit sales of the Wii U. The company announced Thursday that it had sold 15 million units of the popular handheld console in North America.

5. HBO Max scores all 21 Studio Ghibli films

WarnerMedia has been on a shopping spree for its HBO Max service. It bought the rights to “Friends” and “The Big Bang Theory,” and now it’s using its outsized checkbook to bring beloved Japanese animation group Studio Ghibli’s films onto the web exclusively on its platform for U.S. subscribers.

6. Volvo creates a dedicated business for autonomous industrial and commercial transport

The vehicle-maker has already been active in putting autonomous technology to work in various industries, with self-driving projects at quarries and mines, and in the busy port located at Gothenburg, Sweden.

7. How Unity built the world’s most popular game engine

Unity’s growth is a case study of Clayton Christensen’s theory of disruptive innovation. While other game engines targeted the big AAA game makers at the top of the console and PC markets, Unity went after independent developers with a less robust product that was better suited to their needs and budget. (Extra Crunch membership required.)

An interview with Dr. Stuart Russell, author of “Human Compatible: Artificial Intelligence and the Problem of Control”

(UC Berkeley’s Dr. Stuart Russell’s new book, “Human Compatible: Artificial Intelligence and the Problem of Control,” goes on sale Oct. 8. I’ve written a review, “‘Human Compatible’ is a provocative prescription to re-think AI before it’s too late,” and the following is an interview I conducted with Dr. Russell in his UC Berkeley office on September 3, 2019.)

Ned Desmond: Why did you write Human Compatible?

Dr. Russell: I’ve been thinking about this problem – what if we succeed with AI? – on and off since the early 90s. The more I thought about it, the more I saw that the path we were on doesn’t end well.

(AI researchers) had mostly just been doing toy stuff in the lab, or games, none of which represented any threat to anyone. It’s a little like a physicist playing with tiny bits of uranium. Nothing happens, right? So we’ll just make more of it, and everything will be fine. But it just doesn’t work that way. When you start crossing over to systems that are more intelligent, operating on a global scale, and having real-world impact, like trading algorithms, for example, or social media content selection, then all of a sudden you are having a big impact on the real world, and it’s hard to control. It’s hard to undo. And that’s just going to get worse and worse and worse.

(Photo: Stuart Russell, author of “Human Compatible.” Credit: Peg Skorpinski)

Desmond: Who should read Human Compatible?

Dr. Russell: I think everyone, because everyone is going to be affected by this. As progress occurs towards human-level (AI), each big step is going to magnify the impact by another factor of 10, or another factor of 100. Everyone’s life is going to be radically affected by this. People need to understand it. More specifically, it would be policymakers, the people who run the large companies like Google and Amazon, and people in AI and related disciplines, like control theory, cognitive science and so on.

My basic view was that so much of this debate is going on without any understanding of what AI is. It’s just this magic potion that will make things intelligent. And in these debates, people don’t understand the building blocks, how it fits together, how it works, how you make an intelligent system. So chapter two (of Human Compatible) was sort of mammoth, and some people said, “Oh, this is too much to get through,” and others said, “No, you absolutely have to keep it.” So I compromised and put the pedagogical stuff in the appendices.

Desmond: Why did computer scientists tend to overlook the issue of uncertainty in the objective function for AI systems?

Dr. Russell: Funnily enough, in AI, we took uncertainty (in the decision-making function) to heart starting in the 80s. Before that, most AI people said let’s just work on cases where we have definite knowledge, and we can come up with guaranteed plans.