Facebook makes another push to shape and define its own oversight

Facebook’s head of global spin and policy, former UK deputy prime minister Nick Clegg, will give a speech later today providing more detail of the company’s plan to set up an ‘independent’ external oversight board to which people can appeal content decisions so that Facebook itself is not the sole entity making such decisions.

In the speech in Berlin, Clegg will apparently admit to Facebook having made mistakes. Though it would be pretty awkward if he came on stage claiming Facebook is flawless and that humanity needs to take a really long, hard look at itself.

“I don’t think it’s in any way conceivable, and I don’t think it’s right, for private companies to set the rules of the road for something which is as profoundly important as how technology serves society,” Clegg told BBC Radio 4’s Today program this morning, discussing his talking points ahead of the speech. “In the end this is not something that big tech companies… can or should do on their own.

“I want to see… companies like Facebook play an increasingly mature role — not shunning regulation but advocating it in a sensible way.”

The idea of creating an oversight board for content moderation and appeals was previously floated by Facebook founder Mark Zuckerberg. Though it raises far more questions than it resolves — not least how a board whose existence depends on the underlying commercial platform it is supposed to oversee can possibly be independent of that selfsame mothership; how board appointees will be selected and recompensed; and who will choose the mix of individuals to ensure the board reflects the full-spectrum diversity of the 2BN+ people now using Facebook’s global platform?

None of these questions were raised, let alone addressed, in this morning’s BBC Radio 4 interview with Clegg.

Asked by the interviewer whether Facebook will hand control of “some of these difficult decisions” to an outside body, Clegg said: “Absolutely. That’s exactly what it means. At the end of the day there is something quite uncomfortable about a private company making all these ethical adjudications on whether this bit of content stays up or this bit of content gets taken down.

“And in the really pivotal, difficult issues what we’re going to do — it’s analogous to a court — we’re setting up an independent oversight board where users and indeed Facebook will be able to refer to that board and say well what would you do? Would you take it down or keep it up? And then we will commit, right at the outset, to abide by whatever rulings that board makes.”

Speaking shortly afterwards on the same radio program, Damian Collins, who chairs a UK parliamentary committee that has called for Facebook to be investigated by the UK’s privacy and competition regulators, suggested the company is seeking to use self-serving self-regulation to evade wider responsibility for the problems its platform creates — arguing that what’s really needed are state-set broadcast-style regulations overseen by external bodies with statutory powers.

“They’re trying to pass on the responsibility,” he said of Facebook’s oversight board. “What they’re saying to parliaments and governments is well you make things illegal and we’ll obey your laws but other than that don’t expect us to exercise any judgement about how people use our services.

“We need a level of regulation beyond that as well. Ultimately we need — just as we have in broadcasting — statutory regulation based on principles that we set, and an investigatory regulator that’s got the power to go in and investigate. Under this board that Facebook is going to set up, this will still largely be dependent on Facebook agreeing what data and information it shares, setting the parameters for investigations. Whereas we need external bodies with statutory powers to be able to do this.”

Clegg’s speech later today is also slated to spin the idea that Facebook is suffering unfairly from a wider “techlash”.

Asked about that during the interview, the Facebook PR seized the opportunity to argue that if Western society imposes overly stringent regulations on platforms and their use of personal data there’s a risk of “throw[ing] the baby out with the bathwater”, with Clegg smoothly reaching for the usual big tech talking points — claiming innovation would be “almost impossible” without enough of a data free-for-all, and that the West risks being dominated by China, rather than friendly US giants.

By that logic we’re in a rights race to the bottom — thanks to the proliferation of technology-enabled global surveillance infrastructure, such as the one operated by Facebook’s business.

Clegg tried to pass all that off as merely ‘communications as usual’, making no reference to the scale of the pervasive personal data capture that Facebook’s business model depends upon, and instead arguing its business should be regulated in the same way society regulates “other forms of communication”. Funnily enough, though, your phone isn’t designed to record what you say the moment you plug it in…

“People plot crimes on telephones, they exchange emails that are designed to hurt people. If you hold up any mirror to humanity you will always see everything that is both beautiful and grotesque about human nature,” Clegg argued, seeking to manage expectations vis-a-vis what regulating Facebook should mean. “Our job — and this is where Facebook has a heavy responsibility and where we have to work in partnership with governments — is to minimize the bad and to maximize the good.”

He also said Facebook supports “new rules of the road” to ensure a “level playing field” for regulations related to privacy; election rules; the boundaries of hate speech vs free speech; and data portability — making a push to flatten regulatory variation which is often, of course, based on societal, cultural and historical differences, as well as reflecting regional democratic priorities.

It’s not at all clear how any of that nuance would or could be factored into Facebook’s preferred universal global ‘moral’ code — which, via Clegg (a former European politician), it is here leaning on regional governments to accept.

Instead of societies setting the rules they choose for platforms like Facebook, Facebook’s lobbying muscle is being flexed to make the case for a single generalized set of ‘standards’ which won’t overly get in the way of how it monetizes people’s data.

And if we don’t agree to its ‘Western’ style surveillance, the threat is we’ll be at the mercy of even lower Chinese standards…

“You’ve got this battle really for tech dominance between the United States and China,” said Clegg, reheating Zuckerberg’s Senate pitch from last year, when the Facebook founder urged a trade-off of privacy rights to allow Western companies to process people’s facial biometrics so as not to fall behind China. “In China there’s no compunction about how data is used, there’s no worry about privacy legislation, data protection and so on — we should not emulate what the Chinese are doing but we should keep our ability in Europe and North America to innovate and to use data proportionately and innovat[iv]ely.

“Otherwise if we deprive ourselves of that ability I can predict that within a relatively short period of time we will have tech domination from a country with wholly different sets of values to those that are shared in this country and elsewhere.”

What’s rather more likely is the emergence of discrete Internets where regions set their own standards — and indeed we’re already seeing signs of splinternets emerging.

Clegg even briefly brought this up — though it’s not clear why Europeans should fear the emergence of a regional digital ecosystem that bakes respect for human rights into digital technologies (a point he avoided entirely).

With European privacy rules also now setting global standards by influencing policy discussions elsewhere — including the US — Facebook’s nightmare is that higher standards than it wants to offer Internet users will become the new Western norm.

Collins made short work of Clegg’s techlash point, pointing out that if Facebook wants to win back users’ and society’s trust it should stop acting like it has everything to hide and actually accept public scrutiny.

“They’ve done this to themselves,” he said. “If they want redemption, if they want to try and wipe the slate clean for Mark Zuckerberg, he should open himself up more. He should be prepared to answer more questions publicly about the data that they gather, whether other companies like Cambridge Analytica had access to it, the nature of the problem of disinformation on the platform. Instead they are incredibly defensive, incredibly secretive a lot of the time. And it arouses suspicion.

“I think people were quite surprised to discover the lengths to which people go to gather data about us — even people who don’t even use Facebook. And that’s what’s made them suspicious. So they have to put their own house in order if they want to end this.”

Last year Collins’ DCMS committee repeatedly asked Zuckerberg to testify to its inquiry into online disinformation — and was repeatedly snubbed…

Collins also debunked an attempt by Clegg to claim there’s no evidence of any Russian meddling on Facebook’s platform targeting the UK’s 2016 EU referendum — pointing out that Facebook previously admitted to a small amount of Russian ad spending that did target the referendum, before making the wider point that it’s very difficult for anyone outside Facebook to know how its platform gets used or misused; ads are just the tip of the political disinformation iceberg.

“It’s very difficult to investigate externally, because of the key factors — like the use of tools like groups on Facebook, and the use of inauthentic fake accounts boosting Russian content. There have been studies showing that’s still going on, and was going on during the [EU] parliamentary elections, but there’s been no proper audit done during the referendum. And in fact when we first went to Facebook and said there’s evidence of what was going on in America in 2016 — did this happen during the referendum as well? — they said to us, well, we won’t look unless you can prove it happened,” he said.

“There’s certainly evidence of suspicious Russian activity during the referendum and elsewhere,” Collins added.

We asked Facebook for Clegg’s talking points for today’s speech but the company declined to share more detail ahead of time.

Facebook urged to give users greater control over what they see

Academics at the universities of Oxford and Stanford think Facebook should give users greater transparency and control over the content they see on its platform.

They also believe the social networking giant should radically reform its governance structures and processes to throw more light on content decisions, including by looping in more external experts to steer policy.

Such changes are needed to address widespread concerns about Facebook’s impact on democracy and on free speech, they argue in a report published today which includes a series of recommendations for reforming Facebook (entitled Glasnost! Nine Ways Facebook Can Make Itself a Better Forum for Free Speech and Democracy).

“There is a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities as well as international human rights norms,” writes lead author Timothy Garton Ash.

“Executive decisions made by Facebook have major political, social, and cultural consequences around the world. A single small change to the News Feed algorithm, or to content policy, can have an impact that is both faster and wider than that of any single piece of national (or even EU-wide) legislation.”

Here’s a rundown of the report’s nine recommendations:

  1. Tighten Community Standards wording on hate speech — the academics argue that Facebook’s current wording in key areas is “overbroad, leading to erratic, inconsistent and often context-insensitive takedowns,” and also generates “a high proportion of contested cases.” Clearer and tighter wording could make consistent implementation easier, they believe.
  2. Hire more and contextually expert content reviewers — “the issue is quality as well as quantity,” the report points out, pressing Facebook to hire more human content reviewers plus a layer of senior reviewers with “relevant cultural and political expertise;” and also to engage more with trusted external sources such as NGOs. “It remains clear that AI will not resolve the issues with the deeply context-dependent judgements that need to be made in determining when, for example, hate speech becomes dangerous speech,” they write.
  3. Increase “decisional transparency” — Facebook still does not offer adequate transparency around content moderation policies and practices, they suggest, arguing it needs to publish more detail on its procedures; the report specifically calls for the company to “post and widely publicize case studies” to provide users with more guidance and with potential grounds for appeals.
  4. Expand and improve the appeals process — also on appeals, the report recommends Facebook give reviewers much more context around disputed pieces of content, and also provide appeals statistics data to analysts and users. “Under the current regime, the initial internal reviewer has very limited information about the individual who posted a piece of content, despite the importance of context for adjudicating appeals,” they write. “A Holocaust image has a very different significance when posted by a Holocaust survivor or by a Neo-Nazi.” They also suggest Facebook should work, in dialogue with users, on developing an appeals due process that is “more functional and usable for the average user” — such as with the help of a content policy advisory group.
  5. Provide meaningful News Feed controls for users — the report suggests Facebook users should have more meaningful controls over what they see in the News Feed, with the authors dubbing current controls “altogether inadequate” and advocating for far more — such as the ability to switch off the algorithmic feed entirely (without the view defaulting back to the algorithmic feed when the user reloads, as is currently the case for anyone who switches away from the AI-controlled view). The report also suggests adding a News Feed analytics feature, to give users a breakdown of the sources they’re seeing and how that compares with control groups of other users. Facebook could also offer a button to let users adopt a different perspective by exposing them to content they don’t usually see, they suggest.
  6. Expand context and fact-checking facilities — the report pushes for “significant” resources to be ploughed into identifying “the best, most authoritative, and trusted sources” of contextual information for each country, region and culture — to help feed Facebook’s existing (but still inadequate and not universally distributed) fact-checking efforts.
  7. Establish regular auditing mechanisms — there have been some civil rights audits of Facebook’s processes (such as this one, which suggested Facebook formalize a human rights strategy), but the report urges the company to open itself up to more of these, suggesting the model of meaningful audits should be replicated and extended to other areas of public concern, including privacy, algorithmic fairness and bias, diversity and more.
  8. Create an external content policy advisory group — key content stakeholders from civil society, academia and journalism should be enlisted by Facebook for an expert policy advisory group to provide ongoing feedback on its content standards and implementation, as well as to review its appeals record. “Creating a body that has credibility with the extraordinarily wide geographical, cultural, and political range of Facebook users would be a major challenge, but a carefully chosen, formalized, expert advisory group would be a first step,” they write, noting that Facebook has begun moving in this direction but adding: “These efforts should be formalized and expanded in a transparent manner.”
  9. Establish an external appeals body — the report also urges “independent, external” ultimate control of Facebook’s content policy, via an appeals body that sits outside the mothership and includes representation from civil society and digital rights advocacy groups. The authors note Facebook is already flirting with this idea, citing comments made by Mark Zuckerberg last November, but also warn this needs to be done properly if power is to be “meaningfully” devolved. “Facebook should strive to make this appeals body as transparent as possible… and allow it to influence broad areas of content policy… not just rule on specific content takedowns,” they warn.

In conclusion, the report notes that the content issues it’s focused on are not only attached to Facebook’s business but apply widely across various internet platforms — hence growing interest in some form of “industry-wide self-regulatory body.” Though it suggests that achieving that kind of overarching regulation will be “a long and complex task.”

In the meantime, the academics remain convinced there is “a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities, as well as international human rights norms” — with the company front and center of the frame given its massive size (2.2 billion+ active users).

“We recognize that Facebook employees are making difficult, complex, contextual judgements every day, balancing competing interests, and not all those decisions will benefit from full transparency. But all would be better for more regular, active interchange with the worlds of academic research, investigative journalism, and civil society advocacy,” they add.

We’ve reached out to Facebook for comment on their recommendations.

The report was prepared by the Free Speech Debate project of the Dahrendorf Programme for the Study of Freedom, St. Antony’s College, Oxford, in partnership with the Reuters Institute for the Study of Journalism, University of Oxford, the Project on Democracy and the Internet, Stanford University and the Hoover Institution, Stanford University.

Last year we offered a few of our own ideas for fixing Facebook — including suggesting the company hire orders of magnitude more expert content reviewers, as well as providing greater transparency into key decisions and processes.

Singapore activist found guilty of hosting ‘illegal assembly’ via Skype

An ongoing case in Singapore is testing the legal boundaries of virtual conferences. A court in the Southeast Asian city-state this week convicted human rights activist Jolovan Wham of organizing a public assembly via Skype without a permit and refusing to sign his statement when ordered by the police.

Wham will be sentenced on January 23 and faces a fine of up to S$5,000 or a jail term of up to three years. The judge in charge of the case, however, has not provided grounds for his decision, Wham wrote on Twitter.

Wham, 39, is a social worker at Community Action Network Singapore, a group of activists, social workers and journalists advocating for civil and political rights. He previously served as executive director of the migrant worker advocacy group Humanitarian Organisation for Migration Economics.

On November 26, 2016, Wham organized an indoor forum called “Civil Disobedience and Social Movements” at a small event space inside a shopping mall in Singapore. The event featured prominent Hong Kong student activist Joshua Wong who addressed the audience remotely via a Skype video call.

The event’s Facebook Page indicates that 355 people were interested and 121 went. The Skype discussion, which lasted around two hours, was also live streamed on Facebook by The Online Citizen SG, a social media platform focused on political activism, and garnered 5,700 views.

Despite being advised by the police prior to the event to obtain a permit, Wham proceeded without one, according to a statement by the Singapore Police Force. Wham has previously faced similar charges of organizing public assemblies without police permits and of refusing to sign statements under the Penal Code.

In Singapore, it is a criminal offence under the Public Order Act to organize or participate in a public assembly without a police permit. The police described Wham as “recalcitrant” with regard to organizing and participating in illegal public assemblies.

Commenting on the charge against Wham, a joint statement from Joshua Wong and members of CAN Singapore argued that the event was “closed-door”.

“Skype conversations that take place within the confines of a private space are private matters that should logically, not require permits before they can be carried out,” raged the statement. “Wham’s discussion with Wong ended peacefully and would not have drawn any further attention if authorities hadn’t decided to act.”

“It was a discussion about civil disobedience and social movements,” Wham pointed out in another Twitter post. “The law says that any event which is open to the public, and is ‘cause related’, requires a permit when a foreigner speaks. What is considered ‘cause related’ isn’t clear.”

Twitter will tell users if content was blocked to comply with local laws or legal demands

Twitter will now display messages to inform users if blocked tweets were withheld to comply with local laws or court orders, which it calls Country Withheld Content (CWC). The public already has information about CWC through notices sent directly to affected accounts; Lumen, a database of legal requests for the removal of online content; and Twitter’s own biannual transparency reports…

Google wins ‘right to be forgotten’ battle in Japan

Google has won a long-standing battle in Japan that drew parallels with Europe’s “right to be forgotten” ruling. The Japanese Supreme Court today dismissed four cases against the U.S. company seeking the removal of allegedly defamatory comments in its Google Maps service, including one high-profile case involving a medical clinic. Back in April 2015, the Chiba District…

Twitter Forms A “Trust & Safety Council” To Balance Abuse Vs Free Speech

Twitter’s latest step in the tricky balancing act of championing free speech without also handing a free pass to orchestrated harassment via its platform is the announcement today that it’s formed a “Trust & Safety Council” comprised of multiple external organizations with various interests in the two issues. The company said today that the Twitter Trust &…