Facebook urged to give users greater control over what they see

Academics at the universities of Oxford and Stanford think Facebook should give users greater transparency and control over the content they see on its platform.

They also believe the social networking giant should radically reform its governance structures and processes to throw more light on content decisions, including by looping in more external experts to steer policy.

Such changes are needed to address widespread concerns about Facebook’s impact on democracy and on free speech, they argue in a report published today, entitled Glasnost! Nine Ways Facebook Can Make Itself a Better Forum for Free Speech and Democracy, which includes a series of recommendations for reforming the company.

“There is a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities as well as international human rights norms,” writes lead author Timothy Garton Ash.

“Executive decisions made by Facebook have major political, social, and cultural consequences around the world. A single small change to the News Feed algorithm, or to content policy, can have an impact that is both faster and wider than that of any single piece of national (or even EU-wide) legislation.”

Here’s a rundown of the report’s nine recommendations:

  1. Tighten Community Standards wording on hate speech — the academics argue that Facebook’s current wording on key areas is “overbroad, leading to erratic, inconsistent and often context-insensitive takedowns;” and also generating “a high proportion of contested cases.” Clear and tighter wording could make consistent implementation easier, they believe.
  2. Hire more and contextually expert content reviewers — “the issue is quality as well as quantity,” the report points out, pressing Facebook to hire more human content reviewers plus a layer of senior reviewers with “relevant cultural and political expertise;” and also to engage more with trusted external sources such as NGOs. “It remains clear that AI will not resolve the issues with the deeply context-dependent judgements that need to be made in determining when, for example, hate speech becomes dangerous speech,” they write.
  3. Increase “decisional transparency” — Facebook still does not offer adequate transparency around content moderation policies and practices, they suggest, arguing it needs to publish more detail on its procedures, including specifically calling for the company to “post and widely publicize case studies” to provide users with more guidance and to provide potential grounds for appeals.
  4. Expand and improve the appeals process — also on appeals, the report recommends Facebook give reviewers much more context around disputed pieces of content, and also provide appeals statistics to analysts and users. “Under the current regime, the initial internal reviewer has very limited information about the individual who posted a piece of content, despite the importance of context for adjudicating appeals,” they write. “A Holocaust image has a very different significance when posted by a Holocaust survivor or by a Neo-Nazi.” They also suggest Facebook should work on developing an appeals due process that is “more functional and usable for the average user,” in dialogue with users — such as with the help of a content policy advisory group.
  5. Provide meaningful News Feed controls for users — the report suggests Facebook users should have more meaningful controls over what they see in the News Feed, with the authors dubbing current controls “altogether inadequate” and advocating for far more, such as the ability to switch off the algorithmic feed entirely (without the chronological view defaulting back to the algorithmic one when the user reloads, as is currently the case for anyone who switches away from the AI-controlled view). The report also suggests adding a News Feed analytics feature to give users a breakdown of the sources they’re seeing and how that compares with control groups of other users. Facebook could also offer a button to let users adopt a different perspective by exposing them to content they don’t usually see, they suggest.
  6. Expand context and fact-checking facilities — the report pushes for “significant” resources to be ploughed into identifying “the best, most authoritative, and trusted sources” of contextual information for each country, region and culture — to help feed Facebook’s existing (but still inadequate and not universally distributed) fact-checking efforts.
  7. Establish regular auditing mechanisms — there have been some civil rights audits of Facebook’s processes (including one that suggested Facebook formalize a human rights strategy), but the report urges the company to open itself up to more of these, suggesting the model of meaningful audits should be replicated and extended to other areas of public concern, including privacy, algorithmic fairness and bias, diversity and more.
  8. Create an external content policy advisory group — key content stakeholders from civil society, academia and journalism should be enlisted by Facebook for an expert policy advisory group to provide ongoing feedback on its content standards and implementation, as well as to review its appeals record. “Creating a body that has credibility with the extraordinarily wide geographical, cultural, and political range of Facebook users would be a major challenge, but a carefully chosen, formalized, expert advisory group would be a first step,” they write, noting that Facebook has begun moving in this direction but adding: “These efforts should be formalized and expanded in a transparent manner.”
  9. Establish an external appeals body — the report also urges “independent, external” ultimate control of Facebook’s content policy, via an appeals body that sits outside the mothership and includes representation from civil society and digital rights advocacy groups. The authors note Facebook is already flirting with this idea, citing comments made by Mark Zuckerberg last November, but also warn this needs to be done properly if power is to be “meaningfully” devolved. “Facebook should strive to make this appeals body as transparent as possible… and allow it to influence broad areas of content policy… not just rule on specific content takedowns,” they warn.

In conclusion, the report notes that the content issues it focuses on are not unique to Facebook but apply widely across internet platforms — hence growing interest in some form of “industry-wide self-regulatory body.” Though it suggests that achieving that kind of overarching regulation will be “a long and complex task.”

In the meantime, the academics remain convinced there is “a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities, as well as international human rights norms” — with the company front and center of the frame given its massive size (2.2 billion+ active users).

“We recognize that Facebook employees are making difficult, complex, contextual judgements every day, balancing competing interests, and not all those decisions will benefit from full transparency. But all would be better for more regular, active interchange with the worlds of academic research, investigative journalism, and civil society advocacy,” they add.

We’ve reached out to Facebook for comment on their recommendations.

The report was prepared by the Free Speech Debate project of the Dahrendorf Programme for the Study of Freedom, St. Antony’s College, Oxford, in partnership with the Reuters Institute for the Study of Journalism, University of Oxford, the Project on Democracy and the Internet, Stanford University and the Hoover Institution, Stanford University.

Last year we offered a few of our own ideas for fixing Facebook — including suggesting the company hire orders of magnitude more expert content reviewers, as well as providing greater transparency into key decisions and processes.

Singapore activist found guilty of hosting ‘illegal assembly’ via Skype

An ongoing case in Singapore is testing the legal boundaries of virtual conferences. A court in the Southeast Asian city-state this week convicted human rights activist Jolovan Wham of organizing a public assembly via Skype without a permit and refusing to sign his statement when ordered by the police.

Wham will be sentenced on January 23 and faces a fine of up to S$5,000 or a jail term of up to three years. The judge in charge of the case, however, has not provided grounds for his decision, Wham wrote on Twitter.

Wham, 39, is a social worker at Community Action Network Singapore, a group of activists, social workers and journalists advocating for civil and political rights. He previously served as executive director of the migrant worker advocacy group Humanitarian Organisation for Migration Economics.

On November 26, 2016, Wham organized an indoor forum called “Civil Disobedience and Social Movements” at a small event space inside a shopping mall in Singapore. The event featured prominent Hong Kong student activist Joshua Wong who addressed the audience remotely via a Skype video call.

The event’s Facebook Page indicates that 355 people were interested and 121 went. The Skype discussion, which lasted around two hours, was also live streamed on Facebook by The Online Citizen SG, a social media platform focused on political activism, and garnered 5,700 views.

Despite being advised by the police prior to the event to obtain a permit, Wham proceeded without one, according to a statement by the Singapore Police Force. Wham has previously faced similar charges of organizing public assemblies without police permits and refusing to sign statements under the Penal Code.

In Singapore, it is a criminal offence under the Public Order Act to organize or participate in a public assembly without a police permit. The police described Wham as “recalcitrant” in organizing and participating in illegal public assemblies.

Commenting on the charge against Wham, a joint statement from Joshua Wong and members of CAN Singapore argued that the event was “closed-door”.

“Skype conversations that take place within the confines of a private space are private matters that should logically, not require permits before they can be carried out,” raged the statement. “Wham’s discussion with Wong ended peacefully and would not have drawn any further attention if authorities hadn’t decided to act.”

“It was a discussion about civil disobedience and social movements,” Wham pointed out in another Twitter post. “The law says that any event which is open to the public, and is ‘cause related’, requires a permit when a foreigner speaks. What is considered ‘cause related’ isn’t clear.”

Twitter will tell users if content was blocked to comply with local laws or legal demands

Twitter will now display messages to inform users if blocked tweets were withheld to comply with local laws or court orders, which it calls Country Withheld Content (CWC). The public already has information about CWC through notices sent directly to affected accounts, Lumen, a database of legal requests for the removal of online content, and Twitter’s own biannual transparency reports.

Google wins ‘right to be forgotten’ battle in Japan

Google has won a long-standing battle in Japan that drew parallels with Europe’s “right to be forgotten” ruling. The Japanese Supreme Court today dismissed four cases against the U.S. company seeking the removal of allegedly defamatory comments in its Google Maps service, including one high-profile case involving a medical clinic.

Twitter Forms A “Trust & Safety Council” To Balance Abuse Vs Free Speech

Twitter’s latest step in the tricky balancing act of championing free speech without also handing a free pass to orchestrated harassment via its platform is the announcement today that it’s formed a “Trust & Safety Council” composed of multiple external organizations with various interests in the two issues.

Twitter Announces Deal To Revive Politwoops

Political gaffes on Twitter will once again be preserved on the website Politwoops, according to an announcement from Twitter. The company blocked API access for Politwoops earlier this year, a move that apparently crippled the service’s ability to archive politicians’ deleted tweets.