China bans Scratch, MIT’s programming language for kids

China’s enthusiasm for teaching children to code is facing a new roadblock as organizations and students lose an essential tool: the Scratch programming language developed by the Lifelong Kindergarten Group at the MIT Media Lab.

China-based internet users can no longer access Scratch’s website. GreatFire.org, an organization that monitors internet censorship in China, shows the website as 100% blocked since at least August 20, and a Scratch user had already flagged the ban on August 14.

Nearly 60 million children around the world have used Scratch’s visual programming language to make games, animations, stories and the like. That includes students in China, where a gold rush in early coding education is underway as the country tries to turn its 200 million kids into world-class tech talent.

At last count, 5.65% of Scratch’s registered users, or about 3 million, are based in China, though its reach is greater than that figure suggests, as many Chinese developers have built derivatives of Scratch, which is open-source software.

Projects on Scratch contain “a great deal of humiliating, fake, and libelous content about China,” including placing Hong Kong, Macau and Taiwan in a dropdown list of “countries,” a state-run news outlet reported on August 21.

The article added that “any service distributing information in China” must comply with local regulations, and that Scratch’s website and user forum had been shut down in the country.

The Scratch editor, which is available in more than 50 languages and claims coders in every country in the world, can be downloaded and used offline. That means Chinese users who have already installed the software can continue using it for now. It’s unclear whether the restriction will extend to, and hamper, future version updates.

The Scratch team could not be immediately reached for comment. The ban in China, if it proves permanent, will likely drum up support for home-grown replacements.

“Scratch is very widely used in China by student users. Inside schools, it’s used in many official information technology textbooks for primary school students,” said Anqi Zhou, chief executive of Shenzhen-based Dream Codes True, a coding startup targeting primary and secondary school kids. “There are many coding competitions for kids using Scratch.”

Indeed, the infiltration of Scratch into the public school system is what initially alarmed the Chinese authorities. An article published August 11 by a youth-focused state outlet blasted:

“Platforms like Scratch have a large number of young Chinese users. That’s exactly why the platform must exercise self-discipline. Allowing the free flow of anti-China and separatist discourse will cause harm to Chinese people’s feelings, cross China’s red line, and poison China’s future generation.”

The article’s headline captured Beijing’s attitude towards imported technologies, including those that are open-source and meant to be educational and innocuous: An open China is not “xenophobic” but must “detoxify”.

“Problematic” user-generated content aside, China will likely encourage more indigenous tech players to grow, as it has done in a sweeping effort to localize semiconductors and even source code hosting.

Beyond textbooks, Scratch has also found its way into pricey afterschool centers across China. Some companies credit Scratch’s open-source code as their foundation, while others build lookalikes that they claim were developed in-house, several Chinese founders working in the industry told TechCrunch.

“Scratch is like the benchmark for kids’ programming software. Most parents learn about Scratch from extracurricular programs, which tend to keep all the web traffic to themselves rather than directing users to Scratch,” said Yi Zhang, founder of Tangiplay, a Shenzhen-based startup teaching children to code through hardware.

Despite Scratch’s popularity in China, competitors of all sizes have cropped up. That includes five-year-old Code Mao, a Shenzhen startup that’s an early and major player in the space — and well-financed by venture capital firms. With its own Kitten language “more robust than Scratch,” the startup boasts a footprint in 21 countries, over 30 million users, and about 11,000 institutional customers. Internet incumbents NetEase and Tencent have also come up with their own products for young coders.

“If it’s something permanent and if mainstream competitions and schools stop using it, we too will consider stopping using it,” said Zhou, whose startup is also based in Shenzhen, a city that has turned into a hub for early coding education thanks to emerging players like Code Mao and Makeblock.

VPN interest booms as countries around the world mull TikTok bans

As countries around the world ban or threaten to restrict TikTok, interest in virtual private networks has spiked.

VPNs let users reach an online service through an encrypted tunnel and thus bypass app blocks. “We are seeing an increasing number of governments around the world attempting to control the information their citizens can access,” observes Harold Li, vice president of ExpressVPN, which claims to have over 3,000 servers across 94 countries. “For this reason, VPNs are used to access blocked sites and services by many worldwide.”
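
To make that mechanism concrete, here is a minimal, hypothetical Python sketch of the routing idea (it is not ExpressVPN code, and the tunnel address and target URL are placeholders): instead of connecting to a blocked service directly, the client hands its traffic to an encrypted tunnel whose exit sits outside the censored network, so the local block never sees the final destination.

```python
# Illustrative sketch only: an HTTP proxy stands in for the VPN endpoint.
# With a real VPN, the operating system routes all of the device's traffic
# through the encrypted tunnel rather than proxying a single request.
import requests

TARGET = "https://www.tiktok.com"  # example of a service blocked locally
TUNNEL = {
    "http": "http://tunnel.example.com:8080",   # hypothetical tunnel endpoint
    "https": "http://tunnel.example.com:8080",
}

def reachable(url, proxies=None):
    """Return True if the URL answers at all, False if the connection fails or is blocked."""
    try:
        requests.get(url, proxies=proxies, timeout=10)
        return True
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("direct:    ", reachable(TARGET))          # may be False inside a censored network
    print("via tunnel:", reachable(TARGET, TUNNEL))  # True if the tunnel exit sits outside it
```

The same picture also explains the countermeasures described below: because everything hinges on reaching the tunnel endpoint, removing VPN apps from local app stores or outlawing the vendors is how a government attacks the workaround itself.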

Indeed, ExpressVPN’s website saw a 10% week-over-week increase in traffic following the U.S. government’s announcement of a potential TikTok ban. The VPN service recorded similar trends in Japan and Australia, where it saw a 19% and 41% WoW increase in traffic respectively after the governments said they might block TikTok.

When India officially shut down TikTok, ExpressVPN saw a 22% WoW jump in web traffic. In Hong Kong, where TikTok voluntarily pulled out following the enactment of the national security law, the VPN service logged a 10% WoW traffic growth.

Not a ‘magic bullet’

VPNs have long been a popular solution for people to elude restrictions on the internet, be it censored content or app bans. We wrote about Hong Kong residents flocking to VPN services in anticipation of heightened censorship, but the use of a VPN is not a ‘magic bullet,’ as a Hong Kong media scholar warned.

Governments can make it difficult for average users to access VPNs by removing them from local app stores. Users then have to register in another region’s app store, which often involves roadblocks like owning a local credit card. Countries can also outlaw the use of VPNs, imposing fines on users and even imprisoning VPN vendors, as China has done.

Depending on how an app block plays out in practice, there may be other challenges unsolvable by VPNs. “We don’t know how potential bans may be enforced yet, and it may require users to jump through other hoops on top of using a VPN, such as removing their local SIM card,” suggested Li.

Users can look for alternatives to banned apps, but switching services can entail high costs, especially when a product has strong network effects. TikTok, for instance, enjoys a ‘content network effect’ that makes it difficult for rivals to match its user experience, as my former colleague Josh Constine pointed out.

Similarly, those who worry about a potential WeChat ban in the U.S. may simply lack a viable alternative to the Chinese messenger with over 1 billion users. For members of the Chinese diaspora in the U.S., WeChat is the only way for them to reach their families and friends in China, where it’s the dominant chat app while major Western social networks are unavailable.

Smaller apps are flying under the authorities’ radar. Unlike rivals Telegram and WhatsApp, encrypted messenger Signal is still accessible in China — for now, and the app climbed 51 spots in China’s iOS social app rankings between August 7 and 9, currently sitting in 36th place. Others in China use iMessage, which also remains unblocked, to stay in touch with their U.S. contacts, but that option is exclusive to iPhone users.

Individuals and businesses worldwide increasingly need to adapt to service shutdowns or risk losing access to the free and open internet. As Telegram founder Pavel Durov lamented: “[T]he U.S. move against TikTok is setting a dangerous precedent that may eventually kill the internet as a truly global network (or what is left of it).”

On illegal hate speech, EU lawmakers eye binding transparency for platforms

It’s more than four years since major tech platforms signed up to a voluntary pan-EU Code of Conduct on illegal hate speech removals. Yesterday the European Commission’s latest assessment of the non-legally binding agreement lauded “overall positive” results — with 90% of flagged content assessed within 24 hours and 71% of the content deemed to be illegal hate speech removed. The latter figure is up from just 28% in 2016.

However, the report card finds platforms are still lacking in transparency. Nor are they providing users with adequate feedback on hate speech removals, in the Commission’s view.

Platforms responded and gave feedback to 67.1% of the notifications received, per the report card — up from 65.4% in the previous monitoring exercise. Only Facebook informs users systematically — with the Commission noting: “All the other platforms have to make improvements.”

In another criticism, its assessment of platforms’ performance in dealing with hate speech reports found inconsistencies in their evaluation processes — with “separate and comparable” assessments of flagged content that were carried out over different time periods showing “divergences” in how they were handled.

Signatories to the EU online hate speech code are: Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, Twitter and YouTube.

This is now the fifth biannual evaluation of the code. It may not yet be the final assessment, but EU lawmakers’ eyes are firmly fixed on a wider legislative process — with commissioners now busy consulting on and drafting a package of measures to update the laws wrapping digital services.

A draft of this Digital Services Act is slated to land by the end of the year, with commissioners signalling they will update the rules around online liability and seek to define platform responsibilities vis-a-vis content.

Unsurprisingly, then, the hate speech code is now being talked about as feeding that wider legislative process — while the self-regulatory effort looks to be reaching the end of the road. 

The code’s signatories are also clearly no longer a comprehensive representation of the swathe of platforms in play these days. There’s no WhatsApp, for example, nor TikTok (which did just sign up to a separate EU Code of Practice targeted at disinformation). But that hardly matters if legal limits on illegal content online are being drafted — and likely to apply across the board. 

Commenting in a statement, Věra Jourová, Commission VP for values and transparency, said: “The Code of conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”

In another supporting statement, Didier Reynders, commissioner for Justice, added: “The forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”

Earlier this month, at a briefing discussing Commission efforts to tackle online disinformation, Jourová suggested lawmakers are ready to set down some hard legal limits online where illegal content is concerned, telling journalists: “In the Digital Services Act you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction.” Disinformation would not likely get the same treatment, she suggested.

The Commission has now further signalled it will consider ways to prompt all platforms that deal with illegal hate speech to set up “effective notice-and-action systems”.

In addition, it says it will continue — this year and next — to work on facilitating the dialogue between platforms and civil society organisations that are focused on tackling illegal hate speech, saying that it especially wants to foster “engagement with content moderation teams, and mutual understanding on local legal specificities of hate speech”.

In its own report last year assessing the code of conduct, the Commission concluded that it had contributed to achieving “quick progress”, particularly on the “swift review and removal of hate speech content”.

It also suggested the effort had “increased trust and cooperation between IT Companies, civil society organisations and Member States authorities in the form of a structured process of mutual learning and exchange of knowledge” — noting that platforms reported “a considerable extension of their network of ‘trusted flaggers’ in Europe since 2016.”

“Transparency and feedback are also important to ensure that users can appeal a decision taken regarding content they posted as well as being a safeguard to protect their right to free speech,” the Commission report also notes, specifying that Facebook reported having received 1.1 million appeals related to content actioned for hate speech between January 2019 and March 2019, and that 130,000 pieces of content were restored “after a reassessment”.

On volumes of hate speech, the Commission suggested the number of notices on hate speech content is roughly in the range of 17-30% of total content, noting for example that Facebook reported having removed 3.3 million pieces of content for violating hate speech policies in the last quarter of 2018 and 4 million in the first quarter of 2019.

“The ecosystems of hate speech online and magnitude of the phenomenon in Europe remains an area where more research and data are needed,” the report added.

Germany tightens online hate speech rules to make platforms send reports straight to the feds

While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.

The move is part of a wider push by the German government to tackle a rise in right wing extremism and hate crime — which it links to the spread of hate speech online.

Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in the country in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines as tight as 24 hours for easy cases — with fines of up to €50M should they fail to comply.

Yesterday the parliament passed a reform which extends NetzDG by placing a reporting obligation on platforms which requires them to report certain types of “criminal content” to the Federal Criminal Police Office.

A wider reform of the NetzDG law, intended to bolster user rights and transparency, remains ongoing in parallel. It includes simplifying user notifications and making it easier for people to object to content removals and have successfully appealed content restored, among other tweaks. Broader transparency reporting requirements are also looming for platforms.

The NetzDG law has always been controversial, with critics warning from the get go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine. (Aka, the risk of ‘overblocking’.) In 2018 Human Rights Watch dubbed it a flawed law — critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.

The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.

A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.

The political party is concerned about the risk of the new reporting obligation being abused — resulting in data on citizens who have not in fact posted any criminal content ending up with the police.

It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.

The party had proposed that only the post’s content be transmitted directly to police, who would then have been able to request the associated personal data from the platform should there be a genuine need to investigate a particular piece of content.

The German government’s reform of hate speech law follows the 2019 murder of a pro-refugee politician, Walter Lübcke, by neo-Nazis — which it said was preceded by targeted threats and hate speech online.

Earlier this month police staged raids on 40 hate speech suspects across a number of states who are accused of posting “criminally relevant comments” about Lübcke, per national media.

The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets — meaning they’re unable to freely express themselves or participate without fear in society.

At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to a voluntary EU Code of Conduct on hate speech.

It is also now consulting on wider changes to platform rules and governance — under a forthcoming Digital Services Act which will consider how much liability tech giants should face for content they’re fencing.

FCC Commissioner disparages Trump’s social media order: ‘The decision is ours alone’

FCC Commissioner Geoffrey Starks has examined the President’s Executive Order that attempts to spur the FCC into action against social media companies and found it wanting. “There are good reasons for the FCC to stay out of this debate,” he said. “The decision is ours alone.”

The Order targets Section 230 of the Communications Decency Act, which ensures that platforms like Facebook and YouTube aren’t liable for illegal content posted to them, as long as they make efforts to take it down in accordance with the law.

Some in government feel these protections go too far and have led to social media companies suppressing free speech. Trump himself clearly felt suppressed when Twitter placed a fact-check warning on unsupported claims of fraud in mail-in voting, leading directly to the Order.

Starks gave his take on the topic in an interview with the Information Technology and Innovation Foundation, a left-leaning think tank that pursues tech-related issues. While he is just one of five commissioners and the FCC has yet to consider the order in any official sense, his words have weight, as they indicate serious legal and procedural objections to it.

“The Executive Order definitely gets one thing right, and that is that the President cannot instruct the FCC to do this or anything else,” he said. “We’re an independent agency.”

He was careful to make clear that he doesn’t think the law is perfect — just that this method of changing it is completely unjustified.

“The broader debate about section 230 long predates President Trump’s conflict with Twitter in particular, and there are so many smart people who believe the law here should be updated,” he explained. “But ultimately that debate belongs to Congress. That the president may find it more expedient to influence a 5-member commission than a 538-member Congress is not a sufficient reason, much less a good one, to circumvent the constitutional function of our democratically elected representatives.”

The Justice Department has entered the picture as well, offering its own recommendations for changing Section 230 today — though like the White House, Justice has no power to directly change or invent responsibilities for the FCC.

Fellow Commissioner Jessica Rosenworcel echoed his concerns, paraphrasing an earlier statement on the order: “Social media can be frustrating, but turning the FCC into the President’s speech police is not the answer.”

After detailing some of the legal limitations of the FCC, Section 230, and the difficulty and needlessness of narrowly defining “good faith” actions, Starks concluded that the order simply doesn’t make a lot of sense in their context.

“The first amendment allows social media companies to censor content freely in ways the government never could, and it prohibits the government from retaliating against them for that speech,” he said. “So much — so much — of what the president proposes here seems inconsistent with those core principles, making an FCC rulemaking even less desirable.”

“The worst case scenario, the one that burdens the proper functioning of our democracy, would be to allow the laxity here to bestow some type of credibility on the Executive Order, one that threatens certainly a new regulatory regime upon internet service providers with no credible legal support,” he continued.

Having said that, he acknowledged that the order does mean that some action should take place at the FCC — it may just not be the kind of resolution Trump wishes.

“I’m calling to press [the National Telecommunications and Information Administration] to send the petition as quickly as possible. I see no reason why they should need more than 30 days from the Executive Order’s issuance itself so we can get on with it, have the FCC review it and vote,” he said. “And if, as I suspect it ultimately will, the petition fails at a legal question of authority, I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid [letting] an upcoming election season use a pending proceeding to, in my estimation, intimidate private parties.”

A lot of this is left to Chairman Ajit Pai, who has fairly consistently fallen in line with the administration’s wishes. And if the eagerness of Commissioner Carr is any indicator, the Republican members of the Commission are happy to respond to the President’s “call for guidance.”

So far there has been no official announcement of FCC business relating to the Executive Order, but if the NTIA moves quickly we could hear about it as early as next month’s open meeting.

Facebook agrees to restrict anti-government content in Vietnam after months of throttling

Facebook has agreed to block access to certain anti-government content to users in Vietnam, following months of having its services throttled there, reportedly by state-owned telecoms.

Reuters, citing sources within the company, reported that Vietnam requested earlier in the year that Facebook restrict a variety of content it deemed illegal, such as posts critical of the government. When the social network balked, the country used its control over local internet providers to slow Facebook traffic to unusable levels.

An explanation at the time that the slowdown was owing to maintenance of undersea cables likely did not convince many, since it was specific to Facebook (and related properties Messenger and Instagram).

All things being equal, Facebook has shown in the past that it would prefer to keep discourse open. But all things are not equal and in this case millions of users were unable to access its services — and consequently, it must be said, unable to be advertised to.

The slowdown lasted some seven weeks, from mid-February to early April, when Facebook conceded to the government’s demands.

One Reuters source said that “once we committed to restricting more content… the servers were turned back online by the telecommunications operators.”

Facebook offered the following statement confirming general, though not specific, aspects of the story reported by Reuters:

The Vietnamese government has instructed us to restrict access to content which it has deemed to be illegal in Vietnam. We believe freedom of expression is a fundamental human right, and work hard to protect and defend this important civil liberty around the world. However, we have taken this action to ensure our services remain available and usable for millions of people in Vietnam, who rely on them every day.

Facebook is no stranger to government requests both to restrict and to hand over data. Although the company inspects these requests and sometimes challenges them, it’s Facebook’s stated policy to comply with local law — even if that means (as it often does) complicity with government censorship practices.

The justification usually offered (as here) is that people in a country with such restrictions are better served with an incomplete set of Facebook’s communications tools rather than none at all.

The Hong Kong Internet Service Providers Association warns that restricting online access would be ruinous for the region

After Hong Kong’s leader suggested she may invoke emergency powers that could potentially include limiting Internet access, one of the city’s biggest industry groups warned that “any such restrictions, however slight originally, would start the end of the open Internet of Hong Kong.”

While talking to reporters on Tuesday, Hong Kong Chief Executive Carrie Lam suggested the government may use the Emergency Regulations Ordinance in response to ongoing anti-government demonstrations. The law, which has not been used in more than half a century, would give the government a sweeping array of powers, including the ability to restrict or censor publications and communications. In contrast to China’s “Great Firewall” and routine government censorship of internet services, Hong Kong’s internet is currently open and mostly unrestricted, with the exception of laws to prevent online crime, copyright infringements and the spread of obscene material like child pornography.

In an “urgent statement” addressed to Hong Kong’s Executive Council, the Hong Kong Internet Service Providers Association (HKISPA) said that because of technology like VPNs, the cloud and cryptography, the only way to “effectively and meaningfully block any services” would entail putting all of Hong Kong’s internet behind a large-scale surveillance firewall. The association added that this would have huge economic and social consequences and deter international organizations from doing business in Hong Kong.

Furthermore, restricting the internet in Hong Kong would also have implications in the rest of the region, including in mainland China, the HKISPA added. There are currently 18 international cable systems that land, or will land, in Hong Kong, making it a major telecommunications hub. Blocking one application means users will move onto another application, creating a cascading effect that will continue until all of Hong Kong is behind a firewall, the association warned.

In its statement, the HKISPA wrote that “the lifeline of Hong Kong’s Internet industry relies in large part on the open network,” adding “Hong Kong is the largest core node of Asia’s optical fiber network and hosts the biggest Internet exchange in the region, and it is now home to 100+ data centers operated by local and international companies, and it transits 80%+ of traffic for mainland China.”

“All these successes rely on the openness of Hong Kong’s network,” the HKISPA continued. “Such restrictions imposed by executive orders would completely ruin the uniqueness and value of Hong Kong as a telecommunications hub, a pillar of success as an international financial centre.”

The HKISPA urged the government to consult the industry and “society at large” before putting any restrictions in place. “The HKISPA strongly opposes selective blocking of Internet Services without consensus of the community,” it said.

If internet access is restricted in Hong Kong, a major financial hub, it would be a major hit to global internet freedom, which Freedom House says has been declining over the last eight consecutive years as more countries “mov[e] toward digital authoritarianism by embracing the Chinese model of extensive censorship and automated surveillance systems.” Many governments, including those of Tanzania and Uganda, have enacted new restrictions or laws in an attempt to curb political dissent, modeling their censorship measures on countries like China and Russia.


‘Behind the Screen’ illuminates the invisible, indispensable content moderation industry

The moderators who sift through the toxic detritus of social media have gained the spotlight recently, but they’ve been important for far longer — longer than internet giants would like you to know. In her new book “Behind the Screen,” UCLA’s Sarah Roberts illuminates the history of this scrupulously hidden workforce and the many forms the job takes.

It is after all people who look at every heinous image, racist diatribe, and porn clip that gets uploaded to Facebook, YouTube, and every other platform — people who are often paid like dirt, treated like parts, then disposed of like trash when worn out. And they’ve been doing it for a long time.

True to her academic roots, Roberts lays out the thesis of the book clearly in the introduction, explaining that although content moderators or the companies that employ them may occasionally surface in discussions, the job has been systematically obscured from sight.

The work they do, the conditions under which they do it, and for whose benefit are largely imperceptible to the users of the platforms who pay for and rely upon this labor. In fact, this invisibility is by design.

Roberts, an assistant professor of information studies at UCLA, has been looking into this industry for the better part of a decade, and this book is the culmination of her efforts to document it. While it is not the final word on the topic — no academic would suggest their work was — it is an eye-opening account, engagingly written, and not at all the tour of horrors you may reasonably expect it to be.

After reading the book, I talked with Roberts about the process of researching and writing it. As an academic and tech outsider, she was not writing from personal experience or even commenting on the tech itself, but found that she had to essentially invent a new area of research from scratch spanning tech, global labor, and sociocultural norms.

“Opacity, obfuscation, and general unwillingness”

“To take you back to 2010 when I started this work, there was literally no academic research on this topic,” Roberts said. “That’s unusual for a grad student, and actually something that made me feel insecure — like maybe this isn’t a thing, maybe no one cares.”

That turned out not to be the case, of course. But the practices we read about with horror, of low-wage workers grinding through endless queues of content from child abuse to terrorist attacks, have been in place for years and years, successfully moderated out of public view by the companies that employ them. But recent events have changed that.

“A number of factors are coalescing to make the public more receptive to this kind of work,” she explained. “Average social media users, just regular people, are becoming more sophisticated about their use, and questioning the integration of those kinds of tools and media in their everyday life. And certainly there were a few key political situations where social media was implicated. Those were a driving force behind the people asking, do I actually know what I’m using? Do I know whether or how I’m being manipulated? How do the things I see on my screen actually get there?”

A handful of reports over the years, like Casey Newton’s recent piece in The Verge, also pierced the curtain behind which tech firms carefully and repeatedly hid this unrewarding yet essential work. At some point the cat was simply out of the bag. But few people recognized it for what it was.

Facebook can be told to cast a wider net to find illegal content, says EU court advisor

How much of an obligation should social media platforms be under to hunt down illegal content?

An influential advisor to Europe’s top court has taken the view that social media platforms like Facebook can be required to seek out and identify posts that are equivalent to content that an EU court has deemed illegal — such as hate speech or defamation — if the comments have been made by the same user.

Platforms can also be ordered to hunt for identical repostings of the illegal content.

But there should not be an obligation for platforms to identify equivalent defamatory comments that have been posted by any user, with the advocate general opining that such a broad requirement would not ensure a fair balance between the fundamental rights concerned — flagging risks to free expression and free access to information.

“An obligation to identify equivalent information originating from any user would not ensure a fair balance between the fundamental rights concerned. On the one hand, seeking and identifying such information would require costly solutions. On the other hand, the implementation of those solutions would lead to censorship, so that freedom of expression and information might well be systematically restricted.”

We covered this referral to the CJEU last year.

It’s an interesting case that blends questions of hate speech moderation and the limits of robust political speech, given that the original 2016 complaint of defamation was made by the former leader of the Austrian Green Party, Eva Glawischnig.

An Austrian court agreed with Glawischnig that hate speech posts made about her on Facebook were defamatory and ordered the company to remove them. Facebook did so, but only in Austria. Glawischnig challenged its partial takedown and in May 2017 a local appeals court ruled that it must remove both the original posts and any verbatim repostings and do so worldwide, not just in Austria. 

Further legal appeals led to the referral to the CJEU which is being asked to determine where the line should be drawn for similarly defamatory postings, and whether takedowns can be applied globally or only locally.

On the global takedowns point, the advocate general believes that existing EU law does not present an absolute blocker to social media platforms being ordered to remove information worldwide.

“Both the question of the extraterritorial effects of an injunction imposing a removal obligation and the question of the territorial scope of such an obligation should be analysed, in particular, by reference to public and private international law,” runs the non-binding opinion.

Another element relates to the requirement under existing EU law that platforms should not be required to carry out general monitoring of information they store — and specifically whether that directive precludes platforms from being ordered to remove “information equivalent to the information characterised as illegal” when they have been made aware of it by the person concerned, third parties or another source. 

On that, the AG takes the view that the EU’s e-Commerce Directive does not prevent platforms from being ordered to take down equivalent illegal content when it’s been flagged to them by others — writing that, in that case, “the removal obligation does not entail general monitoring of information stored”.

Advocate General Maciej Szpunar’s opinion — which can be read in full here — is not the last word on the matter, with the court still to deliberate and issue its final decision (usually within three to six months of an AG opinion). However advisors to the CJEU are influential and tend to predict which way the court will jump.

We reached out to Facebook for comment. A spokesperson for the company told us:

This case raises important questions about freedom of expression online and about the role that internet platforms should play in locating and removing speech, particularly when it comes to political discussions and criticizing elected officials. We remove content that breaks the law and our priority is always to keep people on Facebook safe. However this opinion undermines the long-standing principle that one country should not have the right to limit free expression in other countries. We hope the CJEU will clarify that, even in the age of the internet, the scope of court orders from one country must be limited to its borders.

This report was updated with comment from Facebook.

China blocks CNN’s website and Reuters stories about Tiananmen Square

CNN’s website is currently blocked in mainland China, after it published a story about today’s 30th anniversary of the Tiananmen Square massacre as one of its top headlines. The site is usually accessible in China, according to historical data from GreatFire.org.

Matt Rivers, a Beijing-based CNN correspondent, noted the blocking of the site on Twitter, writing that “the government here is near obsessive about limiting conversation on this topic.”

Information about the Tiananmen Square pro-democracy demonstration, which ended when the government ordered troops to fire on activists, is suppressed in China, but the country’s censorship apparatus begins intensifying its efforts at eradicating any mention of the events in the weeks leading up to its anniversary each year.

Earlier today, financial information provider Refinitiv also took down Reuters stories related to Tiananmen Square from its Eikon information terminal, following an order from the Cyberspace Administration of China (CAC), the government’s Internet regulation and censorship agency. The CAC told Refinitiv it would suspend its service in China if it did not comply with the order.

Even though the stories were only supposed to be blocked in China, Reuters reported today that some users outside of China also said they could not see them, though the reason for that is unclear. (Early versions of the Reuters story about the suspension were themselves removed from Eikon, too.)