Imagine a better future for social media at TechCrunch Sessions: Justice

Toxic culture, deadly conspiracies and organized hate have exploded online in recent years. At TechCrunch Sessions: Justice on March 3, we’ll discuss how much responsibility social networks bear for the rise of these phenomena, and how to build healthy online communities that make society better, not worse.

Join us for a wide-ranging discussion with Rashad Robinson, Jesse Lehrich and Naj Austin that explores what needs to change to make social networks more just, healthy environments rather than dangerous echo chambers that amplify society’s ills.

Naj Austin is the founder and CEO of Somewhere Good and Ethel’s Club. She has spent her career building digital and physical products that make the world a more intersectional and equitable space. She was named one of Inc. magazine’s 100 Female Founders transforming America and a HuffPost Culture Shifter of 2020, and was included on Time Out New York’s 2020 list of women making NYC better.

Jesse Lehrich is a co-founder of Accountable Tech. He has a decade of experience in political communications and issue advocacy, including serving as the foreign policy spokesman for Hillary Clinton’s 2016 presidential campaign, where he was part of the team managing the response to Russia’s information warfare operation.

Rashad Robinson is the president of Color Of Change, a leading racial justice organization driven by more than 7.2 million members who are building power for Black communities. Color Of Change uses innovative strategies to bring about systemic change in the industries that affect Black people’s lives: Silicon Valley, Wall Street, Hollywood, Washington, corporate board rooms, local prosecutor offices, state capitol buildings and city halls around the country.

Under Rashad’s leadership, Color Of Change designs and implements winning strategies for racial justice, among them: forcing corporations to stop supporting Trump initiatives and white nationalists; framing net neutrality as a civil rights issue; holding local prosecutors accountable to end mass incarceration, police violence and financial exploitation across the justice system; forcing over 100 corporations to abandon ALEC, the secretive right-wing policy shop; changing representations of race and racism in Hollywood; moving Airbnb, Google and Facebook to implement anti-racist initiatives; and forcing Bill O’Reilly off the air.

Be sure to join us for this conversation and much more at TechCrunch Sessions: Justice on March 3.

Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban its police department from using facial recognition software, adding to the list of major cities that have placed local restrictions on the controversial technology. After the ordinance was approved earlier this week, 13 members of the city council voted in favor of the ban, with none opposed.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund the city’s police department in June before backing away from that commitment in favor of more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates worry not only that AI-powered facial recognition systems would disproportionately target communities of color, but also that the technology has demonstrated shortcomings in accurately discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition but also forbid private companies from deploying the technology in public spaces. Previous legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems, though those laws didn’t include a similar provision for private companies.

Facebook Oversight Board says other social networks ‘welcome to join’ if project succeeds

The Facebook Oversight Board has only been operational for a short time, but the nascent project is already looking ahead.

In a conversation hosted by the Carnegie Endowment Thursday, Oversight Board co-chair and former Prime Minister of Denmark Helle Thorning-Schmidt painted a more expansive vision for the group that could go beyond making policy decisions for Facebook.

The board co-chair said that if the project proves to be a success, “other platforms and other tech companies are more than welcome to join and be part of the oversight that we will be able to provide.”

Thorning-Schmidt emphasized that a broader vision for this kind of moderation body lies well in the future; the board’s current mission is to move away from policy decisions being made in a “closed box” at the company.

“Until now, content moderation was basically done by the last person at Facebook or Twitter as we have seen — either Mark Zuckerberg or the other platform directors,” Thorning-Schmidt said.

“For the first time in history, we actually have content moderation being done outside one of the big social media platforms. That in itself… I don’t hesitate to call it historic.”

Those comments may capture broader aspirations for Facebook’s Oversight Board, which refers to itself only as the “Oversight Board” without an explicit reference to Facebook on its website.

Throughout the panel, those involved with the Oversight Board defended the project. The group has come under criticism from skeptics wary that its origins with Facebook make real autonomy from the company impossible.

“A lot of people want to immediately dismiss the Oversight Board and look for something new,” Oversight Board Head of Communications Dex Hunter-Torricke said.

Hunter-Torricke, who spent four years working on the Facebook executive communications team and served as a speechwriter for both Mark Zuckerberg and Sheryl Sandberg, also hinted at a more expansive vision for the board.

“This is a model that we’re testing here to see if this is the kind of institution that can have an impact in one sphere of Facebook and the content moderation challenges they face,” Hunter-Torricke said. He added that the board intends to “evolve and grow” using what it learns from handling Facebook moderation cases.

“… As we build up our expertise and our body of experience in dealing with Facebook I expect there will be more capabilities that come onto the board,” Hunter-Torricke said. “We are on a journey. It’s not something [where] we necessarily know the final destination yet but we are looking to test this model and refine it further.”

TechCrunch reached out to the Oversight Board to ask if the group sees its future as an external governing body for social networks beyond Facebook.

Facebook’s Oversight Board is currently facing the hugely consequential case of whether to reinstate former President Donald Trump, who was removed from Facebook after inciting the violent mob that attacked the U.S. Capitol in early January.

Five of the group’s 20 members will evaluate the Trump case, though the board will not disclose which members evaluate which cases. Once those five reach a decision, the full board must approve it by a majority vote. The board’s verdict is expected within the next two months.

The Facebook Oversight Board’s most prominent Trump critic, legal scholar Pamela Karlan, left her role at the board to join the Biden administration last week and won’t be involved in the decision. Karlan testified at Trump’s first impeachment hearing, arguing that Trump’s actions constituted impeachable offenses.

The board is accepting comments on the Trump case through Friday in an effort to consider “diverse perspectives” in its decision process.

On Thursday, former Facebook Chief Security Officer Alex Stamos signed onto a public letter urging the Oversight Board to keep Facebook’s decision to remove Trump in place. “Without social media spreading Trump’s statements, it seems extremely unlikely these events would have occurred,” the comment’s authors wrote.

“There no doubt will be close calls under a policy that allows the deplatforming of political leaders in extreme circumstances. This was not one of them.”

Twitter says Trump is banned forever — even if he runs for president again

As the second impeachment trial of his presidency unfolds, there’s another bit of bad news for the former president. In a new interview on CNBC’s Squawk Box, Twitter Chief Financial Officer Ned Segal gave the decisive word on how the company would handle Trump’s Twitter account long-term.

Responding to a question about what would happen if Trump ran again and was elected to office, Segal didn’t mince words.

“The way our policies work, when you’re removed from the platform, you’re removed from the platform — whether you’re a commentator, you’re a CFO, or you are a former or current public official,” Segal said.

“Remember, our policies are designed to make sure that people are not inciting violence, and if anybody does that, we have to remove them from the service and our policies don’t allow people to come back.”

Twitter banned Trump from its platform one month ago, citing concerns about the “risk of further incitement of violence.” Trump’s role in instigating the deadly attack on the U.S. Capitol ultimately sealed his fate on his platform of choice, where he’d spent four years rallying his followers, amplifying conspiracies and lambasting his critics.

Facebook says it will remove more COVID-19 conspiracies that discourage vaccination

Vaccine misinformation has been around since well before the pandemic, but ensuring that anti-scientific conspiracies don’t get boosted online is more crucial than ever as the world races against the spread of a deadly, changing virus.

Now, Facebook says it will expand the criteria it uses to take down false vaccine claims. Under the new rules, which Facebook said it developed in consultation with groups like the World Health Organization, the company will remove posts claiming that COVID-19 vaccines aren’t effective or that it’s “safer to get the disease,” as well as posts repeating the longstanding, widely debunked anti-vaxxer claim that vaccines could cause autism.

Facebook says it will place a “particular focus” on enforcement against Pages, groups and accounts that break the rules, noting that they may be removed from the platform outright.

Facebook took steps to limit COVID-19 vaccine misinformation in December, preparing the platform for the vaccine rollout while still lagging well behind the rampant spread of anti-vaccine claims. The company began removing posts containing some misinformation about the vaccine, including “false claims that COVID-19 vaccines contain microchips” and content claiming that the vaccine is being tested on portions of the population without their consent.

Why this kind of content didn’t already fall under Facebook’s rules against COVID-19 misinformation is anyone’s guess. The company came out of the gate early in the pandemic with a new set of policies intended to prevent an explosion of potentially deadly COVID-related conspiracies, but time and again it has failed to enforce its own rules evenly and firmly.

The SAFE TECH Act offers Section 230 reform, but the law’s defenders warn of major side effects

The first major Section 230 reform proposal of the Biden era is out. In a new bill, Senate Democrats Mark Warner (D-VA), Mazie Hirono (D-HI) and Amy Klobuchar (D-MN) propose amendments to Section 230 of the Communications Decency Act that would fundamentally alter the 1996 law widely credited with cultivating the modern internet.

Section 230 is a legal shield that protects internet companies from the user-generated content they host, from Facebook and TikTok to Amazon reviews and comments sections. The new proposed legislation, known as the SAFE TECH Act, would do a few different things to change how that works.

First, it would fundamentally alter the core language of Section 230 — and given how concise that snippet of language is to begin with, any change is a big change. Under the new language, Section 230 would no longer offer protections in situations where payments are involved.

Here’s the current version:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

And here are the changes the SAFE TECH Act would make:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech provided by another information content provider, except to the extent the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.

(B) (c)(1)(A) shall be an affirmative defense to a claim alleging that an interactive computer service provider is a publisher or speaker with respect to speech provided by another information content provider that an interactive computer service provider has a burden of proving by a preponderance of the evidence.

That might not sound like much, but it could be a massive change. In a tweet promoting the bill, Sen. Warner called online ads “a key vector for all manner of frauds and scams,” so homing in on platform abuses in advertising is the ostensible goal here. But under the bill’s current language, it’s possible that many other kinds of paid services could be affected, from Substack, Patreon and other kinds of premium online content to web hosting.

“A good lawyer could argue that this covers many different types of arrangements that go far beyond paid advertisements,” Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy who authored a book about Section 230, told TechCrunch. “Platforms accept payments from a wide range of parties during the course of making speech ‘available’ to the public. The bill does not limit the exception to cases in which platforms accept payments from the speaker.”

Internet companies big and small rely on Section 230 protections to operate, but some of them might have to rethink their businesses if rules proposed in the new bill come to pass. Sen. Ron Wyden (D-OR), one of Section 230’s original authors, noted that the new bill has some good intentions, but he issued a strong caution against the blowback its unintended consequences could cause.

“Unfortunately, as written, it would devastate every part of the open internet, and cause massive collateral damage to online speech,” Wyden told TechCrunch, likening the bill to a full repeal of the law with added confusion from a cluster of new exceptions.

“Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech,” Wyden said.

Fight for the Future Director Evan Greer echoed the sentiment that the bill is well intentioned but shared the same concerns. “…Unfortunately this bill, as written, would have enormous unintended consequences for human rights and freedom of expression,” Greer said.

“It creates a huge carveout in Section 230 that impacts not only advertising but essentially all paid services, such as web hosting and [content delivery networks], as well as small services like Patreon, Bandcamp, and Etsy.”

Given its focus on advertising and instances in which a company has accepted payment, the bill might be both too broad and too narrow at once to offer effective reform. While online advertising, particularly political advertising, has become a hot topic in recent discussions about cracking down on platforms, the vast majority of violent conspiracies, misinformation and organized hate is the result of organic content, not the stuff that’s paid or promoted. It also doesn’t address the role of algorithms, a particular focus of a narrow Section 230 reform proposal in the House from Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ).

New exceptions

The other part of the SAFE TECH Act, which attracted buy-in from a number of civil rights organizations including the Anti-Defamation League, the Center for Countering Digital Hate and Color Of Change, does address some of those ills. By amending Section 230, the new bill would open internet companies to more civil liability in some cases, giving victims of cyberstalking, targeted harassment, discrimination and wrongful death the opportunity to file lawsuits against those companies rather than blocking those kinds of suits outright.

The SAFE TECH Act would also create a carve-out allowing individuals to seek court orders in cases where an internet company’s handling of material it hosts could cause “irreparable harm,” and would allow lawsuits in U.S. courts against American internet companies for human rights abuses abroad.

In a press release, Warner said the bill was about updating the 1996 law to bring it up to speed with modern needs:

“A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online providers even when they do nothing to address foreseeable, obvious and repeated misuse of their products and services to cause harm,” Warner said.

There’s no dearth of ideas about reforming Section 230. Among them: the bipartisan PACT Act from Sens. Brian Schatz (D-HI) and John Thune (R-SD), which focuses on moderation transparency and providing less cover for companies facing federal and state regulators, and the EARN IT Act, a broad bill from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) that Section 230 defenders and internet freedom advocates regard as unconstitutional, overly broad and disastrous.

With so many proposed Section 230 reforms already floating around, it’s far from guaranteed that a bill like the SAFE TECH Act will prevail. The only thing that’s certain is we’ll be hearing a lot more about the tiny snippet of law with huge consequences for the modern internet.

House punishes Republican lawmaker who promoted violent conspiracy theories

Democrats in the House voted to strip freshman Georgia Representative Marjorie Taylor Greene of some of her responsibilities Thursday, citing her penchant for violent, anti-democratic and at times anti-Semitic conspiracy theories.

Greene has expressed support for a range of alarming conspiracies, including the belief that the 2018 Parkland school shooting that killed 17 people was a “false flag.” That belief prompted two teachers unions to call for her removal from the House Education Committee — one of her new committee assignments.

The vote on a resolution to remove Greene from her committee assignments broke along party lines, with nearly all Republicans opposing the resolution. Some of her colleagues voted in Greene’s defense despite having condemned her behavior in the past.

As the House moved to vote on the highly unusual resolution, the new Georgia lawmaker claimed that her embrace of QAnon was in the past.

“I never once said during my entire campaign ‘QAnon,’ ” Greene said Thursday. “I never once said any of the things that I am being accused of today during my campaign. I never said any of these things since I have been elected for Congress. These were words of the past.”

But as the Daily Beast’s Will Sommer reported, a deleted tweet from December shows Greene explicitly defending QAnon and directing blame toward the media and “big tech.”

In another recently uncovered post, from January 2019, Greene showed support for online comments calling for “a bullet to the head” for House Speaker Nancy Pelosi and for executing FBI agents.

Greene has also shared openly racist, Islamophobic and anti-Semitic views in Facebook videos, a track record that prompted Republican House Minority Leader Kevin McCarthy to condemn her statements as “appalling” last June. More recently, McCarthy defended Greene against efforts to remove her from committees.

Greene was elected in November to represent a conservative district in northwest Georgia after her Democratic opponent, Kevin Van Ausdal, dropped out, citing personal reasons. Greene had won her Republican primary runoff in August with 57% of the vote.

QAnon, a dangerous once-fringe collection of conspiracy theories, was well-represented in January’s deadly Capitol riot and many photos from the day show the prevalence of QAnon symbols and sayings. In 2019, an FBI bulletin warned of QAnon’s connection to “conspiracy theory-driven domestic extremists.” A year later, at least one person who had espoused the same views would win a seat in Congress.

The overlap between Greene’s beliefs and those of the violent pro-Trump mob at the Capitol escalated tensions among lawmakers, many of whom feared for their lives as the assault unfolded.

A freshman representative with little apparent appetite for policy or coalition-building, Greene wasn’t likely to wield much legislative power in the House. But as QAnon and adjacent conspiracies move from the fringe to the mainstream and possibly back again — a trajectory largely dictated by the at times arbitrary decisions of social media companies — Greene’s treatment in Congress may signal what’s to come for a dangerous online movement that’s more than demonstrated its ability to spill over into real-world violence.

New antitrust reform bill charts one possible path for regulating big tech

As Democrats settle into control of both chambers of Congress, signs of the party’s legislative priorities are starting to manifest. So far, lawmakers’ interest in reimagining tech’s regulatory landscape appears to be alive and well.

Sen. Amy Klobuchar (D-MN) is out with a new proposal for antitrust reform that would create more barriers for big mergers and beef up federal resources for antitrust enforcement. Klobuchar’s bill, the Antitrust Law Enforcement Reform Act, seeks to address consolidation across industries, calling out “dominant digital platforms” specifically.

“While the United States once had some of the most effective antitrust laws in the world, our economy today faces a massive competition problem,” Klobuchar said. “We can no longer sweep this issue under the rug and hope our existing laws are adequate.”

Klobuchar now leads the Senate Subcommittee on Antitrust, Competition Policy and Consumer Rights, a corner of Congress already signaling its interest in reform that would impact big tech.

The new bill would bolster the Clayton Antitrust Act, the 1914 law that created the framework for competition rules still applied today. Specifically, it would amend that law’s standard for evaluating anti-competitive mergers, changing the language to block any deal that “create[s] an appreciable risk of materially lessening competition” rather than the current, more permissive standard barring deals whose effect “may be substantially to lessen competition.”

The aim is to catch potentially anti-competitive behavior earlier in the game — an outcome that would address the government’s current conundrum, in which federal regulators are now awkwardly reevaluating mergers that evolved into monopolistic behavior years after the fact.

The bill would also put the onus on merging companies to prove that they don’t pose a risk of reducing competition, taking that burden off of the government in specific cases. Those rules would apply to “mega-mergers” worth $5 billion or more and in which a company with 50% market share seeks to buy a current or potential competitor.

Klobuchar’s proposal also seeks to add a provision to the Clayton Act against conduct that puts competitors at a disadvantage — a rule that would address some of the murkier areas of anti-competitive behavior that stretch beyond outright mergers and acquisitions.

Citing inadequate enforcement budgets, the bill also sets out a $300 million infusion for the Justice Department’s antitrust division and the FTC. At the FTC, that money would help create a new division within the agency for research on markets and mergers.

The bill will be co-sponsored by Senators Cory Booker, Richard Blumenthal, Brian Schatz and Ed Markey, all Democrats on the antitrust subcommittee. And while it’s a single-party endeavor for now, the antitrust reform could attract support from Republicans in Congress like Missouri Senator Josh Hawley, who signaled his interest in antitrust changes targeting large tech companies as recently as this week. Hawley also sits on the Senate’s antitrust subcommittee.

Klobuchar stops short of calling for large tech companies like Facebook and Google to be broken into their component parts, a move that has attracted some support from lawmakers like Elizabeth Warren and Bernie Sanders in recent years. In the midst of emerging multistate lawsuits targeting big tech companies, the FTC announced its own antitrust case against Facebook late last year, pushing for the company to be broken up.

You can now give Facebook’s Oversight Board feedback on the decision to suspend Trump

Facebook’s “Supreme Court” is now accepting comments on one of its earliest and likely most consequential cases. The Facebook Oversight Board announced Friday that it would begin accepting public feedback on Facebook’s suspension of former President Trump.

Mark Zuckerberg announced Trump’s suspension on January 7, after the then-president of the United States incited his followers to riot at the nation’s Capitol, an event that resulted in a number of deaths and imperiled the peaceful transition of power.

In a post calling for feedback, the Oversight Board describes the two posts that led to Trump’s suspension. One is a version of the video the president shared the day of the Capitol riot in which he sympathizes with rioters and validates their claim that the “election was stolen from us.” In the second post, Trump reiterates those views, falsely bemoaning a “sacred landslide election victory” that was “unceremoniously & viciously stripped away.”

The board says the point of the public comment process is to incorporate “diverse perspectives” from third parties who wish to share research that might inform their decisions, though it seems a lot more likely the board will wind up with a tidal wave of subjective and probably not particularly useful political takes. Nonetheless, the comment process will be open for 10 days and comments will be collected in an appendix for each case. The board will issue a decision on Trump’s Facebook fate within 90 days of January 21, though the verdict could come sooner.

The Oversight Board specifically invites public comments that consider:

Whether Facebook’s decision to suspend President Trump’s accounts for an indefinite period complied with the company’s responsibilities to respect freedom of expression and human rights, if alternative measures should have been taken, and what measures should be taken for these accounts going forward.

How Facebook should assess off-Facebook context in enforcing its Community Standards, particularly where Facebook seeks to determine whether content may incite violence.

How Facebook should treat the expression of political candidates, office holders, and former office holders, considering their varying positions of power, the importance of political opposition, and the public’s right to information.

The accessibility of Facebook’s rules for account-level enforcement (e.g. disabling accounts or account functions) and appeals against that enforcement.

Considerations for the consistent global enforcement of Facebook’s content policies against political leaders, whether at the content-level (e.g. content removal) or account-level (e.g. disabling account functions), including the relevance of Facebook’s “newsworthiness” exemption and Facebook’s human rights responsibilities.

The Oversight Board’s post gets very granular on the Trump suspension, critiquing Facebook for a lack of specificity when the company didn’t state exactly which part of its community standards was violated. Between this and the five recent cases, the board appears to view its role as a technical one, in which it examines each case against Facebook’s existing ruleset and then makes recommendations for future policy rather than working backward from its own broader recommendations.

The Facebook Oversight Board announced its first cluster of decisions this week, overturning the company’s own choice to remove potentially objectionable content in four of five cases. None of those cases pertained to content relevant to Trump’s account suspension, but they prove that the Oversight Board isn’t afraid to go against the company’s own thinking — at least when it comes to what gets taken down.

Lawmakers announce hearings on GameStop and online trading platforms

The GameStop short squeeze saga caught the attention of Congress Thursday morning, and that buzz is already panning out into hearings on the topic.

Rep. Maxine Waters (D-CA), chairwoman of the House Committee on Financial Services, announced plans for an investigation into the situation, pointing to a history of “predatory conduct” from hedge funds.

Waters didn’t call out Robinhood or any other trading services by name, but did note that a future hearing would focus on the systemic financial impact of short selling, “gamification” and online trading platforms. The hearing date is not yet set.

“Addressing that predatory and manipulative conduct is the responsibility of lawmakers and securities regulators who are charged with protecting investors and ensuring that our capital markets are fair, orderly, and efficient,” Waters said.

In the Senate, incoming Senate Banking Chairman Sherrod Brown announced his own plans for a hearing on the “current state of the stock market” in light of recent events. “People on Wall Street only care about the rules when they’re the ones getting hurt,” Brown said.

Earlier on Thursday, Democratic Reps. Rashida Tlaib, Alexandria Ocasio-Cortez and Ro Khanna all condemned the startup Robinhood for halting some trades in the midst of the Reddit retail investor-led volatility. Morgan Stanley-owned E-Trade followed suit.

Texas Republican Senator Ted Cruz echoed Democrats’ concerns over Robinhood’s actions, signaling that even in the midst of pandemic relief negotiations and an impeachment trial, lawmakers on both sides of the aisle still have an appetite for dragging tech in for questioning.

And apparently it’s not just Congress. New York Attorney General Letitia James also issued a short statement Thursday noting that her office is “aware of concerns raised regarding activity on the Robinhood app” and would be reviewing the situation.