Square finds its Sarah Friar replacement with new CFO Amrita Ahuja

Founder and chief executive Jack Dorsey says Square has poached Amrita Ahuja from Blizzard Entertainment, a division of the gaming company Activision Blizzard, to lead finance at the merchant services and mobile payments company.

Ahuja will join Square later this month, about three months after long-time Square chief financial officer Sarah Friar exited the company in favor of a CEO opportunity at Nextdoor, a neighborhood social networking site. Friar, often described as Dorsey’s right-hand woman, joined Square in 2012 and led the startup through an initial public offering that valued the company at about $3 billion.

Prior to an eight-year stint at Blizzard, Ahuja spent a few years at Fox Networks Group, the Walt Disney Company and Morgan Stanley, where she was an analyst in the investment banking division.

“In Amrita, we have found an amazing, multidimensional business leader,” Dorsey said in a statement. “Amrita brings the ability to consider and balance opportunities across our entire business, and she will help strengthen our discipline as we invest, build, and scale.”

Shares of Square [NYSE: SQ] dropped more than 8 percent on Thursday.

Twitter’s newest feature is reigniting the ‘iPhone vs Android’ war

Twitter’s newest feature is reigniting the flame war between iOS and Android owners.

The U.S. social media company’s latest addition is a subtle piece of information that shows the client each tweet was sent from. In doing so, the company now displays whether a user tweeted from the web or mobile and, if they are on a phone, whether they used Twitter’s iOS or Android apps or a third-party service.

The feature — which was quietly enabled on Twitter’s mobile clients earlier this month; it has long been part of the TweetDeck app — has received a mixed response from users since CEO Jack Dorsey spotlighted it.

Some are happy to have additional details to dig into for context, for example, whether a person is on mobile or using third-party apps, but others believe it is an unnecessary addition that is stoking the rivalry between iOS and Android fans.

Interestingly, the app detail isn’t actually new. Way back in 2012 — some six years ago — Twitter stripped out the information as part of a series of changes to unify users across devices, focus on the service’s reading experience and push people to its official apps, where it could maximize advertising reach.

That was a long time ago — so long that TechCrunch editor-in-chief Matthew Panzarino was still a reporter when he wrote about it; he and I were at another publication altogether — and much has changed at Twitter, which has grown massively in popularity to reach 330 million users.

Back in 2012, Twitter was trying to rein in the mass of third-party apps that were popular with users in order to centralize its advertising and get itself, and its finances, together before going public. Twitter’s IPO happened in 2013 and it did migrate most users to its own apps, but it handled developers terribly and, as a result, precious few third-party apps exist today. That’s still a sore point with many users, since the independent apps were traditionally superior, with better design and more functions. Most are dead now and Twitter’s official apps reign supreme.

Many Twitter users may not be aware of the back story, so it is pretty fascinating to see some express unease at having details of their phone displayed. Indeed, a number of Android users lamented that the new detail is ‘exposing’ their devices.


I could go on — you can see more here — but it seems like, for many, iPhone is still the ultimate status symbol over Android despite the progress made by the likes of Samsung, Huawei and newer Android players Xiaomi and Oppo.

While it may increase arguments between mobile’s two tribes, the feature has already called out brands and ambassadors using the ‘wrong’ device. Notable examples include a Korean boy band sponsored by LG using iPhones and the Apple Music team sending a tweet via an Android device. Suddenly, spotting these mismatches is a whole lot easier.

TikTok parent ByteDance sues Chinese news site that exposed fake news problem

There’s worrying news from China’s online media world as ByteDance, the $75 billion company behind popular video app TikTok, is taking a news site to court for alleged defamation after it published a story about ByteDance’s fake news problem in India.

U.S. tech firms have come to rely on media to help uncover issues, but Chinese tech news site Huxiu has become the latest litigation target of ByteDance, which reportedly surpassed Uber’s valuation after raising $3 billion. The company has sued internet giants Tencent and Baidu in the past year for alleged anti-competitive behavior.

This time around, ByteDance — which is backed by SoftBank’s Vision Fund, KKR and General Atlantic among others — has taken issue with an op-ed published earlier this month that spotlights a fake news problem on its Indian language news app, Helo.

Launched in July as part of ByteDance’s push into India, Helo competes with local media startups such as Xiaomi-backed ShareChat and DailyHunt, as well as Facebook. ByteDance operates news app Jinri Toutiao, which has over 250 million monthly active users in China, according to data services provider QuestMobile. TikTok, branded as Douyin in China, reaches well beyond its home market and claims 500 million MAUs worldwide, with an additional 100 million users gleaned from its Musical.ly buyout.

“An insult and abuse”

On December 4, Huxiu published an opinion piece that condemned Helo and ShareChat for allowing misinformation to spread. One Helo post, for instance, falsely claimed that a Congress leader had suggested that India should help neighboring rival Pakistan clear its debt rather than invest in the Statue of Unity, a pricey local infrastructure project.

In response, ByteDance filed a lawsuit against Huxiu, saying that the Chinese news site made defamatory statements against it in translating an op-ed by contributor Elliott Zaagman. Tech blog TechNode — TechCrunch’s partner in China — ran an edited English version of the story, but it is not part of the suit.

Zhang Yiming, founder of ByteDance, poses for a photograph at the company’s headquarters in Beijing, China. Photographer: Giulia Marchi/Bloomberg via Getty Images

“Technode edited the piece and removed some of my words. Huxiu was, and is with most of my articles, true to my original words,” Zaagman wrote on his WeChat timeline.

To adhere only to “facts” as part of its editorial process, TechNode removed “colorful” parts of Zaagman’s article, according to the blog’s editor-in-chief.

What went missing on TechNode is what incensed ByteDance. Zaagman’s unfiltered statements on Huxiu “constitute an insult and abuse against ByteDance” by “claiming that Chinese companies have influence over the Indian election,” a ByteDance spokesperson told TechCrunch.

“The content on Huxiu is obviously a rumor and libel. It’s malicious slander. Whether it’s Chinese or foreign publications, Chinese or foreign authors, they must respect the truth, laws, and principles of journalism,” the spokesperson added.

The unedited English version is posted on Zaagman’s personal LinkedIn account here. Here is one paragraph that TechNode removed:

Maybe still Zhang is simply a victim of his own success. Few entrepreneurs start a company expecting it to be worth $75 billion. But what he has created may have far broader ramifications. As is demonstrated by Russia’s use of American social networking platforms to interfere in Western elections, misinformation campaigns can be a tool used by adversaries to disrupt a country’s internal politics. At this current moment when China faces greater international tensions, a pushback to their rising influence in Asia, and territorial disputes along their border with India, the last thing that Beijing needs is accusations from an opportunistic Indian politician sounding the alarm about how Beijing-based Chinese companies are spreading misinformation among the impressionable Indian electorate….

And this as well:

Although, on second thought, maybe it makes perfect sense that Zhang Yiming is peddling products that he himself would likely never use. After all, any good drug dealer knows not to get high on their own supply.

In a statement, Huxiu dismissed ByteDance’s accusation as “wildly untrue” and as bringing “major repercussions” for the online publication’s reputation. A spokesperson for Huxiu told TechCrunch that it hasn’t received any summons as the court is still processing the complaint.

In a peculiar twist to the incident, Huxiu actually pulled its Chinese version of Zaagman’s piece in the days leading up to the ByteDance suit. The removal came as a result of “negotiations among multiple parties,” said the Huxiu representative, who declined to share more details on the decision. In China, an online article can be subject to censorship for containing material considered illegal or inappropriate by the media platform itself or the government.

The problem of AI


The logo for ByteDance’s popular video app TikTok (called Douyin in China) at an electronic dance music festival. / Credit: ByteDance

In the U.S., Facebook has responded proactively to issues raised by the media — for example by banning accounts that stoke racial tension in Myanmar — while Twitter CEO Jack Dorsey went so far as to suggest that journalists sniffing out issues on his service is “critical” to the company. Beijing-based ByteDance hasn’t commented on the fake news problem highlighted in Zaagman’s article, but staff from its Indian regional app previously acknowledged the presence of misinformation.

“We work very closely with our local content review and moderation team in harnessing our algorithms to review and take down inappropriate content,” a Helo spokesperson told local newspaper Hindustan Times.

The concerns about Helo are the latest blow for ByteDance, which has marketed itself as an artificial intelligence company delivering what users want to see based on their past online interactions. As has been the case with Western platforms, such as Google-owned YouTube, which also uses an algorithm to feed users videos that they favor, the outcome can mean sensational and sometimes illegal content.

Along those lines, ByteDance’s focus on AI at the expense of significant “human-led” editorial oversight has come in for criticism.

In July, the Indonesian government banned TikTok because it contained “pornography, inappropriate content and blasphemy.” At home, Chinese media watchdogs have similarly slammed a number of the company’s other content platforms, and regulators in the country went so far as to shutter its humor app for serving “vulgar” content.

But ByteDance is hardly the only tech company entangled in China’s increased media scrutiny. Heavyweights including Tencent, Baidu, and ByteDance’s archrival Kuaishou have also come under attack at various degrees for hosting content deemed problematic by the authorities over the past year.

Jack Dorsey and Twitter ignored opportunity to meet with civic group on Myanmar issues

Responding to criticism of his recent trip to Myanmar, Twitter CEO Jack Dorsey said he’s keen to learn about the country’s racial tension and human rights atrocities, but it has emerged that both he and Twitter’s public policy team ignored an opportunity to connect with a key civic group in the country.

A loose group of six companies in Myanmar has engaged with Facebook in a bid to help improve the situation around usage of its services in the country — often with frustrating results — and key members of that alliance, including Omidyar-backed accelerator firm Phandeeyar, contacted Dorsey via Twitter DM and emailed the company’s public policy contacts when they learned that the CEO was visiting Myanmar.

The plan was to arrange a forum to discuss the social media concerns in Myanmar to help Dorsey gain an understanding of life on the ground in one of the world’s fastest-growing internet markets.

“The Myanmar tech community was all excited, and wondering where he was going,” Jes Kaliebe Petersen, the Phandeeyar CEO, told TechCrunch in an interview. “We wondered: ‘Can we get him in a room, maybe at a public event, and talk about technology in Myanmar or social media, whatever he is happy with?'”

The DMs went unread. In a response to the email, a Twitter staff member told the group that Dorsey was visiting the country strictly on personal time with no plans for business. The Myanmar-based group responded with an offer to set up a remote, phone-based briefing for Twitter’s public policy team with the ultimate goal of getting information to Dorsey and key executives, but that email went unanswered.

When we contacted Twitter, a spokesperson initially pointed us to a tweet from Dorsey in which he said: “I had no conversations with the government or NGOs during my trip.”

However, within two hours of our inquiry, a member of Twitter’s team responded to the group’s email in an effort to restart the conversation and set up a phone meeting in January.

“We’ve been in discussions with the group prior to your outreach,” a Twitter spokesperson told TechCrunch in a subsequent email exchange.

That statement is incorrect.

Still, on the bright side, it appears that the group may get an opportunity to brief Twitter on its concerns about social media usage in the country after all.

The micro-blogging service isn’t as well-used in Myanmar as Facebook, which has some 20 million monthly users and is practically the de facto internet, but there have been concerns in Myanmar. For one thing, there has been the development of a somewhat sinister bot army in Myanmar and other parts of Southeast Asia, while the service remains a key platform for influencers and thought leaders.

“[Dorsey is] the head of a social media company and, given the massive issues here in Myanmar, I think it’s irresponsible of him to not address that,” Petersen told TechCrunch.

“Twitter isn’t as widely used as Facebook but that doesn’t mean it doesn’t have concerns happening with it,” he added. “As we’d tell Facebook or any large tech company with a prominent presence in Myanmar, it’s important to spend time on the ground like they’d do in any other market where they have a substantial presence.”

The UN has concluded that Facebook plays a “determining” role in accelerating ethnic violence in Myanmar. While Facebook has tried to address the issues, it hasn’t committed to opening an office in the country and it released a key report on the situation on the eve of the U.S. mid-term elections, a strategy that appeared designed to deflect attention from the findings. All of which suggests that it isn’t really serious about Myanmar.

Twitter, why are you such a hot mess?

Today, Jack Dorsey tweeted a link to his company’s latest gesture toward ongoing political relevance, a U.S. midterms news center collecting “the latest news and top commentary” on the country’s extraordinarily consequential upcoming election. If curated and filtered properly, that could be useful! Imagine. Unfortunately, rife with fake news, the tool is just another of Twitter’s small yet increasingly consequential disasters.

Beyond a promotional tweet from Dorsey, Twitter’s new offering is kind of buried — probably for the best. On desktop it’s a not particularly useful mash of national news reporters, local candidates and assorted unverifiable partisans. As BuzzFeed News details, the tool is swimming with conspiracy theories, including ones involving the migrant caravan. According to his social media posts, the Pittsburgh shooter was at least partially motivated by similar conspiracies, so this is not a good look, to say the least.

Why launch a tool like this before performing even a cursory scan for the kind of low-quality sources that already have your company in hot water? Why have your chief executive promote it? Why, why, why?

A few hours after Dorsey’s tweet, likely after the prominent callout, the main feed looked a bit tamer than it did at first glance. Subpages for local races appear mostly populated by candidates themselves, while the national feed looks more like an algorithmically generated echo chamber version of my regular Twitter feed, with inexplicably generous helpings of MSNBC pundits and more lefty activists.

For Twitter users already immersed in conspiracies, particularly those that incubate so successfully on the far right, does this feed offer yet another echo chamber disguised as a neutral news source? In spite of its sometimes dubious left leanings, my feed is still peppered with tweets from undercover video provocateur James O’Keefe — not exactly a high-quality source.

In May, Twitter announced that political candidates would get a special badge, making them stand out from other users and potential imposters. That was useful! Anything that helps Twitter function as a fast news source with light context is a positive step, but unfortunately we haven’t seen a whole lot in this direction.

Social media companies need to stop launching additional amplification tools into the ominous void. No social tech company has yet exhibited a meaningful understanding of the systemic shifts that need to happen — possibly product-rending shifts — to dissuade bad actors and straight up disinformation from spreading like a back-to-school virus. 

Unfortunately, a week before the U.S. midterm elections, Twitter looks as uninterested as ever in the social disease wreaking havoc on its platform, even as users suffer its real-life consequences. Even more unfortunate for any members of its still-dedicated, weary user base, Twitter’s latest wholly avoidable minor catastrophe comes as a surprise to no one.

White House says a draft executive order reviewing social media companies is not “official”

A draft executive order circulating around the White House “is not the result of an official White House policymaking process,” according to deputy White House press secretary Lindsay Walters.

According to a report in The Washington Post, Walters denied that White House staff had worked on a draft executive order that would require every federal agency to study how social media platforms moderate user behavior and refer any instances of perceived bias to the Justice Department for further study and potential legal action.

Bloomberg first reported the draft executive order and a copy of the document was acquired and published by Business Insider.

Here’s the relevant text of the draft (from Business Insider):

Section 2. Agency Responsibilities. (a) Executive departments and agencies with authorities that could be used to enhance competition among online platforms (agencies) shall, where consistent with other laws, use those authorities to promote competition and ensure that no online platform exercises market power in a way that harms consumers, including through the exercise of bias.

(b) Agencies with authority to investigate anticompetitive conduct shall thoroughly investigate whether any online platform has acted in violation of the antitrust laws, as defined in subsection (a) of the first section of the Clayton Act, 15 U.S.C. § 12, or any other law intended to protect competition.

(c) Should an agency learn of possible or actual anticompetitive conduct by a platform that the agency lacks the authority to investigate and/or prosecute, the matter should be referred to the Antitrust Division of the Department of Justice and the Bureau of Competition of the Federal Trade Commission.

While there are several reasonable arguments to be made for and against the regulation of social media platforms, “bias” is probably the least among them.

That hasn’t stopped the steady drumbeat of accusations of bias under the guise of “anticompetitive regulation” against platforms like Facebook, Google, YouTube, and Twitter from increasing in volume and tempo in recent months.

Bias was the key concern Republican lawmakers brought up when Mark Zuckerberg was called to testify before Congress earlier this year. And bias was front and center in Republican lawmakers’ questioning of Jack Dorsey, Sheryl Sandberg, and Google’s empty chair when they were called before Congress earlier this month to testify in front of the Senate Intelligence Committee.

The Justice Department has even called in the attorneys general of several states to review the legality of the moderation policies of social media platforms later this month (spoiler alert: they’re totally legal).

With all of this activity focused on tech companies, it’s no surprise that the administration would turn to the executive order — a weapon of choice for presidents who find their agenda stalled in the face of an uncooperative legislature (or prevailing rule of law).

However, as the Post reported, aides in the White House said there’s little chance of this becoming actual policy.

… three White House aides soon insisted they didn’t write the draft order, didn’t know where it came from, and generally found it to be unworkable policy anyway. One senior White House official confirmed the document had been floating around the White House but had not gone through the formal process, which is controlled by the staff secretary.

Hate speech, collusion, and the constitution

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant as Facebook and Twitter are in social media poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays and what gets deleted off of their platforms. When users sign up for these free services, they agree to abide by their terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition for hate speech would be a fool’s errand, but in the context of private companies hosting user-generated content, hate speech for social platforms is what they say is hate speech.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition which they define as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” They also prohibit hateful imagery and display names, meaning it’s not just what you tweet but what you also display on your profile page that can count against you.

Both companies constantly reiterate and supplement their definitions as new test cases arise and as words take on new meaning. For example, two common slang words, used by Russians to describe Ukrainians and by Ukrainians to describe Russians, were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company which becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making an agreement – usually in secret – to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, or misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish the number of monthly active users which stood at 330 million at last count, 69 million of which are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on preventing the government from keeping citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media platforms that children use. During oral argument, the justices asked the government probing questions about citizens' free speech rights on social media, from Facebook to Snapchat to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users from their profiles violate those users' First Amendment rights. Judge Naomi Reice Buchwald of the Southern District of New York decided in May that Trump's Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t simply ban you from accessing your Facebook or Twitter accounts – and probably can’t block you from its own public accounts either.

Collaboration is Coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether doing so would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, Chairman of the Senate Select Committee on Intelligence, chose to close the hearing, the government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”

Justice Dept. says social media giants may be ‘intentionally stifling’ free speech

The Justice Department has confirmed that Attorney General Jeff Sessions has expressed a “growing concern” that social media giants may be “hurting competition” and “intentionally stifling” free speech and expression.

The comments come as Facebook chief operating officer Sheryl Sandberg and Twitter chief executive Jack Dorsey gave testimony to the Senate Intelligence Committee on Wednesday, as lawmakers investigate foreign influence campaigns on their platforms.

Social media companies have been under the spotlight in recent years after threat actors, believed to be working closely with the Russian and Iranian governments, used disinformation-spreading tactics to try to influence the outcome of the 2016 U.S. presidential election.

“The Attorney General has convened a meeting with a number of state attorneys general this month to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms,” said Justice Department spokesman Devin O’Malley in an email.

It’s not exactly clear whether the Justice Department is pushing for regulation or actively investigating the platforms for issues relating to competition, or antitrust. The First Amendment constrains the government, not private companies, so social media firms aren’t bound by U.S. free speech laws, but they have long said they support free speech and expression across their platforms, including for users in parts of the world where freedom of speech is more restricted.

Neither Facebook nor Twitter immediately responded to a request for comment.

Twitter hints at new threaded conversations and who’s online features

Twitter head Jack Dorsey sent out a tweet this afternoon hinting that the social platform might get a couple of interesting updates: one to show who else is currently online and another to make Twitter conversation threads easier to follow.

“Playing with some new Twitter features: presence (who else is on Twitter right now?) and threading (easier to read convos),” Dorsey tweeted, along with samples.

The “presence” feature would make it easier to engage with people you follow who are online at the moment, and the “threading” feature would let Twitter users follow a conversation more easily than the current embed-and-click-through method.

However, several responders seemed concerned about followers seeing them online.

Twitter’s head of product Sara Haider responded to one such concern, saying she “would definitely want you to have full control over sharing your presence.”

So it seems there would be some way to hide your online status if you don’t want people to know you’re there.

There were also a few design concerns involved in threading conversations together. TC OG reporter turned VC M.G. Siegler wasn’t a fan of the UI’s flat tops. Another user wanted to see something more like iMessage.

I personally like the nesting idea, as it cleans up the conversation and makes it easier to follow along. Beyond that, I don’t much care how it’s designed (flat tops, round tops) as long as I don’t have to click through a bunch of tweets like I do with the current @reply system, which is annoying and makes threads hard to follow.

I also don’t think I’d want others knowing if I’m online and it’s not a feature I need for those I tweet at, either. Conversations often happen at a ripping pace on the platform. You are either there for it or you can read about it later. I get the thinking on letting users know who’s live but it’s not necessary and seems to be something a lot of people don’t want.

It’s unclear when either of these features would roll out to the general public, though they appear to be available to a select test group. We’ve asked Twitter for more information and are waiting to hear back. Of course, plenty of users are still wondering when we’re getting that edit button.