Bots Distorted the 2016 Election. Will the Midterms Be a Sequel?

The fact that Russian-linked bots penetrated social media to influence the 2016 U.S. presidential election has been well documented and the details of the deception are still trickling out.

In fact, on Oct. 17 Twitter disclosed that foreign interference dating back to 2016 involved 4,611 accounts — most affiliated with the Internet Research Agency, a Russian troll farm. There were more than 10 million suspicious tweets and more than 2 million GIFs, videos and Periscope broadcasts.

In this season of another landmark election — a recent poll showed that about 62 percent of Americans believe the 2018 midterm elections are the most important midterms in their lifetime — it is natural to wonder whether the public and private sectors have learned any lessons from the 2016 fiasco, and what is being done to better protect against this malfeasance by nation-state actors.

There is good news and bad news here. Let’s start with the bad.

Two years after the 2016 election, social media still sometimes looks like a reality show called “Propagandists Gone Wild.” Hardly a major geopolitical event takes place in the world without automated bots generating or amplifying content that exaggerates the prevalence of a particular point of view.

In mid-October, Twitter suspended hundreds of accounts that simultaneously tweeted and retweeted pro-Saudi Arabia talking points about the disappearance of journalist Jamal Khashoggi.

On Oct. 22, the Wall Street Journal reported that Russian bots helped inflame the controversy over NFL players kneeling during the national anthem. Researchers from Clemson University told the newspaper that 491 accounts affiliated with the Internet Research Agency posted more than 12,000 tweets on the issue, with activity peaking soon after a Sept. 22, 2017 speech by President Trump in which he said team owners should fire players for taking a knee during the anthem.

The problem hasn’t persisted only in the United States. Two years after bots were blamed for helping sway the 2016 Brexit vote in Britain, Twitter bots supporting the anti-immigration Sweden Democrats increased significantly this spring and summer in the leadup to that country’s elections.

These and other examples of continuing misinformation-by-bot are troubling, but it’s not all doom and gloom. I see positive developments too.

First, awareness is the first step in solving any problem, and cognizance of bot meddling has soared in the last two years amid all the disturbing headlines.

About two-thirds of Americans have heard of social media bots, and the vast majority of those people are worried bots are being used maliciously, according to a Pew Research Center survey of 4,500 U.S. adults conducted this summer. (It’s concerning, however, that far fewer of the respondents said they’re confident they can actually recognize when accounts are fake.)

Second, lawmakers are starting to take action. When California Gov. Jerry Brown on Sept. 28 signed legislation making it illegal as of July 1, 2019 to use bots – to try to influence voter opinion or for any other purpose — without divulging the source’s artificial nature, it followed anti-ticketing-bot laws nationally and in New York State as the first bot-fighting statutes in the United States.

While I support the increase in awareness and focused interest by legislators, I do feel the California law has some holes. The measure is difficult to enforce because it’s often very hard to identify who is behind a bot network, the law’s penalties aren’t clear, and an individual state is inherently limited in what it can do to attack a national and global issue. However, the law is a good start and shows that governments are starting to take the problem seriously.

Third, the social media platforms — which have faced congressional scrutiny over their failure to address bot activity in 2016 – have become more aggressive in pinpointing and eliminating bad bots.

It’s important to remember that while they have some responsibility, Twitter and Facebook are victims here too, taken for a ride by bad actors who have hijacked these commercial platforms for their own political and ideological agendas.

While it can be argued that Twitter and Facebook should have done more sooner to differentiate humans from the non-human fakes in their user rolls, it bears remembering that bots are a newly acknowledged cybersecurity challenge. The traditional paradigm of a security breach has been a hacker exploiting a software vulnerability. Bots don’t do that – they attack online business processes and thus are difficult to detect through customary vulnerability scanning methods.

I thought there was admirable transparency in Twitter’s Oct. 17 blog accompanying its release of information about the extent of misinformation operations since 2016. “It is clear that information operations and coordinated inauthentic behavior will not cease,” the company said. “These types of tactics have been around for far longer than Twitter has existed — they will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge.”

Which leads to the fourth reason I’m optimistic: technological advances.

In the earlier days of the internet, in the late ’90s and early ’00s, networks were extremely susceptible to worms, viruses and other attacks because protective technology was in its early stages of development. Intrusions still happen, obviously, but security technology has grown much more sophisticated, and many attacks now occur due to human error rather than failure of the defense systems themselves.

Bot detection and mitigation technology keeps improving, and I think we’ll get to a state where it becomes as automatic and effective as email spam filters are today. Security capabilities that too often are siloed within networks will integrate more and more into holistic platforms better able to detect and ward off bot threats.
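The spam-filter analogy above can be made concrete with a toy scoring sketch: combine several weak behavioral signals into a single bot-likelihood score, the way early spam filters combined keyword and header signals. The feature names and weights below are invented for illustration; production systems rely on far richer signals and trained models rather than hand-set weights.

```python
def bot_score(account):
    """Toy heuristic: sum the weights of behavioral red flags that
    fire for this account, yielding a 0-1 bot-likelihood score.
    All thresholds and weights here are illustrative assumptions.
    """
    signals = [
        (account.get("tweets_per_day", 0) > 500, 0.4),     # inhuman posting volume
        (account.get("default_avatar", False), 0.2),       # never set a profile photo
        (account.get("account_age_days", 9999) < 30, 0.2), # newly created account
        (account.get("retweet_ratio", 0.0) > 0.9, 0.2),    # pure amplification, no original content
    ]
    return sum(weight for fired, weight in signals if fired)
```

A rule like "flag anything scoring above 0.6 for review" would then play the role of the spam folder: automatic for clear cases, with humans handling the gray zone.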

So while we should still worry about bots in 2018, and the world continues to wrap its arms around the problem, we’re seeing significant action that should bode well for the future.

The health of democracy and companies’ ability to conduct business online may depend on it.

Twitter removes thousands of accounts that tried to dissuade Democrats from voting

Twitter has deleted thousands of automated accounts posting messages that sought to dissuade voters from casting their ballots in next week’s election.

Some 10,000 accounts were removed across late September and early October after they were first flagged by staff at the Democratic Party, the company has confirmed.

“We removed a series of accounts for engaging in attempts to share disinformation in an automated fashion – a violation of our policies,” said a Twitter spokesperson in an email to TechCrunch. “We stopped this quickly and at its source.” But the company did not provide examples of the kinds of accounts it removed, or say who or what might have been behind the activity.

The accounts posed as Democrats and tried to convince key demographics to stay home and not vote, likely in an attempt to sway the results in key election battlegrounds, according to Reuters, which first reported the news.

A spokesperson for the Democratic National Committee did not return a request for comment outside its business hours.

The removals are a drop in the ocean compared with the wider threats Twitter faces. Earlier this year, the social networking giant deleted 1.2 million accounts for sharing and promoting terrorist content. In May alone, the company deleted just shy of 10 million accounts each week for sending malicious, automated messages.

Twitter had 335 million monthly active users as of its latest earnings report in July.

But the company has faced criticism from lawmakers for not doing more to proactively remove content that violates its rules or spreads disinformation and false news. With just days before Americans are set to vote in the U.S. midterms, this latest batch of takedowns is likely to spark further concern that Twitter did not automatically detect the malicious accounts.

Twitter does not have a strict policy on the spread of disinformation in the run-up to election season, unlike Facebook, which recently banned content that tried to suppress voters with false and misleading information. Instead, Twitter said last year that its “open and real-time nature” is a “powerful antidote to the spreading of all types of false information.” But researchers have been critical of that approach. Research published last month found that over 700,000 accounts that were active during the 2016 presidential election are still active to this day — pushing a million tweets each day.

A Twitter spokesperson added that for the election this year, the company has “established open lines of communication and direct, easy escalation paths for state election officials, Homeland Security, and campaign organizations from both major parties to help us enforce our policies vigorously and protect conversational health on our service.”

Twitter hires God-is Rivera as global director of culture and community

Twitter has brought on its first-ever global director of culture and community, God-is Rivera. As global director of culture and community, Rivera will report to Global Head of Culture, Engagement and Experiential Nola Weinstein. Rivera previously led internal diversity and inclusion efforts at VMLY&R, a digital and creative agency.

“As a black woman who has worked in industries in which I have been underrepresented, I feel a great responsibility to amplify and support diverse communities, and they exist in full force on Twitter,” Rivera said in a statement. “The team has shown a passion to serve and spotlight their most active users and I am honored to step into this new role as a part of that commitment.”

For context, 26 percent of U.S. adults who identify as black use Twitter, while 24 percent of white-identified adults and 20 percent of Latinx-identified adults in the U.S. use Twitter, according to a March 2018 survey from Pew Research Center.

At Twitter, the plan is for Rivera to “better serve and engage communities” on Twitter through the company’s brand marketing, campaigns, events and other experiences. Internally, Rivera will be tasked with ensuring Twitter’s campaigns and programs are inclusive and “reflective of the communities we serve,” according to Twitter’s press release. Externally, Rivera will be responsible for developing relationships and programs with content creators, community leaders, brands and more — similar to the one with HBO’s Insecure.

Here’s the internal note Weinstein sent to Twitter employees earlier today:

Team,

I am so excited to welcome @GodisRivera to the team as Twitter’s new Global Director of Culture & Community. She captivated us at #OneTeam with her enlightening presentation on #BlackTwitter and we are thrilled that she will now be bringing her passion and perspective inside.

In this newly created role, God-is will help lead our efforts to better serve and engage the powerful voices and global communities who take to Twitter to share, discover and discuss what matters to them. This will come to life through Twitter’s brand efforts, campaigns, events and experiences. She will help ensure that our programs are connective, inclusive and reflective of the communities we serve. You can imagine more efforts that engage and excite our communities like #HereWeAre, #NBATwitter, thoughtful tweetups, etc.

God-is’ deep expertise in marketing and social strategy, cultural understanding and ability to elevate and connect communities makes for a rare and incredibly powerful combination. She was previously Director, Inclusion and Cultural Resonance at VMLY&R, where she led internal diversity efforts to fuse the importance of internal culture and representation to creative work outputs. In 2018, God-is was named an Ad Age “Woman to Watch” and Adweek “Disruptor” for continuing to fight for representation and equity in the advertising industry. She currently resides in New York, NY with her husband and daughter.

On a personal note, I have had the pleasure of spending time with God-is at #HereWeAre, #Influence, and #OneTeam and her energy, passion and positivity are infectious. I know her presence will make a difference and am excited by all that the culture & experiential team will create together.

God-is will start on November 12th and will be based in NYC reporting to me.

Please join me in welcoming her to the flock!

Twitter tests homescreen button to easily switch to reverse chronological

Twitter is digging one of its most important new features out of its settings and putting it within easy reach. The company is now testing, with a small number of iOS users, a homescreen button that lets you instantly switch from its algorithmic timeline, which shows the best tweets first but out of order, to the old reverse chronological feed that shows only people you follow: no tweets liked by friends or other randomness.

Twitter had previously buried this option in its settings. In mid-September, it fixed the setting so it would show only a raw reverse chronological feed of tweets by people you follow, with nothing extra added, and promised a more easily accessible design for the feature in the future. Now we have our first look at it. A little sparkle icon at the top of the timeline opens a menu where you can switch between Top Tweets and Latest Tweets, plus a link to your content settings. It would be even better if it were a one-tap toggle.

Twitter’s VP of Product Kayvon Beykpour tweeted that “We want to make it easier to toggle between seeing the latest tweets and the top tweets. So we’re experimenting with making this a top-level switch rather than buried in the settings. Feedback welcome.. what do you think?”

Given the backlash back in 2016 when Twitter started shifting to an algorithmically sorted timeline based on what you engaged with, many users will probably think this is great. Whether you’re trying to follow a sports game, a political debate, breaking news, or are just glued to Twitter and want the ordering to make more sense, there are plenty of reasons you might want to switch to reverse chronological.

Still, Twitter’s apprehension about making the setting too accessible makes sense. Hardcore users might prefer reverse chronological, but for most people who only open Twitter a few times per day or week, that would likely mean missing tweets from their closest friends, drowned out by the noise of everyone else. Twitter’s user growth rate perked up after the shift to algorithmic sorting.

We’ve asked whether the setting reverts to the Top Tweets default when you close the app. That might be frustrating to some expert users, but could prevent novice users from accidentally getting stuck in reverse chronological and not knowing how to switch back. The company tells TechCrunch that it’s trying out several different duration options for the setting based on user inactivity to see what works best. For example, one version will revert the setting to the Top Tweets default if they’re gone for a day. That method would make sure people who’ve been inactive long enough to forget changing their timeline setting will get the default back and not end up stuck in a chronological abyss.
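The inactivity-based reversion Twitter describes is easy to sketch: keep the user’s chosen ordering, but fall back to the default once they have been away longer than some trial window. The function below is a hypothetical illustration; the one-day duration is just one of the options the company said it was testing, and the names are invented here.

```python
from datetime import datetime, timedelta

def effective_timeline(preference, last_active, now,
                       revert_after=timedelta(days=1)):
    """Return which timeline ordering to show.

    A user who chose "latest" (reverse chronological) keeps that
    ordering while active, but reverts to the "top" (algorithmic)
    default after `revert_after` of inactivity -- so nobody gets
    stuck in a chronological abyss they forgot opting into.
    """
    if preference == "latest" and now - last_active > revert_after:
        return "top"  # long-inactive users fall back to the default
    return preference
```

Tuning `revert_after` is exactly the trade-off described above: shorter windows protect novices who forget the setting; longer ones avoid frustrating expert users.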

If Twitter gets the reversion to default situation figured out, the new button could make the service much more flexible, thereby boosting usage. You could start algorithmic in the morning or after a weekend away to see what you missed, then quickly toggle to reverse chronological if something big happens or you’ll be on it non-stop all day to get the real-time pulse of the world.

Twitter, why are you such a hot mess?

Today, Jack Dorsey tweeted a link to his company’s latest gesture toward ongoing political relevance, a U.S. midterms news center collecting “the latest news and top commentary” on the country’s extraordinarily consequential upcoming election. If curated and filtered properly, that could be useful! Imagine. Unfortunately, rife with fake news, the tool is just another of Twitter’s small yet increasingly consequential disasters.

Beyond a promotional tweet from Dorsey, Twitter’s new offering is kind of buried — probably for the best. On desktop it’s a not particularly useful mash of national news reporters, local candidates and assorted unverifiable partisans. As BuzzFeed News details, the tool is swimming with conspiracy theories, including ones involving the migrant caravan. According to his social media posts, the Pittsburgh shooter was at least partially motivated by similar conspiracies, so this is not a good look, to say the least.

Why launch a tool like this before performing the most basic cursory scan for the kind of low-quality sources that already have your company in hot water? Why have your chief executive promote it? Why why why

A few hours after Dorsey’s tweet, likely after the prominent callout, the main feed looked a bit tamer than it did at first glance. Subpages for local races appear mostly populated by candidates themselves, while the national feed looks more like an algorithmically generated echo chamber version of my regular Twitter feed, with inexplicably generous helpings of MSNBC pundits and more lefty activists.

For Twitter users already immersed in conspiracies, particularly those that incubate so successfully on the far right, does this feed offer yet another echo chamber disguised as a neutral news source? In spite of its sometimes dubious left leanings, my feed is still peppered with tweets from undercover video provocateur James O’Keefe — not exactly a high-quality source.

In May, Twitter announced that political candidates would get a special badge, making them stand out from other users and potential imposters. That was useful! Anything that helps Twitter function as a fast news source with light context is a positive step, but unfortunately we haven’t seen a whole lot in this direction.

Social media companies need to stop launching additional amplification tools into the ominous void. No social tech company has yet exhibited a meaningful understanding of the systemic shifts that need to happen — possibly product-rending shifts — to dissuade bad actors and straight up disinformation from spreading like a back-to-school virus. 

Unfortunately, a week before the U.S. midterm elections, Twitter looks as uninterested as ever in the social disease wreaking havoc on its platform, even as users suffer its real-life consequences. Even more unfortunate for any members of its still-dedicated, weary userbase, Twitter’s latest wholly avoidable minor catastrophe comes as a surprise to no one.

Twitter suspends accounts linked to mail bomb suspect

At least two Twitter accounts linked to the man suspected of sending explosive devices to more than a dozen prominent Democrats were suspended on Friday afternoon.

Cesar Sayoc Jr., 56, was apprehended by federal law enforcement officers in Florida on Friday morning. “Though we’re still analyzing the devices in our laboratory, these are not hoax devices,” FBI Director Christopher Wray said during a press briefing.

Facebook moved fairly quickly to suspend Sayoc’s account on the platform, though two Twitter accounts that appeared to belong to Sayoc remained online and accessible until around 2:30 p.m. Pacific. Both accounts featured numerous tweets, many of which contained far-right political conspiracy theories, graphic images and specific threats.

TechCrunch was able to review the accounts extensively before they were removed. Both known accounts, @hardrockintlet and @hardrock2016, contained many tweets that appeared to threaten violence against perceived political enemies, including Keith Ellison and Joe Biden, an intended recipient of an explosive device.

In one case, those threats had been previously reported to Twitter. Democratic commentator Rochelle Ritchie tweeted that she reported a tweet from @hardrock2016 following her appearance on Fox News. According to a screenshot, Twitter received the report and on October 11 responded that it found “no violation of the Twitter rules against abusive behavior.”

The tweet stated “We will see u 4 sure. Hug your loved ones real close every time you leave home” accompanied by a photo of Ritchie, a screenshot of a news story about a body found in the Everglades and the tarot card representing death.

Between the two accounts linked to Sayoc, many of the threats were depicted with graphic images in sequence. In one tweet on September 18 to former Vice President Joe Biden, the account tweeted images of an air boat, a symbol depicting an hourglass with a scythe and graphic images of a decapitated goat.

Threatening messages that emerge out of a sequence of images would likely be more difficult for machine learning moderation tools to parse, though any human content moderator would have no trouble extracting their meaning. In most cases the threatening images were paired with a verbal threat. At least one archive of a Twitter account linked to Sayoc remains online.

In a statement to TechCrunch, Twitter stated only that “This is an ongoing law enforcement investigation. We do not have a comment.” The company indicated that the accounts were suspended for violating Twitter’s rules, though did not specify which.

Twitter says it has removed several accounts affiliated with Infowars and Alex Jones

Twitter has cleared more Infowars-related accounts off its platform. The company told CNN today that it permanently suspended 18 accounts affiliated with the far-right website, known for spreading misinformation and conspiracy theories, on Monday after “numerous violations and warnings.” It added that the removals were in addition to five Infowars-affiliated accounts that had already been banned.

Alex Jones and Infowars, which he launched in 1999, had their accounts permanently suspended by Twitter last month, one of the last major social media platforms to do so. Infowars, however, had been using affiliated accounts to get around the ban and promote its content, according to a Daily Beast report last week. These included the accounts of Infowars’ “Real News” show, the Infowars store (which sells Jones’ line of dietary supplements) and “News Wars,” which promoted videos by Infowars. All of these accounts, and several others, were mentioned in the Daily Beast article and included in Twitter’s purge on Monday.

Alex Jones and Infowars are notorious for spreading some of the most pernicious conspiracy theories to emerge recently, including ones claiming the Sandy Hook and Parkland school shootings were faked. Their articles and videos have also frequently included racist and homophobic content and frequent calls for violence.

A wave of social media and tech companies began suspending accounts maintained by Alex Jones and Infowars in August for violating their policies on hate speech and violence. After weeks of prevarication and half-measures, including a brief temporary ban, Twitter permanently removed both @realalexjones and @infowars at the beginning of September. The list of platforms Infowars and Alex Jones are now barred from includes Twitter, YouTube, Spotify, and the App Store.

TechCrunch has contacted Twitter and Infowars for comment.
