Mueller report details the evolution of Russia’s troll farm as it began targeting US politics


On Thursday, Attorney General William Barr released the long-anticipated Mueller report. With it comes a useful overview of how Russia leveraged U.S.-based social media platforms to achieve its political ends.

While we’ve yet to find much in the heavily redacted text that we didn’t already know, Mueller does recap efforts undertaken by Russia’s mysterious Internet Research Agency, or “IRA,” to influence the outcome of the 2016 presidential election. The IRA attained infamy prior to the 2016 election after it was profiled in depth by the New York Times in 2015. (That piece is still well worth a read.)

Considering the success the shadowy group managed to achieve in infiltrating U.S. political discourse — and the degree to which those efforts have reshaped how we talk about the world’s biggest tech platforms — the events that led us here are worth revisiting.

IRA activity begins in 2014

The special counsel reports that in the spring of 2014, the IRA started to “consolidate U.S. operations within a single general department” with the internal nickname the “translator.” The report indicates that this is when the group began to “ramp up” its U.S. operations with its sights set on the 2016 presidential election.

At this time, the IRA was already running operations across various social media platforms, including Facebook, Twitter and YouTube. Later it would expand its operations to Instagram and Tumblr as well.

Stated anti-Clinton agenda

As the report details, in the early stages of its U.S.-focused political operations, the IRA mostly impersonated U.S. citizens, but into 2015 it shifted its strategy toward creating larger pages and groups that pretended to represent U.S.-based interests and causes, including “anti-immigration groups, Tea Party activists, Black Lives Matter [activists]” among others.

In early 2016, the IRA offered internal guidance to its specialists to “use any opportunity to criticize Hillary [Clinton] and the rest (except Sanders and Trump – we support them).”

While much of the IRA activity that we’ve reported on directly sowed political discord on divisive domestic issues, the group also had a clearly stated agenda to aid the Trump campaign. When the mission strayed, one IRA operative was criticized for a “lower number of posts dedicated to criticizing Hillary Clinton” and told that intensifying criticism of Clinton was “imperative.”

That message continued to ramp up on Facebook into late 2016, even as the group also continued its efforts in issue-based activist groups that, as we’ve learned, sometimes inspired or intersected with real-life events. The IRA bought a total of 3,500 ads on Facebook for $100,000 — a little less than $30 per ad. Some of the most successful IRA groups had hundreds of thousands of followers. As we know, Facebook shut down many of these operations in August 2017.

IRA operations on Twitter

The IRA used Twitter as well, though its strategy there produced some notably different results. The group’s biggest wins came when it managed to successfully interact with many members of the Trump campaign, as was the case with @TEN_GOP which posed as the “Unofficial Twitter of Tennessee Republicans.” That account earned mentions from a number of people linked to the Trump campaign, including Donald Trump Jr., Brad Parscale and Kellyanne Conway.

As the report describes, and has been previously reported, that account managed to get the attention of Trump himself:

“On September 19, 2017, President Trump’s personal account @realDonaldTrump responded to a tweet from the IRA-controlled account @10_gop (the backup account of @TEN_GOP, which had already been deactivated by Twitter). The tweet read: ‘We love you, Mr. President!’”

The special counsel also notes that “Separately, the IRA operated a network of automated Twitter accounts (commonly referred to as a bot network) that enabled the IRA to amplify existing content on Twitter.”

Real life events

The IRA leveraged both Twitter and Facebook to organize real-life events, including three events in New York in 2016 and a series of pro-Trump rallies across Florida and Pennsylvania in the months leading up to the election. That activity included one event in Miami that then-candidate Trump’s campaign promoted on his Facebook page.

While we’ve been following revelations around the IRA’s activity for years now, Mueller’s report offers a useful bird’s-eye overview of how the group’s operations wrought havoc on social networks, achieving mass influence at very little cost. The entire operation exemplified the greatest weaknesses of our social networks — weaknesses that, until companies like Facebook and Twitter began to reckon with their role in facilitating Russian election interference, were widely regarded as their greatest strengths.

Congress readies for Mueller report to be delivered on CDs

If there weren’t enough obstacles already standing between Congress and the results of the special counsel’s multiyear investigation, lawmakers are expecting to need an optical drive to read the document.

A Justice Department official told the Associated Press that a CD containing the Mueller report would be delivered to Congress tomorrow between 11 a.m. and noon Eastern. At some point after the CDs are delivered, the report is expected to be made available to the public on the special counsel’s website.

Any congressional offices running Macs will likely have to huddle up with colleagues whose computers still have optical drives. Those drives disappeared from Apple computers years ago, and with people increasingly reliant on cloud storage over physical media, they’re no longer standard on Windows machines either.

Tomorrow’s version of the report is expected to come with a fair amount of detail redacted throughout, though a portion of Congress may receive a more complete version at a later date. The report’s release on Thursday will be preceded by a press conference hosted by Attorney General William Barr and Deputy Attorney General Rod Rosenstein. If you ask us, there’s little reason to tune into that event rather than waiting for substantive reporting on the actual contents of the report once it’s out in the wild. Better yet, hunker down and read some of the 400 pages yourself while you wait for thoughtful analyses to materialize.

Remember: No matter what sound bites start flying tomorrow morning, digesting a dense document like this takes time. Don’t trust anyone who claims to have synthesized the whole thing right off the bat. After all, America has waited this long for the Mueller report to materialize — letting the dust settle won’t do any harm.

Co-Star raises $5 million to bring its astrology app to Android

Nothing scales like a horoscope.

If you haven’t heard of Co-Star, you might just be in the wrong circles. In some social scenes it’s pretty much ubiquitous. Wherever conversations regularly kick off by comparing astrological charts, it’s useful to have that info at hand. The trend is so notable that the app even got a shout-out in a New York Times piece on VCs flocking to astrology startups.

This week, the company behind probably the hottest iOS astrology app announced that it has raised a $5.2 million seed round. Maveron, Aspect, 14w and Female Founder Fund all participated in the round, which follows $750,000 in prior pre-seed funding. The company plans to use the funding to craft an Android companion to its iOS-only app, grow its team and “build features that encourage new ways to get closer, new ways to take care of ourselves, and new ways to grow.”

TechCrunch spoke with Banu Guler, the CEO and co-founder of Co-Star, about what it was like talking to potential investors to drum up money for an idea that Silicon Valley’s elite echo chambers might find unconventional.

“We certainly talked to some who were dismissive,” Guler told TechCrunch in an email. “But the reality is that interest in astrology is skyrocketing… It was all about finding the right investors who see the value in astrology and the potential for growth.”

“There are people out there who think astrology is silly or unserious. But in our experience, the number of people who find value and meaning in astrology is far greater than the number of people who are turned off by it.”

If you’ve ever used a traditional astrology app or website to look up your birth chart — that is, to determine the positions of the planets on the day and time you were born — then you’ve probably noticed how most of those services have more in common with ancient Geocities sites than with bright, modern apps. In contrast, Co-Star’s app is clean and artful, with encyclopedia-like illustrations and a simple layout. It’s not something with an infinite scroll you’ll get lost in, but it’s pleasant to dip into Co-Star, check your algorithmically generated horoscope and see what your passive aggressive ex’s rising sign is.

In a world still obsessed with the long-debunked Myers-Briggs test, you can think of astrology as a kind of cosmic organizational psychology, but one more interested in people’s emotional realities than their modus operandi in the workplace. For many young people — and queer people, from personal experience — astrology is a thoroughly playful way to take stock of life. Instead of directly predicting future events (good luck with that), it’s more commonly used as a way to evaluate relationships, events and anything else. If astrology memes on Instagram are any indication, there’s a whole cohort of people using astrology as a framework for talking about their emotional lives. That search for authenticity — and no doubt the proliferation of truly inspired viral content — is likely fueling the astrology boom.

“By positioning human experience against a backdrop of a vast universe, Co-Star creates a shortcut to real talk in a sea of small talk: a way to talk about who we are and how we relate to each other,” the company wrote in its funding announcement. “It doesn’t reduce complexity. It doesn’t judge. It understands.”

Facebook’s Portal will now surveil your living room for half the price

No, you’re not misremembering the details from that young adult dystopian fiction you’re reading — Facebook really does sell a video chat camera adept at tracking the faces of you and your loved ones. Now, you too can own Facebook’s poorly timed foray into social hardware for the low, low price of $99. That’s a pretty big price drop considering that the Portal, introduced less than six months ago, debuted at $199.

Unfortunately for whoever toiled away on Facebook’s hardware experiment, the device launched into an extremely Facebook-averse, notably privacy-conscious market. Those are pretty serious headwinds. Of course, plenty of regular users aren’t concerned about privacy — but they certainly should be.

As we found in our review, Facebook’s Portal is actually a pretty competent device with some thoughtful design touches. Still, that doesn’t really offset the unsettling idea of inviting a company notorious for disregarding user privacy into your home, the most intimate setting of all.

Facebook’s premium Portal+ with a larger, rotating 1080p screen is still priced at $349 when purchased individually, but if you buy a Portal+ with at least one other Portal, it looks like you can pick it up for $249. Facebook advertised the Portal discount for Mother’s Day and the sale ends on May 8. We reached out to the company to ask how sales were faring and if the holiday discounts would stick around for longer and we’ll update when we hear back.

YouTube’s algorithm added 9/11 facts to a livestream of the Notre-Dame Cathedral fire

Some viewers following live coverage of the Notre-Dame Cathedral fire on YouTube were met with a strangely out-of-place info box offering facts about the September 11 attacks.

BuzzFeed first reported the appearance of the misplaced fact-check box on at least three livestreams from major news outlets. Twitter users also took note of the information mismatch.

Ironically, the feature is a tool designed to fact-check topics that generate misinformation on the platform. It adds a small info box below videos that provides third-party factual information from YouTube partners — in this case Encyclopedia Britannica.

YouTube began rolling out the fact-checking “information panels” this year in India, and they now appear to be available in other countries.

“Users may see information from third parties, including Encyclopedia Britannica and Wikipedia, alongside videos on a small number of well-established historical and scientific topics that have often been subject to misinformation online, like the moon landing,” the company wrote in its announcement at the time.

The information boxes are clearly algorithmically generated and today’s unfortunate slip-up makes it clear that the tool doesn’t have much human oversight. It’s possible that imagery of a tower-like structure burning triggered the algorithm to provide the 9/11 information, but we’ve asked YouTube for more details on what specifically went wrong here.

Facebook taps Peggy Alford for its board, Reed Hastings and Erskine Bowles to depart

Facebook’s board is undergoing its biggest shakeup in memory. On Friday, the company announced that Peggy Alford would be nominated to join the company’s board of directors.

“Peggy is one of those rare people who’s an expert across many different areas — from business management to finance operations to product development,” Facebook CEO Mark Zuckerberg said of the change. “I know she will have great ideas that help us address both the opportunities and challenges facing our company.”

Alford, currently Senior Vice President of Core Markets for PayPal, will become the first black woman to serve on Facebook’s board. She previously served as the Chief Financial Officer of the Chan Zuckerberg Initiative, Mark Zuckerberg and Priscilla Chan’s massive charitable foundation.

Facebook announced some serious departures along with the news of Alford’s nomination. Longtime board members Reed Hastings and Erskine Bowles will leave, significantly altering the board’s composition. Both Hastings, the CEO of Netflix, and Bowles, a former Democratic political staffer, have served on the board since 2011, and both have been critical of Facebook’s direction in recent years. Hastings reportedly clashed with fellow board member Peter Thiel over his support for the Trump administration, and Bowles famously dressed down Facebook’s top brass over Russia’s political interference on the platform.

Alford’s nomination will come to a vote at Facebook’s May 30 shareholder meeting.

“What excites me about the opportunity to join Facebook’s board is the company’s drive and desire to face hard issues head-on while continuing to improve on the amazing connection experiences they have built over the years,” Alford said of her nomination. “I look forward to working with Mark and the other directors as the company builds new and inspiring ways to help people connect and build community.”

Nancy Pelosi warns tech companies that Section 230 is ‘in jeopardy’

In a new interview with Recode, House Speaker Nancy Pelosi made some notable comments on what by all accounts is the most important law underpinning the modern internet as we know it.

Section 230 is as short as it is potent, so it’s worth getting familiar with. It states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When asked about Section 230, Pelosi referred to the law as a “gift” to tech companies that have leaned heavily on the law to grow their business. That provision, providing tech platforms legal cover for content created by their users, is what allowed services like Facebook, YouTube and many others to swell into the massive companies they are today.

Pelosi continued:

“It is a gift to them and I don’t think that they are treating it with the respect that they should, and so I think that that could be a question mark and in jeopardy… I do think that for the privilege of 230, there has to be a bigger sense of responsibility on it. And it is not out of the question that that could be removed.”

Expect to hear a lot more about Section 230. In recent months, a handful of Republicans in Congress have taken aim at the law. Section 230 is what’s between the lines in Devin Nunes’ recent lawsuit accusing critics of defaming him on Twitter. It’s also the extremely consequential subtext beneath conservative criticism that Twitter, Facebook and Google do not run “neutral” platforms.

While the idea of stripping away Section 230 is by no means synonymous with broader efforts to regulate big tech, it is the nuclear option. And when tech’s most massive companies behave badly, it’s a reminder to some of them that their very existence hinges on 26 words that Congress giveth and Congress can taketh away.

Whatever the political motivations, imperiling Section 230 is a fearsome cudgel against even tech’s most seemingly untouchable companies. While it’s not clear what some potentially misguided lawmakers would stand to gain by dismantling the law, Pelosi’s comments are a reminder that tech’s biggest companies and users alike have everything to lose.

Democrats draw up bill that would require tech platforms to assess algorithmic bias

Democratic lawmakers have proposed a bill to address the algorithmic biases lurking under the surface of tech’s biggest platforms. The bill, known as the Algorithmic Accountability Act, was introduced by Senators Ron Wyden (D-OR) and Cory Booker (D-NJ); Representative Yvette Clarke (D-NY) will sponsor parallel legislation in the House.

The bill is well timed. Over the last month alone, Facebook found itself settling over discriminatory practices that affected job ads as well as drawing civil charges from the Department of Housing and Urban Development over similar issues with its housing ad targeting tools. The present bill targets companies that make more than $50 million a year, though any company holding data on more than one million users would also be subject to its requirements.

Like yesterday’s proposed Senate bill addressing dark pattern design, the Algorithmic Accountability Act (PDF) routes its regulatory specifics through the Federal Trade Commission. Under the bill, the FTC could require companies to perform “impact assessments” on their own algorithmic decision-making systems. Those assessments would evaluate potential consequences for “accuracy, fairness, bias, discrimination, privacy and security” within automated systems, and companies would be required to correct any issues they uncovered during the process.

In a statement on the proposed legislation, Booker denounced discriminatory tech practices that lead to “houses that you never know are for sale, job opportunities that never present themselves, and financing that you never become aware of.”

“This bill requires companies to regularly evaluate their tools for accuracy, fairness, bias, and discrimination,” Booker said.

Bias on tech’s major platforms is a hot topic right now, though the political parties are approaching the issue from very different vantage points. Just today, the Senate Judiciary Subcommittee on the Constitution held a hearing chaired by Senator Ted Cruz, who led Republicans in repeating recent unsubstantiated allegations that Facebook and Twitter disproportionately punish users on the right.

Democrats for their part have been more interested in the off-platform implications of algorithmic bias.

“By requiring large companies to not turn a blind eye towards unintended impacts of their automated systems, the Algorithmic Accountability Act ensures 21st Century technologies are tools of empowerment, rather than marginalization, while also bolstering the security and privacy of all consumers,” Rep. Clarke said.

‘Hateful comments’ result in YouTube disabling chat during a livestreamed hearing on hate

At today’s House Judiciary hearing addressing “Hate Crimes and the Rise of White Nationalism,” hate appears to have prevailed.

As the hearing’s livestream aired on the House Judiciary’s YouTube channel, comments in the live chat accompanying the stream were so inflammatory that YouTube actually disabled the chat feature mid-hearing. Many of those comments were anti-Semitic in nature.

Unsurprisingly, the hearing struggled to balance its crowded witness list, which included Facebook public policy director Neil Potts and Google public policy lead Alexandria Walden. Potts emphasized that Facebook recently righted its course with regard to white nationalism, though this shift is still in its earliest days.

“Facebook rejects not just hate speech, but all hateful ideologies,” Potts said in the hearing. “Our rules have always been clear that white supremacists are not allowed on our platform under any circumstances.”

The hearing was probably ill-fated from the start. As Democrats attempt to grapple with the real world effects of white supremacist violence, voices on the far right — recently amplified by figures in Congress — denounce that conversation outright. When political parties can’t even agree on a hearing’s topic, it usually guarantees a performative rather than productive few hours and in spite of some of its serious witnesses, this hearing was no exception.

Hours after the hearing, anti-Semitic comments continue to pour into the House Judiciary YouTube page, many focused on Rep. Jerry Nadler, the committee’s chair. “White nationalism isn’t a crime its [sic] a human right,” one user declared. “(((They))) are taking over our government,” another wrote, alluding to widespread anti-Semitic conspiracy theories. Many more defended white nationalism as a form of pride rather than a hate-based belief system tied to real-world violence.

“… Hate speech and violent extremism have no place on YouTube,” YouTube’s Walden said during the hearing. “We believe we have developed a responsible approach to address the evolving and complex issues that manifest on our platform.”

Proposed bill would forbid big tech platforms from using dark pattern design

A new piece of bipartisan legislation aims to protect people from one of the sketchiest practices that tech companies employ to subtly influence user behavior. Known as “dark patterns,” this dodgy design strategy often pushes users toward giving up their privacy unwittingly and allowing a company deeper access to their personal data.

To fittingly celebrate the one-year anniversary of Mark Zuckerberg’s appearance before Congress, Senators Mark Warner (D-VA) and Deb Fischer (R-NE) have proposed the Deceptive Experiences To Online Users Reduction (DETOUR) Act. While the acronym is a bit of a stretch, the bill would forbid online platforms with more than 100 million users from “relying on user interfaces that intentionally impair user autonomy, decision-making, or choice.”

“Any privacy policy involving consent is weakened by the presence of dark patterns,” Senator Fischer said of the proposed bipartisan bill. “These manipulative user interfaces intentionally limit understanding and undermine consumer choice.”

While this particular piece of legislation might not go on to generate much buzz in Congress, it does point toward some regulatory themes that we’ll likely hear more about as lawmakers build support for regulating big tech.

The bill, embedded below, would create a standards body to coordinate with the FTC on user design best practices for large online platforms. That entity would also work with platforms to outline what sort of design choices infringe on user rights, with the FTC functioning as a “regulatory backstop.”

Whether the bill gets anywhere or not, the FTC itself is probably best suited to take on the issue of dark pattern design, issuing its own guidelines and fines for violating them. Last year, after a Norwegian consumer advocacy group published a paper detailing how tech companies abuse dark pattern design, a coalition of eight U.S. watchdog groups called on the FTC to do just that.

Beyond eradicating dark pattern design, the bill also proposes prohibiting user interface designs that cultivate “compulsive usage” in children under the age of 13 as well as disallowing online platforms from conducting “behavioral experiments” without informed user consent. Under the guidelines set out by the bill, big online tech companies would have to organize their own Institutional Review Boards. These groups, more commonly called IRBs, provide powerful administrative oversight in any scientific research that uses human subjects.

“For years, social media platforms have been relying on all sorts of tricks and tools to convince users to hand over their personal data without really understanding what they are consenting to,” Senator Warner said of the proposed legislation. “Our goal is simple: to instill a little transparency in what remains a very opaque market and ensure that consumers are able to make more informed choices about how and when to share their personal information.”

The full text of the legislation is embedded below.