Civic tech platform Mobilize launches a census hub for the 2020 count’s critical final stretch

With the already narrow window for completing the count abruptly cut short by the Trump administration, getting every person living in the U.S. to fill out the census form, a scramble even in a normal year, is a compound challenge in 2020.

The critical once-a-decade count determines everything from Congressional representation to Pell grants to funding for school lunch programs — and as of this week, as many as 60 million households remain unaccounted for. If left untallied, those individuals and their communities will be invisible when the time comes to allocate vital federal resources.

To rise to that challenge, the progressive volunteer and campaign coordination platform Mobilize is launching a central resource hub to empower census volunteers during the six-week final stretch. The civic tech startup noticed that a handful of nonprofits doing census work were already bringing campaigns onto the platform, and the new site will amplify those efforts and collect them in one place.

Speaking with TechCrunch, Mobilize co-founder and CEO Alfred Johnson describes the task, reasonably, as “Herculean.” 

“Organizations are trying to reach communities and help them understand what they’re going to be asked in the census, what they’ll not be asked by the census and make sure… that those communities are aware of what their rights are here, are aware of what the deadlines are, and can be counted,” Johnson said. “Because we know that being counted is such a fundamental piece of being included in our democracy.”

One of the biggest challenges this census year is the focus on reaching historically undercounted Black, Latinx and indigenous communities — a key goal if the 2020 census is to capture U.S. demographic shifts and allocate resources and representation accordingly. With the shortened deadline, the pandemic and tens of millions of “hard to count” households left uncounted, the stakes couldn’t be higher.

“We’re facing this monumental challenge, particularly with COVID going on,” Johnson said. “It’s always hard to perform the census every 10 years and make sure that you’re getting accurate counts.

“It is additionally hard to do that if you’ve got a once-in-a-century pandemic that is preventing people from answering their mail, from answering the door, from all the things that would [lead to] a more representative count.”

Mobilize launched in early 2017 amidst the post-Trump surge of activism on the left and quickly became ubiquitous among progressive causes and candidates. In the 2020 Democratic primary contest, Biden, Bernie and everybody in between relied on the platform to marshal campaign volunteers and steer supporters. This January, the civic tech startup raised a $3.75 million Series A round led by progressive tech incubator Higher Ground Labs. LinkedIn co-founder Reid Hoffman, a prominent Democratic donor, and Chris Sacca’s Lowercase Capital also contributed to the round.

The digital platform aims both to be a unifying resource for Democratic and progressive campaigns and to do what the events pages of social networks like Facebook can’t. For Mobilize, that means translating what on a different platform might remain aspirational online activity into action. It accomplishes that by sending volunteers reminders, prompting them to invite friends and staying connected even after they take action to keep them engaged in similar campaigns.

Groups already coordinating their census campaigns on Mobilize include the NYC Census Bureau, CensusCounts and Fair Count, an organization founded by Fair Fight founder and former Democratic nominee for Georgia governor Stacey Abrams. Fair Count’s mission is to reach “hard to count” communities in Georgia, including the state’s historically undercounted Black male population, to win the state the resources and representation that reflect its reality. The hub lets anyone type in their zip code to see local census mobilization efforts coordinated across those organizations and others. It stands to reason that if you’re willing to phone bank to reach people who’ve yet to be counted for one group, you’d probably be willing to do it for a different one with overlapping goals.

For Mobilize, the crucial final census push is something of a crucible for the platform’s power in a year that’s gone all-digital. Johnson has seen virtual events skyrocket on Mobilize as COVID-19 took root across the U.S. Prior to the pandemic, about a quarter of events were virtual — now they all are.  

Johnson acknowledges that the “headwinds” against an accurate census count in 2020 are very real, both politically and logistically, and particularly now that the Trump administration has trimmed the deadline. But he hopes that Mobilize is able to help organizations leverage the power of the platform’s network effect and its scalability during a national crisis that has the nation cooped up indoors rather than knocking on doors.

In spite of the crisis, or perhaps because of it, Mobilize saw a major uptick in volunteer signups between April and July and expects August to be even bigger once the numbers are in.

“2020 is a very hard year for a lot of people for very real reasons,” Johnson said. “I think that is actually motivating even more civic engagement by virtue of the fact that people are wanting to see circumstances change and help their friends, neighbors [and] communities in this moment of existential crisis, on whatever axis you’re evaluating it.”

On Facebook, Trump’s next false voting claim will come with an info label

As part of its effort to steel its platform against threats to the 2020 election, Facebook will try surfacing accurate voting info in a new place — on politicians’ own posts.

Starting today, Facebook posts by federal elected officials and candidates — including presidential candidates — will be accompanied by an info label prompting anyone who sees the post to click through for official information on how to vote. The label will link out to an official voting information site. For posts that address vote-by-mail specifically, the link will point to a section of the same site with state-by-state instructions about how to register to vote through the mail.

Image via Facebook

Facebook plans to expand the voting info label to apply to all posts about voting in the U.S., not just those from federal-level political figures. That plan remains on track to launch later this summer with Facebook’s Voter Information Center, its previously announced info hub for official, verified information related to the 2020 election. The voter info center, like the coronavirus info center Facebook launched in March, will be placed prominently in order to funnel users toward useful resources.

The company did not mention any specific reason for the decision to prioritize elected officials before other users, but in May Facebook faced criticism for its decision to allow false claims by President Trump about vote-by-mail systems and the 2020 election to remain on the platform untouched. At the time, Twitter added its own voting info label to the same posts, which also appeared as tweets from the president’s account.

In a June post, Mark Zuckerberg discussed voter suppression concerns, saying that Facebook would be “tightening” its policies around content that misleads voters “to reflect the realities of the 2020 elections.” Facebook will also focus on removing false statements about polling places in the 72-hour lead-up to the election. Zuckerberg said that posts with misleading information that could intimidate voters would be banned, using the example of a post falsely claiming ICE officials are checking for documentation at a given polling location.

Zuckerberg made no specific mention of President Trump’s own false claims that expanded mail-in voting in light of the coronavirus crisis would be “substantially fraudulent” and result in a “rigged election.” Zuckerberg did say that Facebook would begin labeling some “newsworthy” posts from political figures, leaving the content online but adding a label noting that it violates the platform’s rules.

While false claims from political figures are a cause for concern, they don’t account for the bulk of voting misinformation on the platform. A new report from ProPublica found that many of Facebook’s best-performing posts about voting contained misinformation. “Of the top 50 posts, ranked by total interactions, that mentioned voting by mail since April 1, 22 contained false or substantially misleading claims about voting, particularly about mail-in ballots,” ProPublica writes in the report, noting that many of the posts appear to break Facebook’s own rules about voting misinformation but remain up with no labels or other contextualization.

While its past enforcement decisions remain controversial and often puzzling, Facebook does appear to be rethinking those choices and gearing up its efforts in light of the coming U.S. election. For Facebook, which goes to sometimes self-defeating lengths to project an aura of political neutrality, that’s less about expanded fact-checking and more about making correct, verified voting information readily available to users.

In early July, Facebook announced a voter drive that aims to register 4 million new U.S. voters. As part of that effort, Facebook pushed a pop-up info box to app users in the U.S. reminding them to register to vote or check their registration status, with links to official state voter registration sites. Those notifications will soon appear on Instagram and Messenger as part of the same voter mobilization push.

Facebook is also apparently mulling the idea of banning all political advertising in the lead-up to November, a decision that would likely alleviate at least one of the company’s headaches at the cost of leaving both political parties, which rely on Facebook ads to reach voters, frustrated.

Zoom misses its own deadline to publish its first transparency report

How many government demands for user data has Zoom received? We won’t know until “later this year,” an updated Zoom blog post now says.

The video conferencing giant previously said it would release the number of government demands it has received by June 30. But the company said it’s missed that target and has given no firm new date for releasing the figures.

It comes amid heightened scrutiny of the service after a number of security issues and privacy concerns came to light following a massive spike in its user base, thanks to millions working from home because of the coronavirus pandemic.

In a blog post today reflecting on the company’s turnaround efforts, chief executive Eric Yuan said the company has made “significant progress defining the framework and approach for a transparency report that details information related to requests Zoom receives for data, records, or content.”

“We look forward to providing the fiscal [second quarter] data in our first report later this year,” he said.

Transparency reports offer rare insights into the number of demands or requests a company gets from the government for user data. These reports are not mandatory, but they are important for understanding the scale and scope of government surveillance.

Zoom said last month it would launch its first transparency report after the company admitted it briefly suspended the accounts of two U.S.-based users and one Hong Kong activist at the request of the Chinese government. The users, who were not based in China, held a Zoom call commemorating the anniversary of the Tiananmen Square massacre, an event that’s cloaked in secrecy and censorship in mainland China.

The company said at the time it “must comply with applicable laws in the jurisdictions where we operate,” but later said that it would change its policies to disallow requests from the Chinese government to impact users outside of mainland China.

A spokesperson for Zoom did not immediately comment.

Twitter plans to expand its misinformation labels—but will they apply to Trump?

President Trump is again testing Twitter’s stomach for misinformation flowing from its most prominent users.

In a flurry of recent tweets, Trump floated conspiracy theories about the death of Lori Klausutis, an intern for former congressman Joe Scarborough who was found dead in his Florida office in 2001—a death a medical examiner attributed to a fall caused by an undiagnosed heart condition. Scarborough, a political commentator and host of MSNBC’s Morning Joe, is a prominent Trump critic and a frequent target for the president’s political ire.

The medical evaluation and the lack of any evidence suggesting something nefarious in the former intern’s death have not been enough to discourage Trump from revisiting the topic frequently in recent days.

“When will they open a Cold Case on the Psycho Joe Scarborough matter in Florida. Did he get away with murder?” Trump tweeted in mid-May. A week later, Trump encouraged his followers to “Keep digging, use forensic geniuses!” on the long-closed case.

In a statement provided to TechCrunch, Twitter said the company is “deeply sorry about the pain these statements, and the attention they are drawing, are causing the family.”

“We’ve been working to expand existing product features and policies so we can more effectively address things like this going forward, and we hope to have those changes in place shortly,” a Twitter spokesperson said.

When asked for clarity about what product and policy changes the company was referring to, Twitter pointed us to its blog post on the labels the company introduced to flag “synthetic and manipulated media” and more recently COVID-19 misinformation. The company indicated that it plans to expand the use of misinformation labels outside of those existing categories.

Twitter will not apply a label or warning to Trump’s recent wave of Scarborough conspiracy tweets, but the suggestion here is that future labels could be used to mitigate harm in situations like this one. Whether that means labeling unfounded accusations of criminality or labeling that kind of claim when made by the president of the United States remains to be seen.

In March, Twitter gave a video shared by White House social media director Dan Scavino and retweeted by Trump its “manipulated content” label—a rare action against the president’s account. The misleadingly edited video showed presumptive Democratic nominee Joe Biden calling to re-elect Trump.

According to the blog post Twitter pointed us to, the company previously said it would add new labels to “provide context around different types of unverified claims and rumors as needed.”

Even within existing categories—COVID-19 misinformation and manipulated media—Twitter has so far been reluctant to apply labels to high-profile accounts like that of the president, a frequent purveyor of online misinformation.

Twitter also recently introduced a system of warnings that hide a tweet, requiring the user to click through to view it. The tweets that are hidden behind warnings “[depend] on the propensity for harm and type of misleading information” they contain.

Trump’s renewed interest in promoting the baseless conspiracy theory prompted Lori Klausutis’ widower, T.J. Klausutis, to write a letter to Twitter CEO Jack Dorsey requesting that the president’s tweets be removed.

In the letter, Klausutis told Dorsey he views protecting his late wife’s memory as part of his marital obligation, even in her death. “My request is simple: Please delete these tweets,” Klausutis wrote.

“An ordinary user like me would be banished from the platform for such a tweet but I am only asking that these tweets be removed.”

Biden campaign releases a flurry of digital DIY projects and virtual banners. Yes, there are Zoom backgrounds

2020 is a nightmare year by most metrics, and it’s also a worst-case scenario emerging out of a best-case scenario for Joe Biden. After trailing in early primary states, Biden came crashing back on a stunning wave of Super Tuesday support. Now the presumptive Democratic nominee in the midst of a global health crisis that’s immobilized the U.S. workforce and somehow even further polarized American politics, the former Vice President will have to navigate completely uncharted waters to find a path to the presidency.

Biden—not the most internetty candidate of 2020’s wide Democratic field by a long shot—is now tasked with getting creative, connecting with voters not at rallies or traditional events, the kind of thing the famously affable candidate excels at, but through screens. Making those connections is crucial for attracting less engaged voters, wrangling straying progressives and even maintaining his existing body of supporters, who need to be kept energized to power the campaign into the general election.

To that end, the Biden campaign is rolling out a new collection of digital assets to energize supporters stuck at home and to communicate the Biden brand’s visual language to everybody else. The selection of “Team Joe swag” includes some DIY options for supporters like big “No Malarkey!” home window placards and “We want Joe” button templates.

The campaign is also releasing an array of print-friendly coloring book pages to amuse idle politically inclined progeny. Some of the pages thank frontline workers and immortalize Biden’s two German shepherds in crayon, while others depict Biden’s more meme-worthy symbols: ice cream cones and his trademark aviator sunglasses. (A viral moment from 2014 combined the two.)

For supporters who aren’t leaning into arts and crafts just yet, there are “Joementum” phone wallpapers, banners optimized for social media and, yes, a full set of Zoom backgrounds depicting Biden’s recent campaign stage: his home library.

Some critics say he needs to ditch his basement studio, but Biden said he plans to follow public health guidelines, hitting the virtual campaign trail from his now-expanded home setup in Delaware via virtual town halls and video chats, like his recent Instagram Live sit-down with U.S. soccer superstar and former Warrenite Megan Rapinoe.

The signs and wallpapers are just a tiny part of the campaign’s big picture, but depending on what comes after, a candidate’s visual signature can sear a political moment into the collective consciousness. Think Obama’s 2008 “Hope” poster by the artist Shepard Fairey, later acquired by the National Portrait Gallery (Fairey himself later denounced the Obama administration’s drone program). Or Trump’s telltale red MAGA hats, which no one will be forgetting any time soon, regardless of how the general election shakes out.

Warm and fuzzy

For a campaign stuck indoors, visual branding is more important than ever. Biden’s visual brand mostly seems to focus on positive feelings that bring people together—kindness, faith, togetherness—rather than policy specifics or even “dump Trump” style calls to action.

“We want to find ways to make people feel involved while they’re cooped up at home,” the campaign’s Deputy National Press Secretary Matt Hill said. “These are tools that are going to help everyone who is involved with the campaign communicate that visually in a time when everyone is particularly logged on.”

Much has been written about how the virtual race poses unique challenges for the Biden campaign. The presumptive Democratic nominee is a candidate best known for his affable, empathetic in-person demeanor. But empathy doesn’t always perform well online, particularly when cast against the sound and fury of the factually-unencumbered, cash-flush Trump campaign.

“Branding communicates values, and during this crisis we want to let Joe Biden’s values shine through,” Hill said. “Yes, it’s of course ice cream and aviators, but it’s also decency, empathy, hope, and everything that is just the polar opposite of Donald Trump.”

The campaign frames this in broad strokes, good-against-evil language, describing Biden’s online movement as one of “empathy and human connection” out to topple the dark forces at work on the internet. The campaign’s digital director Rob Flaherty has said that 2020 is not just a fight for America itself, but also a “battle for the soul of the internet.”

“Right now, people are craving empathy and good… it gives us an advantage,” Hill said. “You have one side that often fights to win the Twitterverse with vitriol, and then you have us.”


Biden’s campaign has come under some scrutiny in recent weeks for the perception that it’s been slow to adapt to the pandemic era. Obama campaign veterans David Plouffe and David Axelrod penned a New York Times op-ed in early May calling for Biden to step up his digital efforts, likening his current broadcasts to “an astronaut beaming back to earth.”

After weeks of concern from insiders worried the Biden campaign might not be building the online momentum it needs, the campaign just beefed up its previously lean team with a flurry of new hires. The new talent will particularly build out the campaign’s digital operations, which it plans to double in size.

The hires include former Elizabeth Warren staffer Caitlin Mitchell, who will advise the Biden camp on digital strategy and help it scale up; Buzzfeed Video and Kamala Harris campaign alum Andrew Gauthier, who joins the Biden campaign as its video director; and Robyn Kanner, previously Beto for America’s creative director, who will lead design and branding.

It will be interesting to see what else emerges out of the “Biden brand,” which doesn’t translate as easily to organic virality as Bernie’s all-purpose “I’m once again asking” meme or the somehow-not-cloying antics of Elizabeth Warren’s lovable golden retriever Bailey. At least for now, the campaign doesn’t seem to view that as a problem.

But cracking virtual campaigning is not the only headwind for the Biden campaign at the moment. Sexual assault allegations by former Biden Senate aide Tara Reade made their way into mainstream reporting in April. And if formulating a response to such serious allegations would be delicate under normal circumstances, the Biden campaign has had to figure out how to do it from a silo.

With its early technical difficulties ironed out, the Biden campaign may have a bit more breathing room to get creative. The campaign is focused on what it views as its “core platforms” for now—Facebook, YouTube, Instagram, Twitter and Snapchat—but it plans to both “invest more deeply” in those and also look at other platforms in the process of scaling up.

“We’ve already seen volunteers expand on Discord, Reddit, Pinterest and elsewhere,” Biden campaign Director of Digital Content Pam Stamoulis told TechCrunch.

Stamoulis also notes that the campaign is in “close communication” with the major social platforms where it focuses its efforts.

“… We have scheduled and consistent check in times to go over best practices, recommendations, new tools and brainstorm ideas and concepts to help optimize our use of their platforms,” Stamoulis said. “We anticipate working closely with platforms as we continue to move into the general.”

Biden’s stay-the-course digital strategy seems to reflect the thinking of his unlikely Super Tuesday coup, believing that you need the biggest coalition possible and you don’t necessarily build it through the buzziest politics or the flashiest moments. The campaign doesn’t want Biden to go viral as much as it wants him to connect with the most people in the broadest possible sense.

And to his credit, between the South Carolina comeback and his team-of-rivals Super Tuesday trick, Biden pulled it all off somehow. If there’s anything we can count on in 2020, whether it’s U.S. politics or a global health reckoning, it’s that we don’t know what the hell is going to happen. That lesson seems especially resonant for the extremely online among us, who seem to discover again and again that we are but a tiny, self-selecting sliver of the American electorate.

There’s no word on whether we’ll see Biden trading island codes for Animal Crossing à la AOC, or a virtual likeness of the candidate looming over Fortnite’s map, psychedelic Travis Scott-style, but in a truly unusual election year, nothing is quite off the table.

Reddit announces updates, including a new subreddit, to increase political ad transparency

Reddit announced an update to its policy for political advertising that will require campaigns to leave comments open on ads for the first 24 hours. The platform also launched a new subreddit, r/RedditPoliticalAds, that will include information about advertisers, targeting, impressions and spending by each campaign.

In a post, Reddit said “we will strongly encourage political advertisers to use this opportunity to engage directly with users in the comments.” The new subreddit will also list all political ad campaigns on Reddit going back to January 1, 2019.

The company said that the latest update and new subreddit are meant to give users a “chance to engage directly and transparently with political advertisers around important political issues, and provide a line of sight into the campaigns and political organizations seeking your attention.”

Reddit’s ad policy already banned deceptive ads and required political ads to be manually reviewed for messaging and creative content. The platform only allows ads from within the United States, at the federal level, which means ads for state and local campaigns are not allowed.

In response to a user who asked if there are measures in place to prevent advertisers from increasing the size of their campaign to reach more users after the 24-hour open comment period is over, Reddit said “that activity will trigger a re-review of the ad and it would result in rejection.”

Political advertising policy updates by social platforms ahead of the 2020 U.S. presidential election have ranged from Facebook’s refusal to ban or fact-check political ads despite harsh criticism of the platform’s inaction during the 2016 election, to Google’s limits on demographic targeting and Twitter’s outright ban on political ads.

In an interview with Politico, Ben Lee, Reddit vice president and general counsel, suggested that Reddit is unlikely to adopt a policy like Twitter’s, saying that “just getting rid of political ads doesn’t strike me as the right approach in this context.”

Instead, he said Reddit’s update is “basically about two things that are pretty important to us: One is encouraging conversation around political ads and the second is transparency.”

EU parliament moves to email voting during COVID-19

The European Parliament will temporarily allow electronic voting by email as MEPs are forced to work remotely during the coronavirus crisis.

A spokeswoman for the parliament confirmed today that an “alternative electronic voting procedure” has been agreed for the plenary session that will take place on March 26.

“This voting procedure is temporary and valid until 31 July,” she added.

Earlier this month the parliament moved the majority of its staff to teleworking. MEPs have since switched to fully remote work as confirmed cases of COVID-19 have continued to climb across Europe. Though how to handle voting remotely has generated some debate in and of itself.

“Based on public health grounds, the President decided to have a temporary derogation to enable the vote to take place by an alternative electronic voting procedure, with adequate safeguards to ensure that Members’ votes are individual, personal and free, in line with the provisions of the Electoral act and the Members’ Statute,” the EU parliament spokeswoman said today, when we asked for the latest on its process for voting during the COVID-19 pandemic.

“The current precautionary measures adopted by the European Parliament to contain the spread of COVID-19 don’t affect legislative priorities. Core activities are reduced, but maintained precisely to ensure legislative, budgetary, scrutiny functions,” she added.

The spokeswoman confirmed votes will take place via email — explaining the process as follows: “Members would receive electronically, via email to their official email address, a ballot form, which would be returned, completed, from their email address to the relevant Parliament’s functional mailbox.”

“The results of all votes conducted under this temporary derogation would be recorded in the minutes of the sitting concerned,” she further noted.

Last week, ahead of the parliament confirming the alternative voting process, German Pirate Party MEP Patrick Breyer raised concerns about the security of e-voting — arguing that what was then just a proposal for MEPs to fill in and sign a voting list, scan it and send it via email to the administration risked votes being vulnerable to manipulation and hacking.

“Such a manipulation-prone procedure risks undermining public trust in the integrity of Parliament votes that can have serious consequences,” he wrote. “The procedure comes with a risk of manipulation by hackers. Usually MEPs can send emails using several devices, and their staff can access their mailbox, too. Also it is easy to come by a MEP’s signature and scan it… This procedure also comes with the risk that personally elected and highly paid MEPs could knowingly allow others to vote on their behalf.”

“eVoting via the public Internet is inherently unsafe and prone to hacking, thus risks to erode public trust in European democracy,” he added. “I am sure powerful groups such as the Russian intelligence agency have a great interest in manipulating tight votes. eVoting makes manipulation at a large scale possible.”

Breyer suggested a number of alternatives — such as parallel postal voting, to have a paper back-up of MEPs’ e-votes; presence voting in EP offices in Member States (though clearly that would require parliamentarians to risk exposing themselves and others to the virus by traveling to offices in person); and a system such as “Video Ident”, which he noted is already used in Germany, where MEPs would identify themselves in front of a webcam in a live video stream and then show their voting sheets to the camera.

He also suggested MEPs might not notice manipulations even if voting results were published — as looks to be the case with the parliament’s agreed procedure.

It’s not clear whether the parliament is applying a further back-up step — such as requiring a paper ballot to be mailed in parallel to an email vote. The parliament spokeswoman declined to comment in any detail when we asked. “All measures have been put in place to ensure the vote runs smoothly,” she said, adding: “We never comment on security measures.”

Reached for his response, Breyer told us: “My concerns definitely stand.”

However, security expert J. Alex Halderman, a professor of computer science and engineering at the University of Michigan — who testified before the U.S. Senate hearing into Russian interference in the 2016 U.S. election — said e-voting where the results are public is relatively low risk, provided MEPs check that their votes have been recorded properly.

“Voting isn’t such a hard problem when it’s not a secret ballot, and I take it that how each MEP votes is normally public. As long as that’s the case, I don’t think this is a major security issue,” he told TechCrunch. “MEPs should be encouraged to check that their votes are correctly recorded in the minutes and to raise alarms if there’s any discrepancy, but that’s probably enough of a safeguard during these challenging times.”  

“All of this is in stark contrast to elections for public office, which are conducted with a secret ballot and in which there’s normally no possibility for voters to verify that their votes are correctly recorded,” he added.

NationBuilder probe closed

In further news related to the EU parliament, the European Data Protection Supervisor (EDPS) announced today that it has closed its investigation into the parliament’s use of the US-based political campaign software company NationBuilder last year.

Back in November the EU’s lead data regulator revealed it had issued its first ever sanction of an EU institution by taking enforcement action over the parliament’s contract with NationBuilder for a public engagement campaign to promote voting in the spring election.

During the campaign the website collected personal data from more than 329,000 people, which was processed on behalf of the Parliament by NationBuilder. The EDPS found the parliament had contravened regulations governing how EU institutions can use personal data related to the selection and approval of sub-processors used by NationBuilder.

The contract has been described as coming to “a natural end” in July 2019, and the EDPS said today that all data collected has been transferred to the European Parliament’s servers.

No further sanctions have been implemented, though the regulator said it will continue to monitor the parliament’s activities closely.

“Data protection plays a fundamental role in ensuring electoral integrity and must therefore be treated as a priority in the planning of any election campaign,” said the EDPS, Wojciech Wiewiórowski, in a statement today. “With this in mind, the EDPS will continue to monitor the Parliament’s activities closely, in particular those relating to the 2024 EU parliamentary elections. Nevertheless, I am confident that the improved cooperation and understanding that now exists between the EDPS and the Parliament will help the Parliament to learn from its mistakes and make more informed decisions on data protection in the future, ensuring that the interests of all those living in the EU are adequately protected when their personal data is processed.”

At the time of writing the parliament had not responded to a request for comment.

Silicon Valley saw itself in Pete Buttigieg. Now he’s out of the race.

Tech’s darling is out. On Sunday, the 38-year-old Democratic presidential contender dropped out of the race, clearing a path for moderates to coalesce around another candidate on the eve of Super Tuesday’s broad contest for delegates.

Buttigieg, previously a political unknown, ran a surprisingly successful long-shot presidential campaign, making history as the first openly gay candidate to make a real run at the presidency and vaulting his political profile well above his mayorship of South Bend, Ind. Buttigieg’s appeal as a young, articulate politician was bolstered early on by his friendliness to Silicon Valley in a race in which tech’s biggest names were cast as villains by the race’s leftmost wing.

As the race began, elite segments of Silicon Valley, including tech leadership and venture capital, sought an alternative to Bernie Sanders. Sanders and his ideological next-door neighbor Elizabeth Warren have made criticisms of consolidated power and wealth a cornerstone of their platforms, with Warren in particular taking direct aim at tech’s biggest success stories or its ruling elite, depending on your perspective. That message was always more likely to resonate with tech’s rank and file workers, while the sector’s better-compensated upper echelons turned to Buttigieg and other Democratic candidates who reflected their traditionally liberal values while promising less upheaval for the industry.

Buttigieg was able to appeal to that dual desire, and his youth and proximity to tech power brokers like Mark Zuckerberg, a member of his cohort at Harvard, helped boost his cause among tech-savvy supporters. According to reporting from The Guardian, more than 75 VCs threw their weight behind the Buttigieg campaign as early as July 2019, lauding Buttigieg’s “intellectual vigor” and data-driven thinking — qualities Silicon Valley prides itself on. One of Facebook’s first 300 users, Buttigieg offered a tech fluency in striking contrast to the stereotype of tech-inept members of Congress, a frequent complaint within tech during recent regulatory hearings.

“I don’t know if people saw him as a technology culture person,” one member of the VC community told TechCrunch. “It was more like as a smart person who could potentially execute. As a winner.” They characterized him as “persuasive and competent” but “not competent in some technocratic, bloodless way.”

“We want to build a campaign that’s a little disruptive, kind of entrepreneurial. Right now, it feels like a startup,” Buttigieg campaign manager Mike Schmuhl told AP News in April.

For some in Silicon Valley, which tends to think in terms of Silicon Valley, the analogy stuck all the way to the end.

With Buttigieg out of the race, the big question is where his support goes next. Given his moderate policies and new momentum, former Vice President Joe Biden appears likely to receive a boost from Buttigieg’s exit. Still, venture capitalists who spoke with TechCrunch suggested that support is likely to lack the enthusiasm tech had for Buttigieg. “People could have gone with Biden early on,” one Buttigieg supporter noted.

That redirected support may prove tepid compared to Silicon Valley’s initial bet on Buttigieg, but it could be a shot in the arm for Biden if the former vice president comes out of Tuesday’s big contest looking like a winner.

Three-quarters of Americans lack confidence in tech companies’ ability to fight election interference

A significant majority of Americans have lost faith in tech companies’ ability to prevent the misuse of their platforms to influence the 2020 presidential election, according to a new study from Pew Research Center, released today. The study found that nearly three-quarters of Americans (74%) don’t believe platforms like Facebook, Twitter and Google will be able to prevent election interference. What’s more, this sentiment is felt by both political parties evenly.

Pew says that nearly identical shares of Republicans and Republican-leaning independents (76%) and Democrats and Democrat-leaning independents (74%) have little or no confidence in technology companies’ ability to prevent their platforms’ misuse with regard to election interference.

And yet, 78% of Americans believe it’s tech companies’ job to do so. Slightly more Democrats (81%) took this position, compared with Republicans (75%).

While Americans had similar negative feelings about platforms’ misuse ahead of the 2018 midterm elections, their lack of confidence has gotten even worse over the past year. As of January 2020, 74% of Americans report having little confidence in the tech companies, compared with 66% back in September 2018. For Democrats, the decline in trust is even greater, with 74% today feeling “not too” confident or “not at all” confident, compared with 62% in September 2018. Republican sentiment has declined somewhat during this same time, as well, with 72% expressing a lack of confidence in 2018, compared with 76% today.

Even among those who believe the tech companies are capable of handling election interference, very few (5%) Americans feel “very” confident in their capabilities. Most of the optimists see the challenge as difficult and complex, with 20% saying they feel only “somewhat” confident.

Across age groups, both the lack of confidence in tech companies and the desire for accountability increase with age. For example, 31% of those 18 to 29 feel at least somewhat confident in tech companies’ abilities, versus just 20% of those 65 and older. Similarly, 74% of the youngest adults believe the companies should be responsible for platform misuse, compared with 88% of the 65-and-up crowd.

Given the increased negativity felt across the board on both sides of the aisle, it would have been interesting to see Pew update its 2018 survey that looked at other areas of concern Republicans and Democrats had with tech platforms. The older study found that Republicans were more likely to feel social media platforms favored liberal views while Democrats were more heavily in favor of regulation and restricting false information.

Issues around election interference aren’t just limited to the U.S., of course. But news of Russia’s meddling in U.S. politics in particular — which involved every major social media platform — has helped to shape Americans’ poor opinion of tech companies and their ability to prevent misuse. The problem continues today, as Russia is being called out again for trying to intervene in the 2020 elections, according to several reports. At present, Russia’s focus is on aiding Sen. Bernie Sanders’ campaign in order to interfere with the Democratic primary, the reports said.

Meanwhile, many of the same vulnerabilities that Russia exploited during the 2016 elections remain, including the platforms’ ability to quickly spread fake news, for example. Russia is also working around blocks the tech companies have erected in an attempt to keep Russian meddling at bay. One report from The New York Times said Russian hackers and trolls were now better at covering their tracks and were even paying Americans to set up Facebook pages to get around Facebook’s ban on foreigners buying political ads.

Pew’s report doesn’t get into any details as to why Americans have lost so much trust in tech companies since the last election, but it’s likely more than just the fallout from election interference alone. Five years ago, Pew reported that tech companies were largely viewed as having a positive impact on the U.S. Americans no longer feel that way: only around half of U.S. adults now believe the companies are having a positive impact.

More Americans are becoming aware of how easily these massive platforms can be exploited and how serious the ramifications of those exploits have become across a number of areas, including personal privacy. It’s not surprising, then, that user sentiment around how well tech companies are capable of preventing election interference has declined, too, along with all the rest.

Why the world must pay attention to the fight against disinformation and fake news in Taiwan

On Saturday, Taiwan will hold its presidential election. This year, the outcome is even more important than usual because it will signal the direction in which the country’s people want their relationship with China, which claims Taiwan as its territory, to move. Also crucial are efforts against fake news. Taiwan has one of the worst disinformation problems in the world, and how it is handled is an important case study for other countries.

Yesterday, Twitter said in a blog post that it has held trainings for the two main political parties in Taiwan, the Democratic Progressive Party (DPP) and the Kuomintang (KMT), and Taiwan’s Central Election Commission, in addition to setting up a portal for feedback during the election. Late last month, the state-owned Central News Agency reported that Facebook will set up a “war room” to counteract disinformation before the election, echoing its efforts in other countries (the company previously established a regional elections center at its Asia-Pacific headquarters in Singapore).

But the fight against disinformation in Taiwan started years before the current presidential election. It now encompasses the government, tech companies and non-profit groups like the Taiwan FactCheck Center, and will continue after the election. As in other countries, the fake news problem in Taiwan takes advantage of complex, deep-rooted ideological, cultural and political rifts among Taiwan’s population of 24 million, and it demonstrates that fake news isn’t just a tech or media literacy problem, but also one that needs to be examined from a social psychological perspective.

How Fake News Spreads in Taiwan

This year’s election is taking place as the Chinese government, under President Xi Jinping, makes increasingly aggressive efforts to assert control over Taiwan, and as the ongoing demonstrations in Hong Kong underscore the fissures in China’s “one country, two systems” model. The two leading candidates are incumbent Tsai Ing-wen, a member of the DPP, and opponent Han Kuo-yu of the KMT, who favors a more conciliatory relationship with China.

The Chinese government has been linked to disinformation campaigns in Taiwan. Last year, the Varieties of Democracy Institute (V-Dem) at the University of Gothenburg in Sweden researched foreign influence in domestic politics and placed Taiwan in its “worst” category, along with Latvia and Bahrain, as the countries where foreign governments most frequently use social media to spread false information for “all key political issues.” By comparison, the United States ranked 13th worst on the list, despite being targeted by Russian disinformation operations.

But disinformation also comes from many other sources, including Taiwanese politicians from different parties and their supporters, “cyber armies” whose aim is to influence voters, and content farms that sensationalize and repost content from media outlets. It is spread through platforms including Facebook, Twitter, messaging app Line, online bulletin board PTT and YouTube.

The problem has escalated over the past five years, according to an April 2019 report by CommonWealth magazine reporters Rebecca Lin and Felice Wu. During the previous presidential election, cyber armies consisting of supporters of some politicians, or workers for political parties and public relations companies, began engaging in online information wars, creating an opportunity for foreign influence. Wu Hsun-hsiao, former legal counsel to PTT, told the magazine that in 2015, more dummy accounts entering from China began to appear on the bulletin board and Facebook. “The rise of emerging online media has generated a considerable amount of noise, and China has discovered the influence it has,” he told CommonWealth.

Over the last two years, YouTube has also become an increasingly potent way to spread disinformation, often through short videos that take clips and photos from news outlets and re-edit them to present misleading narratives about major news events. “They take advantage of the myth that ‘to see is to believe’ by massively disseminating false information in the run-up to the election,” Wang Tai-li, a professor in National Taiwan University’s Graduate Institute of Journalism, told the Liberty Times.

Earlier this month, Taiwan’s legislature passed the DPP-backed Anti-Infiltration Bill, meant to stop Chinese influence in Taiwanese politics. The legislation was opposed by KMT politicians, including former president Ma Ying-jeou, who made controversial statements comparing the bill to restoring Taiwan’s four decades of martial law, which ended in 1987.

But part of the challenge of fighting fake news in Taiwan is lack of awareness. After local elections in November 2018, Wang conducted a survey that found 52% of respondents did not believe there was foreign interference, or said they did not know enough to judge.

Fighting Back

Much of the work, however, is being done by volunteers and private citizens. Last month, Los Angeles Times reporter Alice Su wrote about organizations like the Taiwan FactCheck Center, a non-profit that does not receive funding from the government, political parties or politicians. In July 2018, the group began collaborating with Facebook: posts flagged as containing false information bring up a screen with a link that takes users to a Taiwan FactCheck Center report before they are allowed to view the content.

Su also covered other groups like DoubleThink Labs, which monitors Chinese disinformation networks in Taiwan, and CoFacts, a crowdsourced database that operates a factchecking Line chatbot. But these groups and social media platforms are up against thousands of posts containing disinformation each day, including in private chat groups that can’t be monitored.
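The core idea behind a crowdsourced fact-checking chatbot like CoFacts — comparing a forwarded message against a database of claims volunteers have already checked — can be sketched in miniature. The function name, similarity threshold and the tiny in-memory “database” below are hypothetical illustrations, not CoFacts’ actual API or data:

```python
from difflib import SequenceMatcher

# Hypothetical crowdsourced claim database: previously checked claim -> verdict.
CHECKED_CLAIMS = {
    "the dpp spent nt$30 million to organize the pride parade":
        "False: the parade is funded by its organizers, not political parties.",
    "tsai's phd from the london school of economics is fake":
        "False: LSE has publicly confirmed the degree.",
}

def lookup_claim(message: str, threshold: float = 0.6):
    """Return the verdict for the most similar previously checked claim,
    or None if nothing in the database is similar enough."""
    normalized = message.lower().strip()
    best_score, best_verdict = 0.0, None
    for claim, verdict in CHECKED_CLAIMS.items():
        # Fuzzy match so minor rewording of a rumor still hits the record.
        score = SequenceMatcher(None, normalized, claim).ratio()
        if score > best_score:
            best_score, best_verdict = score, verdict
    return best_verdict if best_score >= threshold else None
```

A real deployment would sit behind a messaging-platform webhook and use proper text matching over a large, volunteer-maintained corpus; the point here is only the shape of the workflow — a forwarded message is checked against claims that have already been investigated, and unmatched messages are queued for human review.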

Last month, Facebook said it had removed 118 fan pages, 99 groups and 51 accounts, including an unofficial fan page for Han called “Kaohsiung Fan Group” that had more than 150,000 members, for rules violations.

In a statement to TechCrunch, a Facebook spokesperson said “Over the last three years, we have dedicated unprecedented resources to fighting malicious activity on our platform and, in particular, to protecting the integrity of elections on Facebook–including this week’s election in Taiwan. Our approach includes removing fake accounts, reducing the spread of misinformation, bringing transparency to political advertising, disrupting information operations and working with Taiwan’s Central Election Commission to promote civic engagement. We have teams of experts dedicated to protecting Taiwan’s election, and we look forward to ensuring that Facebook can play a positive role in the democratic process.”

The Pervasiveness of Fake News

Efforts to combat fake news often result in more disinformation spread by people who believe their views have been unfairly targeted. According to the Stanford Internet Observatory (SIO), by the time the Kaohsiung Fan Group was removed, it had 109 admins and moderators, a number the SIO said was “unusually high compared to the average admin and moderator counts for Taiwanese political groups of either affiliation (pro-Han Kuo-yu groups averaged 27, and pro-Tsai Ing-wen Groups, 10).” Furthermore, several moderators had “suspicious” profiles, including zero or one friend, profile photos that were not of people and “minimal human engagement” on their posts.

But despite that evidence, the SIO also noted that the mass removal “prompted conspiratorial theories about why they were taken down in the first place,” with posts in other pro-KMT groups speculating that it had been done in coordination with the DPP.

An illustration of how sticky fake news can be once it takes root is the disinformation about the validity of Tsai’s PhD from the London School of Economics, which continues to circulate in Taiwan even though the university issued a statement confirming the degree.

As in other countries, disinformation in Taiwan also highlights and widens existing political, social and cultural rifts. In the United States, for example, fake news campaigns by Russian agents capitalized on already highly polarizing issues, including race, immigration and gun control.

In Taiwan, the specific issues may be different, but the objective is the same. As Su wrote in the Los Angeles Times, many posts “try to stir emotions on hot-button issues–for example, false claims that Tsai’s government has misused pension funds to lure Korean and Japanese tourists to make up for a drop in visitors from the mainland, and that organizers of Taiwan’s annual gay-rights parade received stipends to invite overseas partners to march with them.”

Taiwan became the first country in Asia to legalize same-sex marriage last year, despite aggressive efforts by conservative groups to stop it. Before it was passed, CoFacts documented viral posts that linked same-sex marriage with the spread of HIV. Creators of fake news continue to take advantage of the issue by spreading homophobic disinformation, including claims that the DPP spent NT$30 million (about $980,000) to organize Taipei’s Pride Parade, even though the event is funded by its organizers and does not receive sponsorship from political parties.

Disinformation is difficult to combat and the use of online platforms to spread it is rapidly emerging as one of this century’s most serious problems. But online tools, vigilance by social media platforms and observers, and understanding the social and cultural issues exploited by disinformation, can also be used to fight it. Taiwan’s rampant fake news and disinformation problem has reached the point of crisis, but it may also reveal what solutions are most effective at combating it around the world.