VCs see much to like in Democrats’ $1.5 trillion Moving Forward Act

“The Moving Forward Act reads like a $1.5 trillion validation of our fund’s thesis — that upgrading cities and related infrastructure is key to fighting the existential threat of climate change and improving lives,” said Stonly Baptiste, co-founder and partner at venture capital fund Urban Us.

Democrats in Congress are wrestling with the twin problems of mass unemployment and a long-delayed need to rebuild America’s crumbling infrastructure. Now, with just 124 days until the election, the party is building a platform that supports national funding to boost employment and develop more sustainable infrastructure heavily focused on renewables.

Some venture investors have long supported aspects of the bill, but persistent gridlock made any movement on new policy unlikely — at least in the near term. The proposed legislation contains provisions that would use $1.5 trillion to overhaul the nation’s transportation infrastructure, schools, affordable housing, renewable energy capacity and postal service.

“[In] this time of strong political division in Washington, D.C., we’ll have to see what if anything can get done on this topic in the Senate. I do have some hope that as we get closer to the election, standing in the way of clean energy is going to be seen by many in the Senate as a vote-loser, and that could sway some votes to support something bipartisan,” said Rob Day, a longtime investor in sustainable technologies and a general partner at Spring Lane Capital. “But I’m not really expecting anything to come close to what’s in this House bill. So as an investor I support what they’re doing here, but I’m not yet changing any investment strategies around it.”

Baptiste said investors already support many of the initiatives under the Moving Forward Act, but the incentives proposed under the Democratic plan could redouble those efforts.

“In general, it seems like it would incentivize private capital to further align and mobilize in the climate-change battle,” Baptiste said. “Over the last 10 years, VC capital has been increasingly invested in transit in general and there is a 20-year history in the clean energy sector.”

Dear Sophie: Is immigration happening? Who can I hire?

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

“Dear Sophie” columns are accessible for Extra Crunch subscribers; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

What is going on with recent USCIS furloughs and Trump’s H-1B ban?

I handle recruitment for several tech companies. Is immigration happening? Who can I hire?

—Frustrated in Fremont

Dear Fremont:

Immigration is still possible and I will explain how below. The administration continues to miss the mark with immigration policy. Trump’s U.S. unemployment “solution” of cutting off the stream of global talent to the U.S. is short-sighted. The administration is shooting America in the foot by walling off the promise of post-COVID economic revitalization and job creation for Americans through the contributions of immigrant entrepreneurs, investors and workers.

USCIS just provided a 30-day furlough notice to more than 70% of its employees. Reporters have been reaching out to me every day requesting stories of affected immigrants and HR professionals; please sign up to share your immigration story with journalists.

U.S. suspends export of sensitive tech to Hong Kong as China passes new national security law

The United States government today began taking measures to end Hong Kong’s special status, one month after Secretary of State Michael Pompeo told Congress that Hong Kong should no longer be considered autonomous from China. These include suspending export license exceptions for sensitive U.S. technology and ending the export of defense equipment to Hong Kong. Both the Commerce and State Departments also said further restrictions are being evaluated.

The U.S. government’s announcements were made a few hours before news broke that China had passed a new national security law that will give it greater control over Hong Kong. It is expected to take effect on July 1, according to the South China Morning Post.

The term “special status” refers to arrangements that recognized the difference between Hong Kong and mainland China under the “one country, two systems” policy put into place when the United Kingdom handed control of Hong Kong back to Beijing in 1997. These included different export controls, immigration policies and lower tariffs. But that preferential treatment was put into jeopardy after China proposed the new national security law, which many Hong Kong residents fear will end the region’s judicial independence from Beijing.

The U.S. Commerce Department and State Department issued separate statements today detailing the new restrictions on Hong Kong. Secretary of Commerce Wilbur Ross said the Commerce Department will suspend export license exceptions for sensitive U.S. technology, and that “further actions to eliminate differential treatment are also being evaluated.”

The State Department said that it will end exports of U.S. defense equipment and also “take steps toward imposing the same restrictions on U.S. defense and dual-use technologies to Hong Kong as it does for China.”

In a statement to Reuters, Kurt Tong, a former U.S. consul general in Hong Kong, said that the U.S. government’s decisions today would not impact a large amount of trade between the U.S. and Hong Kong because the territory is not a major manufacturing center and its economy is mostly services.

According to figures from the Office of the United States Trade Representative, Hong Kong accounted for 2.2% of overall U.S. exports in 2018, totaling $37.3 billion, with the top export categories being electrical machinery, precious metals and stones, art and antiques, and beef. But the new restrictions could make it more difficult for U.S. semiconductor and other technology companies to do business with Hong Kong clients.

Other restrictions proposed by the United States include ending its extradition treaty with Hong Kong.

Both the State and Commerce departments said that the restrictions were put into place for national security reasons. “We can no longer distinguish between the export of controlled items to Hong Kong or to mainland China,” Pompeo wrote. “We cannot risk these items falling into the hands of the People’s Liberation Army, whose primary purpose is to uphold the dictatorship of the CCP by any means necessary.”

In his statement, Ross said, “With the Chinese Communist Party’s imposition of new security measures on Hong Kong, the risk that sensitive U.S. technology will be diverted to the People’s Liberation Army or Ministry of State Security has increased, all while undermining the territory’s autonomy.”

Trump suspended from Twitch, as Reddit bans the ‘The_Donald’ and additional subreddits

Two big new pieces of news today from the ongoing battle between social media and politics. Both Twitch and Reddit have made moves against political content, citing violations of terms of service.

Twitch confirmed today that it has temporarily suspended the president’s account. “Hateful conduct is not allowed on Twitch,” a spokesperson for the streaming giant told TechCrunch. “In line with our policies, President Trump’s channel has been issued a temporary suspension from Twitch for comments made on stream, and the offending content has been removed.”

Twitch specifically cites two incidents, comments made by Trump at rallies four years apart. The first comes from his campaign kickoff, including the now infamous words:

When Mexico sends its people, they’re not sending their best. They’re not sending you. They’re not sending you. They’re sending people that have lots of problems, and they’re bringing those problems with us. They’re bringing drugs. They’re bringing crime. They’re rapists. And some, I assume, are good people. But I speak to border guards and they tell us what we’re getting. And it only makes common sense. It only makes common sense. They’re sending us not the right people.

The second is from the recent rally in Tulsa, Oklahoma, his first since COVID-19-related shutdowns ground much of presidential campaigning to a halt. Here’s the pertinent bit from that:

Hey, it’s 1:00 o’clock in the morning and a very tough, I’ve used the word on occasion, hombre, a very tough hombre is breaking into the window of a young woman whose husband is away as a traveling salesman or whatever he may do. And you call 911 and they say, “I’m sorry, this number’s no longer working.” By the way, you have many cases like that, many, many, many. Whether it’s a young woman, an old woman, a young man or an old man and you’re sleeping.

Twitch tells TechCrunch that it offered the following guidance to Trump’s team when the channel was launched, “Like anyone else, politicians on Twitch must adhere to our Terms of Service and Community Guidelines. We do not make exceptions for political or newsworthy content, and will take action on content reported to us that violates our rules.”

That news follows the recent ban of the massive The_Donald subreddit, which sported more than 790,000 users and was largely devoted to sharing content about Trump. Reddit confirmed the update to its policy that resulted in the ban, along with 2,000 other subreddits, including one devoted to the hugely popular leftist comedy podcast, Chapo Trap House.

The company cites the following new rules:

  • Rule 1 explicitly states that communities and users that promote hate based on identity or vulnerability will be banned.
    • There is an expanded definition of what constitutes a violation of this rule, along with specific examples, in our Help Center article.
  • Rule 2 ties together our previous rules on prohibited behavior with an ask to abide by community rules and post with authentic, personal interest.
    • Debate and creativity are welcome, but spam and malicious attempts to interfere with other communities are not.
  • The other rules are the same in spirit but have been rewritten for clarity and inclusiveness.

It adds:

All communities on Reddit must abide by our content policy in good faith. We banned r/The_Donald because it has not done so, despite every opportunity. The community has consistently hosted and upvoted more rule-breaking content than average (Rule 1), antagonized us and other communities (Rules 2 and 8), and its mods have refused to meet our most basic expectations. Until now, we’ve worked in good faith to help them preserve the community as a space for its users—through warnings, mod changes, quarantining, and more.

Reddit adds that it banned the smaller Chapo board for “consistently host[ing] rule-breaking content and their mods have demonstrated no intention of reining in their community.”

Trump in particular has found himself waging war on social media sites. After Twitter played whack-a-mole with problematic tweets around mail-in voting and other issues, he signed an executive order taking aim at Section 230 of the Communications Decency Act, which protects sites from being sued for content posted by users.

Blackbaud’s cloud for ‘social good’ serves customers that work against human rights

Blackbaud offers enterprise tools ostensibly in a campaign to support social good, but the company also provides services to far-right organizations the Heritage Foundation and the Center for Security Policy, TechCrunch has discovered.

Blackbaud describes itself as “the world’s leading cloud software company powering social good,” and collects revenues in the hundreds of millions of dollars from that business. Nothing about that mission is partisan, and good can of course be accomplished by groups all across the current American political spectrum.

But while conservative causes are by no means excluded from the category, the far-right stances of Heritage and especially those of CSP are difficult to square with even the broadest interpretation of social good.

The decades-old Heritage Foundation has been behind lobbying efforts against climate change action, equal rights for LGBTQ Americans and immigration modernization efforts. It has worked on behalf of the oil and tobacco industries, opposed health care reform and recommended the likes of Betsy DeVos and Scott Pruitt to the administration.

Google recently scuttled an advisory committee on AI that included Heritage’s president after overwhelming criticism that it had essentially endorsed the think tank’s policies.

According to GLAAD, Heritage “has made it a focused mission to stop all laws protecting on the basis of sexual orientation and gender identity.” This alone makes it a poor match for a company that just weeks ago said in celebration of Pride that “we want to underscore that LGBTQ+ rights are human rights.”

Yet according to documents reviewed by TechCrunch, Blackbaud collects about $180,000 in annual revenue from the Heritage Foundation and has worked with them for about 15 years.

The Center for Security Policy is a more extreme case. Designated as a hate group by the Southern Poverty Law Center, CSP has pursued a hardline anti-Muslim stance for years. It publishes reports saying jihadists are infiltrating the U.S. government and was commissioned to perform polling to show support for Trump’s Muslim travel ban. A CSP executive was hired by John Bolton to serve in the administration and later left to rejoin the anti-Muslim organization as its head.

Image Credits: Blackbaud

One recent study warns of a Muslim plot “even more sinister” than the widespread sexual abuse revealed in the #MeToo era: “Sharia-supremacists are insinuating themselves into script-writing, Hollywood ‘consulting,’ film production, and even financial scholarships designed to facilitate young Muslims’ penetration of the entertainment industry.”

The documents show a smaller contract with CSP, amounting to about $11,000 in annual revenue.

Blackbaud records show interactions with both organizations within the last month or so; these are current contracts. Neither Heritage nor CSP responded to requests for comment, and Blackbaud would not confirm they are customers, as a matter of policy.

“Blackbaud provides cloud software, services, data intelligence and expertise to a wide spectrum of registered 501c3 organizations and companies that are lawfully conducting business. Those organizations are diverse in their missions and belief systems, but we remain committed to building the best software to support all who are truly doing good in achieving their missions,” the company said in a statement. It then referred me to a recent blog post entitled “EQUALITY.”

While doing business with a couple of bad actors doesn’t negate Blackbaud’s work with other organizations actually working for the social good, the discrepancy bears highlighting given the company’s virtue-first brand. If the concept of social good they are working with is mutable enough that it includes hate groups, other organizations might think twice about trusting that message.

At times like the present, companies are being called on to not just say they are dedicated to things like human rights, anti-racism and other causes, but to demonstrate it and respond to criticism. According to Blackbaud:

“Racism and acts of hate strip people of basic human rights and defy the very principles of what ‘good’ stands for. We condemn racism and discrimination and seek solutions to end the suffering in our communities and world.

Equality must be more than a motto. It must be a promise to each other and the world.”

With Blackbaud espousing equality on one hand and making millions from those who oppose it on the other, it may fairly be questioned whether that promise is being kept.

Four views: How will the work visa ban affect tech and which changes will last?

The Trump administration’s decision to extend its ban on issuing work visas to the end of this year “would be a blow to very early-stage tech companies trying to get off the ground,” Silicon Valley immigration lawyer Sophie Alcorn told TechCrunch this week.

In 2019, the federal government issued more than 188,000 H-1B visas — thousands of workers who live in the San Francisco Bay Area and other startup hubs hold H-1B and H-2B visas or J and L visas, which are explicitly prohibited under the president’s ban. Normally, the government would process tens of thousands of visa applications and renewals in October at the start of its fiscal year, but the executive order all but guarantees new visas won’t be granted until 2021.

Four TechCrunch staffers analyzed the president’s move in an attempt to see what it portends for the tech industry, the U.S. economy and our national image:

Danny Crichton: Trump’s ban is a “self-inflicted” blow to our precarious economy

America’s economic supremacy is increasingly precarious.

Outsourcing and offshoring led to a generational loss of manufacturing skills, management incompetence killed off many of the country’s leading businesses and the nation now competes directly with China and other countries in critical emerging industries like 5G, artificial intelligence and the other alphabet soup of technological acronyms.

We have one thing going for us that no other country can rival: our ability to attract top talent. No other country hosts more immigrants, nor does any other country capture the imagination of a greater portion of the world’s top minds. America — whether Silicon Valley, Wall Street, Hollywood, Harvard Square or anywhere in between — is where smart people congregate.

Or at least, it was.

The coronavirus was the first major blow, partially self-inflicted. Remote work pushed employers toward keeping workers where they are (both domestically and overseas) rather than centralizing them in a handful of corporate HQs. Meanwhile, students — the first step for many talented workers to enter the United States — are taking a pause, fearing renewed outbreaks of COVID-19 in America while much of the rest of the developed world reopens with few cases.

The second blow was entirely self-inflicted. Earlier this week, President Donald Trump announced that his administration would halt processing critical worker visas like the H-1B due to the current state of the American economy.

As advertisers revolt, Facebook commits to flagging ‘newsworthy’ political speech that violates policy

As advertisers pull away from Facebook to protest the social networking giant’s hands-off approach to misinformation and hate speech, the company is instituting a number of stronger policies to woo them back.

In a livestreamed segment of the company’s weekly all-hands meeting, CEO Mark Zuckerberg recapped some of the steps Facebook is already taking, and announced new measures to fight voter suppression and misinformation — although they amount to things that other social media platforms like Twitter have already enacted and enforced in more aggressive ways.

At the heart of the policy changes is an admission that the company will continue to allow politicians and public figures to disseminate hate speech that does, in fact, violate Facebook’s own guidelines — but it will add a label to denote they’re remaining on the platform because of their “newsworthy” nature.

It’s a watered-down version of the more muscular stance that Twitter has taken to limit the ability of its network to amplify hate speech or statements that incite violence.

Zuckerberg said:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.

The problems with this approach are legion. Ultimately, it’s another example of Facebook’s insistence that with hate speech and other types of rhetoric and propaganda, the onus of responsibility is on the user.

Zuckerberg did emphasize that threats of violence or voter suppression are not allowed to be distributed on the platform whether or not they’re deemed newsworthy, adding that “there are no exceptions for politicians in any of the policies I’m announcing here today.”

But it remains to be seen how Facebook will define the nature of those threats — and balance that against the “newsworthiness” of the statement.

The steps around election year violence supplement other efforts that the company has taken to combat the spread of misinformation around voting rights on the platform.
The new measures that Zuckerberg announced also include partnerships with local election authorities to determine the accuracy of information and what is potentially dangerous. Zuckerberg also said that Facebook would ban posts that make false claims (like saying ICE agents will be checking immigration papers at polling places) or threats of voter interference (like “My friends and I will be doing our own monitoring of the polls”).

Facebook is also going to take additional steps to restrict hate speech in advertising.

“Specifically, we’re expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others,” Zuckerberg said. “We’re also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.”

Zuckerberg’s remarks came days after advertisers — most recently Unilever and Verizon — announced that they’re going to pull their money from Facebook as part of the #StopHateforProfit campaign organized by civil rights groups.

These are small, good steps from the head of a social network that has been recalcitrant in the face of criticism from all corners (except, until now, from the advertisers that matter most to Facebook). But they don’t do anything at all about the teeming mass of misinformation that exists in the private channels that simmer below the surface of Facebook’s public-facing messages, memes and commentary.

Unilever and Verizon are the latest companies to pull their advertising from Facebook

Advertiser momentum against Facebook’s content and monetization policies continues to grow.

Last night, Verizon (which owns TechCrunch) said it will be pausing advertising on Facebook and Instagram “until Facebook can create an acceptable solution that makes us comfortable and is consistent with what we’ve done with YouTube and other partners.”

Then today, it was joined by consumer goods giant Unilever, which said it will halt all U.S. advertising on Facebook, Instagram (owned by Facebook) and even Twitter, at least until the end of the year.

“Based on the current polarization and the election that we are having in the U.S., there needs to be much more enforcement in the area of hate speech,” Unilever’s executive vice president of global media Luis Di Como told The Wall Street Journal.

The effort to bring advertiser pressure to bear on Facebook began with a campaign called #StopHateforProfit, which is coordinated by the Anti-Defamation League, the NAACP, Color of Change, Free Press and Sleeping Giants. The campaign is calling for changes that are supposed to improve support for victims of racism, anti-Semitism and hate, and to end ad monetization on misinformation and hateful content.

The list of companies that have agreed to pull their advertising from Facebook also includes outdoor brands like REI, The North Face and Patagonia. (An important caveat: Gizmodo noted that it’s not clear whether these advertisers are also pulling their money from the Facebook Audience Network.)

Facebook provided the following statement in response to Unilever’s announcement:

We invest billions of dollars each year to keep our community safe and continuously work with outside experts to review and update our policies. We’ve opened ourselves up to a civil rights audit, and we have banned 250 white supremacist organizations from Facebook and Instagram. The investments we have made in AI mean that we find nearly 90% of Hate Speech [and take] action before users report it to us, while a recent EU report found Facebook assessed more hate speech reports in 24 hours than Twitter and YouTube. We know we have more work to do, and we’ll continue to work with civil rights groups, GARM, and other experts to develop even more tools, technology and policies to continue this fight.

And Twitter provided a statement from Sarah Personette, vice president of global client solutions:

Our mission is to serve the public conversation and ensure Twitter is a place where people can make human connections, seek and receive authentic and credible information, and express themselves freely and safely. We have developed policies and platform capabilities designed to protect and serve the public conversation, and as always, are committed to amplifying voices from underrepresented communities and marginalized groups. We are respectful of our partners’ decisions and will continue to work and communicate closely with them during this time.

As of 1:57 p.m. EDT, Facebook stock was down more than 7% from the start of trading. CEO Mark Zuckerberg said he will also be addressing these issues at a town hall starting at 2 p.m. EDT today.
Volcker Rule reforms expand options for raising VC funds

It’s time to put on our thinking caps so we can discuss an esoteric but important policy change and how it is going to impact the VC world.

The 2008 financial crisis devastated the global economy. One of the reforms that came from the detritus of that situation was a policy known as the Volcker Rule.

The rule, proposed by former Fed chairman Paul Volcker and passed into law with the Dodd-Frank Act, was designed to limit the ways that banks could invest their balance sheets to avoid the kind of cataclysmic systemic risks that the world witnessed during the crisis. Many banks faced a liquidity crunch after investing in mortgage-backed securities (MBSs), collateralized debt obligations (CDOs), and other even more arcane speculative financial instruments (like POGs, or Piles Of Garbage) in seeking profits.
One of the unintended consequences of the rule is that it limited banks from investing in certain “covered funds,” which was written broadly enough that it, well, covered VC firms as well as hedge funds and other private equity vehicles. Reforms to that policy (and to the rule in general) have been proposed for a decade with little traction until recently.

Now, a number of reforms are underway to the Volcker Rule, which has been a domestic regulatory priority for the Trump administration since Inauguration Day.

First, a simplification to some of the rule’s regulations was passed late last year and went into effect in January. Now, a final rule to reform the Volcker Rule’s applications to VC firms, among other issues, was agreed to by a group of U.S. regulatory agencies, and will go into effect later this year.

Privacy not a blocker for “meaningful” research access to platform data, says report

European lawmakers are eyeing binding transparency requirements for Internet platforms in a Digital Services Act (DSA) due to be drafted by the end of the year. But the question of how to create governance structures that provide regulators and researchers with meaningful access to data so platforms can be held accountable for the content they’re amplifying is a complex one.

Platforms’ own efforts to open up their data troves to outside eyes have been chequered to say the least. Back in 2018, Facebook announced the Social Science One initiative, saying it would provide a select group of academics with access to about a petabyte’s worth of sharing data and metadata. But it took almost two years before researchers got access to any data.

“This was the most frustrating thing I’ve been involved in, in my life,” one of the involved researchers told Protocol earlier this year, after spending some 20 months negotiating with Facebook over exactly what it would release.

Facebook’s political Ad Archive API has similarly frustrated researchers. “Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing),” said Mozilla last year, accusing the tech giant of transparency-washing.

Facebook, meanwhile, points to European data protection regulations and the privacy requirements attached to its business following interventions by the U.S. FTC to justify its painstakingly slow progress on data access. But critics argue this is just a cynical shield against transparency and accountability. Plus, of course, none of these regulations stopped Facebook grabbing people’s data in the first place.

In January, Europe’s lead data protection regulator penned a preliminary opinion on data protection and research which warned against such shielding.

“Data protection obligations should not be misappropriated as a means for powerful players to escape transparency and accountability,” wrote EDPS Wojciech Wiewiorówski. “Researchers operating within ethical governance frameworks should therefore be able to access necessary API and other data, with a valid legal basis and subject to the principle of proportionality and appropriate safeguards.”

Nor is Facebook the sole offender here, of course. Google brands itself a ‘privacy champion’ on account of how tight a grip it keeps on access to user data, heavily mediating the data it releases in areas where it claims ‘transparency.’ And for years, Twitter routinely disparaged third-party studies that sought to understand how content flows across its platform, saying its API didn’t provide full access to all platform data and metadata, so the research couldn’t show the full picture. Another convenient shield to eschew accountability.

More recently, Twitter has made some encouraging noises to researchers, updating its developer policy to clarify rules and offering up a COVID-related dataset, though the included tweets remain self-selected. So Twitter’s mediating hand remains on the research tiller.

A new report by AlgorithmWatch seeks to grapple with the knotty problem of platforms evading accountability by mediating data access — suggesting some concrete steps to deliver transparency and bolster research, including by taking inspiration from how access to medical data is mediated, among other discussed governance structures.

The goal: “Meaningful” research access to platform data. (Or as the report title puts it: Operationalizing Research Access in Platform Governance: What to Learn from Other Industries?)

“We have strict transparency rules to enable accountability and the public good in so many other sectors (food, transportation, consumer goods, finance, etc). We definitely need it for online platforms — especially in COVID-19 times, where we’re even more dependent on them for work, education, social interaction, news and media consumption,” co-author Jef Ausloos tells TechCrunch.

The report, which the authors are aiming at European Commission lawmakers as they ponder how to shape an effective platform governance framework, proposes mandatory data sharing frameworks with an independent EU-institution acting as an intermediary between disclosing corporations and data recipients.

“Such an institution would maintain relevant access infrastructures including virtual secure operating environments, public databases, websites and forums. It would also play an important role in verifying and pre-processing corporate data in order to ensure it is suitable for disclosure,” they write in a report summary.

Discussing the approach further, Ausloos argues it’s important to move away from “binary thinking” to break the current ‘data access’ trust deadlock. “Rather than this binary thinking of disclosure vs opaqueness/obfuscation, we need a more nuanced and layered approach with varying degrees of data access/transparency,” he says. “Such a layered approach can hinge on types of actors requesting data, and their purposes.”

A market research request might only get access to very high-level data, he suggests, whereas medical research by academic institutions could be given more granular access — subject, of course, to strict requirements (such as a research plan, ethics board approval and so on).

“An independent institution intermediating might be vital in order to facilitate this and generate the necessary trust. We think it is vital that that regulator’s mandate is detached from specific policy agendas,” says Ausloos. “It should be focused on being a transparency/disclosure facilitator — creating the necessary technical and legal environment for data exchange. This can then be used by media/competition/data protection/etc authorities for their potential enforcement actions.”

Ausloos says many discussions on setting up an independent regulator for online platforms have proposed too many mandates or competencies, making political consensus impossible to achieve. The theory is that a leaner entity with a narrow transparency/disclosure remit would be better able to cut through noisy objections.

The infamous example of Cambridge Analytica certainly looms large over the 'data for research' space — aka the disgraced data company that paid a Cambridge University academic to use an app to harvest and process Facebook user data for political ad targeting. And Facebook has thought nothing of turning this massive platform data misuse scandal into a stick to beat back regulatory proposals aiming to crack open its data troves.

But Cambridge Analytica was a direct consequence of a lack of transparency, accountability and platform oversight. It was also, of course, a massive ethical failure — consent for political targeting was never sought from the people whose data was acquired. So it hardly works as an argument against regulating access to platform data. On the contrary.

With such 'blunt instrument' talking points being lobbed into the governance debate by self-interested platform giants, the AlgorithmWatch report brings both welcome nuance and solid suggestions on how to create effective governance structures for modern data giants.

On the layered access point, the report suggests the most granular access to platform data would be the most highly controlled, along the lines of a medical data model. “Granular access can also only be enabled within a closed virtual environment, controlled by an independent body — as is currently done by Findata [Finland’s medical data institution],” notes Ausloos.

Another governance structure discussed in the report — as a case study from which to draw learnings on how to incentivize transparency and thereby enable accountability — is the European Pollutant Release and Transfer Register (E-PRTR). This regulates pollutant emissions reporting across the EU, and results in emissions data being freely available to the public via a dedicated web-platform and as a standalone dataset.

“Credibility is achieved by assuring that the reported data is authentic, transparent and reliable and comparable, because of consistent reporting. Operators are advised to use the best available reporting techniques to achieve these standards of completeness, consistency and credibility,” the report says on the E-PRTR.

“Through this form of transparency, the E-PRTR aims to impose accountability on operators of industrial facilities in Europe towards the public, NGOs, scientists, politicians, governments and supervisory authorities.”

While EU lawmakers have signalled an intent to place legally binding transparency requirements on platforms — at least in some less contentious areas, such as illegal hate speech, as a means of obtaining accountability on specific content problems — they have simultaneously set out a sweeping plan to fire up Europe's digital economy by boosting the reuse of (non-personal) data.

Leveraging industrial data to support R&D and innovation is a key plank of the Commission’s tech-fuelled policy priorities for the next five+ years, as part of an ambitious digital transformation agenda.

This suggests that any regional move to open up platform data is likely to go beyond accountability, given EU lawmakers are pushing for the broader goal of creating a foundational digital support structure to enable research through data reuse. So if privacy-respecting data sharing frameworks can be baked in, a platform governance structure designed to enable regulated data exchange almost by default starts to look very possible within the European context.

“Enabling accountability is important, which we tackle in the pollution case study; but enabling research is at least as important,” argues Ausloos, who does postdoc research at the University of Amsterdam’s Institute for Information Law. “Especially considering these platforms constitute the infrastructure of modern society, we need data disclosure to understand society.”

“When we think about what transparency measures should look like for the DSA we don’t need to reinvent the wheel,” adds Mackenzie Nelson, project lead for AlgorithmWatch’s Governing Platforms Project, in a statement. “The report provides concrete recommendations for how the Commission can design frameworks that safeguard user privacy while still enabling critical research access to dominant platforms’ data.”