Adopting a ratings system for social media like the ones used for film and TV won’t work

Internet platforms like Google, Facebook, and Twitter are under incredible pressure to reduce the proliferation of illegal and abhorrent content on their services.

Interestingly, Facebook’s Mark Zuckerberg recently called for the establishment of “third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards.” In a follow-up conversation with Axios, Kevin Martin of Facebook “compared the proposed standard-setting body to the Motion Picture Association of America’s system for rating movies.”

The ratings group, officially known as the Classification and Rating Administration (CARA), was established in 1968 to stave off government censorship by educating parents about the contents of films. It has been in place ever since. As longtime filmmakers, we have interacted with the MPAA’s ratings system hundreds of times, working closely with the board to preserve our filmmakers’ creative vision while keeping parents informed so they can decide whether a movie is appropriate for their children.

CARA is not a perfect system. Filmmakers do not always agree with the ratings given to their films, but the board strives to be transparent as to why each film receives the rating it does. The system allows filmmakers to determine if they want to make certain cuts in order to attract a wider audience. Additionally, there are occasions where parents may not agree with the ratings given to certain films based on their content. CARA strives to consistently strike the delicate balance between protecting a creative vision and informing people and families about the contents of a film.

CARA’s effectiveness is reflected in the fact that other creative industries – including television, video games, and music – have also adopted their own voluntary ratings systems.

While the MPAA’s ratings system works very well for pre-release review of content from a professionally-produced and curated industry, including the MPAA member companies and independent distributors, we do not believe that the MPAA model can work for dominant internet platforms like Google, Facebook, and Twitter that rely primarily on post hoc review of user-generated content (UGC).


Here’s why: CARA is staffed by parents whose judgment is informed by their experiences raising families – and, most importantly, they rate most movies before they appear in theaters. Once rated by CARA, a movie’s rating will carry over to subsequent formats, such as DVD, cable, broadcast, or online streaming, assuming no other edits are made.

By contrast, large internet platforms like Facebook and Google’s YouTube rely primarily on UGC, which becomes available almost instantaneously to each platform’s billions of users with no prior review. UGC platforms generally do not pre-screen content; instead, they typically rely on users and content moderators, sometimes complemented by AI tools, to flag potentially problematic content after it is posted online.

The numbers are also revealing. CARA rates about 600-900 feature films each year, which translates to approximately 1,500 hours of content annually. That’s the equivalent of the amount of new content made available on YouTube every three minutes. Each day, uploads to YouTube total about 720,000 hours – that is equivalent to the amount of content CARA would review in 480 years!
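
The arithmetic behind those figures is easy to verify. A quick back-of-the-envelope sketch, using only the numbers cited above:

```python
# Scale mismatch between CARA's annual review capacity and YouTube's
# daily upload volume, using the figures cited in the article.
cara_hours_per_year = 1_500      # ~600-900 features/year at roughly 2 hours each
youtube_hours_per_day = 720_000  # reported daily upload volume

# How quickly YouTube accumulates a full "CARA year" of content:
minutes_per_cara_year = cara_hours_per_year / youtube_hours_per_day * 24 * 60
print(minutes_per_cara_year)     # 3.0 (minutes)

# How long CARA would need, at its current pace, to review one day of uploads:
years_to_review_one_day = youtube_hours_per_day / cara_hours_per_year
print(years_to_review_one_day)   # 480.0 (years)
```
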

Another key distinction: premium video companies are legally accountable for all the content they make available, and it is not uncommon for them to have to defend themselves against claims based on the content of material they disseminate.

By contrast, as CreativeFuture said in an April 2018 letter to Congress: “the failure of Facebook and others to take responsibility [for their content] is rooted in decades-old policies, including legal immunities and safe harbors, that actually absolve internet platforms of accountability [for the content they host.]”

In short, internet platforms whose offerings consist mostly of unscreened user-generated content are very different businesses from media outlets that deliver professionally-produced, heavily-vetted, and curated content for which they are legally accountable.

Given these realities, the creative content industries’ approach to self-regulation does not provide a useful model for UGC-reliant platforms, and it would be a mistake to describe any post hoc review process as being “like the MPAA’s ratings system.” It can never play that role.

This doesn’t mean there are no areas where we can collaborate. Facebook and Google could work with us to address rampant piracy. Interestingly, the challenge of controlling illegal and abhorrent content on internet platforms is very similar to the challenge of controlling piracy on those platforms: in both cases, bad things happen, the platforms’ current review systems are too slow to stop them, and harm occurs before mitigation efforts are triggered.

Also, as CreativeFuture has previously said, “unlike the complicated work of actually moderating people’s ‘harmful’ [content], this is cut and dried – it’s against the law. These companies could work with creatives like never before, fostering a new, global community of advocates who could speak to their good will.”

Be that as it may, as Congress and the current Administration continue to consider ways to address online harms, it is important that those discussions be informed by an understanding of the dramatic differences between UGC-reliant internet platforms and creative content industries. A content-reviewing body like the MPAA’s CARA is likely a non-starter for the reasons mentioned above – and policymakers should not be distracted from getting to work on meaningful solutions.

Unregulated facial recognition technology presents unique risks for the LGBTQ+ community

It seems consumers today are granted ever-dwindling opportunities to consider the safety and civil liberties implications of a new technology before it becomes widely adopted. Facial recognition technology is no exception. The well-documented potential for abuse and misuse of these tools built by giant and influential companies as well as government and law enforcement agencies should give serious pause to anyone who values their privacy – especially members of communities that have been historically marginalized and discriminated against.

The cavalier attitude toward unregulated surveillance tools demonstrated by some law enforcement agencies and other local, state, and federal government entities seems to reinforce the notion that forfeiting your personal data and privacy for greater convenience, efficiency, and safety is a fair trade. For vulnerable communities this could not be further from the truth. Without proper oversight, facial recognition technology has the potential to exacerbate existing inequalities and make daily life challenging and dangerous for LGBTQ+ individuals.

Biometric data can provide a uniquely intimate picture of a person’s digital life. Skilled and persistent hackers seeking to exploit access to an individual’s messages on social media, financial records, or location data would view the information collected by facial recognition software as a particularly valuable and worthwhile target, especially as biometric data has become increasingly popular as a form of authentication.

Without proper privacy protections in place, data breaches that target facial recognition data may become far more likely. In the wrong hands, a person’s previously undisclosed sexual orientation or gender identity can become a tool for discrimination, harassment, or harm to their life or livelihood.

The risks to transgender, nonbinary, or gender non-conforming individuals are even more acute. Most facial recognition algorithms are trained on data sets designed to sort individuals into two groups, most often male or female. The extent of the misgendering problem was highlighted in a recent report, which found that over the last three decades of facial recognition research, researchers used a binary construct of gender more than 90 percent of the time and understood gender to be a solely physiological construct more than 80 percent of the time.

Consider the challenge – not to mention the emotional toll – for a transgender individual trying to catch a flight who is subjected to routine stops and additional security screening, all because the facial recognition systems expected to be in use at all airports by 2020 are not built to reconcile their true gender identity with their government-issued ID.

Members of the LGBTQ+ community cannot shoulder the burden of lax digital privacy standards without assuming unnecessary risks to their safety online and offline. Our vibrant communities deserve comprehensive, national privacy protections so that their members can fully participate in society and live without the fear that their data – biometric or otherwise – will be used to further entrench existing bias and prejudice.

Our communities face the challenge of trying to protect themselves from rules that neither they nor the people implementing them fully understand. Congress must act to ensure that current and future applications of facial recognition are built, deployed, and governed with the necessary protections in mind.

This is why LGBT Tech signed on to a letter led by the ACLU, along with more than 60 other privacy, civil liberties, civil rights, investor, and faith groups, urging Congress to put in place a federal moratorium on face recognition for law enforcement and immigration enforcement purposes until Congress fully debates what uses, if any, should be permitted.

Given the substantial concerns, which representatives on both sides of the aisle recognized at a recent hearing, prompt action is necessary to protect people from harm.  We should not move forward with the deployment of this technology until and unless our rights can be fully safeguarded.

NSA improperly collected phone records for a second time, documents reveal

Newly released documents reveal the National Security Agency improperly collected Americans’ call records for a second time, just months after the agency was forced to purge hundreds of millions of collected calls and text records it unlawfully obtained.

The document, obtained by the American Civil Liberties Union, shows the NSA had collected a “larger than expected” number of call detail records from one of the U.S. phone providers, though the redacted document did not reveal which provider or how many records were improperly collected.

The document said the erroneously collected call detail records were “not authorized” by the orders issued by the Foreign Intelligence Surveillance Court, which authorizes and oversees the U.S. government’s surveillance activities.

Greg Julian, a spokesperson for the NSA, confirmed the report in an email to TechCrunch, saying the agency “identified additional data integrity and compliance concerns caused by the unique complexities of using company-generated business records for intelligence purposes.”

NSA said the issues were “addressed and reported” to the agency’s overseers, but did not comment further on the violations as they involve operational matters.

The ACLU called on lawmakers to investigate the improper collection and to shut down the program altogether.

“These documents further confirm that this surveillance program is beyond redemption and a privacy and civil liberties disaster,” said Patrick Toomey, a staff attorney with the ACLU’s National Security Project. “The NSA’s collection of Americans’ call records is too sweeping, the compliance problems too many, and evidence of the program’s value all but nonexistent.”

“There is no justification for leaving this surveillance power in the NSA’s hands,” he said.

Under the government’s so-called Section 215 powers, the NSA collects millions of phone records every year by compelling U.S. phone giants to turn over daily records. The classified program was first revealed in a secret court order compelling Verizon (which owns TechCrunch) to turn over its records, part of the trove of documents leaked by whistleblower Edward Snowden. Those call records include the phone numbers of those communicating and when, though not the contents of the calls; the agency uses the data to make connections between targets of interest.

But the government was forced to curtail the phone records collection program in 2015 following the passage of the USA Freedom Act, the only law passed by Congress since the Snowden revelations that successfully reined in what critics said were the NSA’s vast surveillance powers.

In recent years, the number of collected call records has declined but not gone away completely. In its last transparency report, the government said it collected 434 million phone records, down 18% from the year earlier.

But the government came under fire in June 2018 after it emerged the NSA had unlawfully collected 600 million call and text logs without the proper authority. The agency said “technical irregularities” meant it received call detail records it “was not authorized to receive.”

The agency deleted the entire batch of improperly collected records from its systems.

Following the incidents, the NSA reportedly shut down the phone records collection program, citing overly burdensome legal requirements imposed on the agency. In January, the agency’s spokesperson said the NSA was “carefully evaluating all aspects” of the program and its future, amid rumors that the agency would not ask Congress to reauthorize its Section 215 powers, which are set to expire later this year.

In an email Wednesday, the NSA spokesperson didn’t comment on the future of the program, saying only that it was “a deliberative interagency process that will be decided by the Administration.”

The government’s Section 215 powers are expected to be debated by Congress in the coming months.

Rep. Will Hurd’s Black Hat keynote draws ire over his women’s rights voting record

A decision to confirm Rep. Will Hurd as this year’s keynote speaker at the Black Hat security conference has prompted anger and concern among some longtime attendees because of his voting record on women’s rights.

Hurd, an outspoken Texas Republican who has drawn fire from his own party for regularly opposing the Trump administration, was confirmed Thursday as the conference’s keynote speaker on the strength of his background in cybersecurity. Since taking office in Texas’ 23rd district, the congressman has introduced several bills aimed at securing Internet of Things devices and has pushed to reauthorize the role of a federal chief information officer.

But several people we’ve spoken to described their unease that Black Hat organizers invited Hurd, a self-described pro-life lawmaker, given his consistent opposition to bills supporting women’s rights.

An analysis of Hurd’s voting record shows he supports bills promoting women’s rights only two percent of the time. He voted against a bill that would financially support women in STEM fields, in favor of allowing states to restrict access to and coverage of abortion, and to defund Planned Parenthood.

Many of those we spoke to asked to be kept anonymous amid worries of retaliation or personal attacks. One person who we asked for permission to quote said Hurd’s voting record was “simply awful” for women’s rights. Others in tweets said the move doesn’t reflect well on companies sponsoring the event.

Black Hat says it aims to create an “inclusive environment,” but others have questioned how a political figure whose views cause harm to an entire gender can be considered inclusive. At a time when women’s rights – including the right to access abortion – are being all but outlawed by controversial measures in several states, some have found Hurd’s selection tone-deaf and offensive.

When asked, a spokesperson for Black Hat defended the decision for Hurd to speak:

“Hurd has a strong background in computer science and information security and has served as an advocate for specific cybersecurity initiatives in Congress,” said the spokesperson. “He will offer the Black Hat audience a unique perspective of the infosec landscape and its effect on the government.”

Although previous keynote speakers have included senior government figures, this is the first time Black Hat has confirmed a lawmaker to keynote the conference.

Although abortion rights and cybersecurity are unrelated topics, it’s becoming increasingly difficult to separate social issues from technology and its gatherings. It’s also valid for attendees to express concern that the keynote speaker at a professional security conference opposes what many consider a human right.

Kat Fitzgerald, chief operating officer of the Diana Initiative, a conference for women in cybersecurity, called Hurd a “painfully poor choice” for a keynote speaker. “Simply put, in 2019 women and minorities continue to be ignored,” she said. “This keynote selection, regardless of the voting record, is just another indication of ignoring the InfoSec community that exists today.”

The Diana Initiative, which hosts its annual conference in August, is “about inclusion at all levels, especially in today’s charged environment of excluding women and minorities in so many areas,” said Fitzgerald.

Hurd’s office did not return a request for comment.

DEEPFAKES Accountability Act would impose unenforceable rules — but it’s a start

The new DEEPFAKES Accountability Act in the House — and yes, that’s an acronym — would take steps to criminalize the synthetic media referred to in its name, but its provisions seem too optimistic in the face of the reality of this threat. On the other hand, it also proposes some changes that will help bring the law up to date with the tech.

The bill, proposed by Representative Yvette Clarke (D-NY), it must be said, has the most ridiculous name I’ve encountered: the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act. Amazingly, that acronym (backronym, really) actually makes sense.

It’s intended to stem the potential damage of synthetic media purporting to be authentic, which is rare enough now but soon may be commonplace. With just a few minutes (or even a single frame) of video and voice, a fake version of a person, perhaps a public figure or celebrity, can be created that is convincing enough to fool anyone not looking too closely. And the quality is only getting better.

DEEPFAKES would require anyone creating a piece of synthetic media imitating a person to disclose that the video is altered or generated, using “irremovable digital watermarks, as well as textual descriptions.” Failing to do so will be a crime.

The act also establishes a right on the part of victims of synthetic media to sue the creators and/or otherwise “vindicate their reputations” in court.

Many of our readers will have already spotted the enormous loopholes gaping in this proposed legislation.

First, if a creator of a piece of media is willing to put their name to it and document that it is fake, those are almost certainly not the creators or the media we need to worry about. Jordan Peele is the least of our worries (and in fact the subject of many of our hopes). Requiring satirists and YouTubers to document their modified or generated media seems only to assign paperwork to people already acting legally and with no harmful intentions.

Second, watermark and metadata-based markers are usually trivial to remove. Text can be cropped, logos removed (via more smart algorithms), and even a sophisticated whole-frame watermark might be eliminated simply by being re-encoded for distribution on Instagram or YouTube. Metadata and documentation are often stripped or otherwise made inaccessible. And the inevitable reposters seem to have no responsibility to keep that data intact, either — so as soon as this piece of media leaves the home of its creator, it is out of their control and very soon will no longer be in compliance with the law.
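
To illustrate how fragile metadata-based markers are, here is a hypothetical sketch (pure Python standard library, not any platform’s actual pipeline) that builds a tiny PNG carrying a textual disclosure and then strips every ancillary chunk, the same effect a routine re-encode or repost has:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
# Only these chunk types are needed to render the image; everything else
# (tEXt disclosures, XMP metadata, etc.) is ancillary and can be dropped.
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}

def make_chunk(ctype, data):
    """Build a PNG chunk: 4-byte length, type, data, CRC over type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_ancillary(png):
    """Return a copy of the PNG with all non-critical chunks removed."""
    assert png.startswith(PNG_SIG)
    out, pos = [PNG_SIG], len(PNG_SIG)
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype in CRITICAL:                 # keep only render-critical chunks
            out.append(png[pos:pos + 12 + length])
        pos += 12 + length                    # advance past length/type/data/CRC
    return b"".join(out)

# Build a minimal 1x1 grayscale PNG carrying a textual disclosure label.
ihdr = make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = make_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
label = make_chunk(b"tEXt", b"Comment\x00synthetic media disclosure")
tagged = PNG_SIG + ihdr + label + idat + make_chunk(b"IEND", b"")

stripped = strip_ancillary(tagged)
print(b"tEXt" in tagged, b"tEXt" in stripped)  # True False
```

The stripped file still renders identically; the disclosure is simply gone, which is why any compliance scheme that rides along in metadata cannot survive redistribution.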

Third, it’s far more likely that truly damaging synthetic media will be created with an eye to anonymity and distributed by secondary methods. The law here is akin to asking bootleggers to mark their barrels with their contact information. No malicious actor will even attempt to mark their work as an “official” fake.

That said, just because these rules are unlikely to prevent people from creating and distributing damaging synthetic media — what the bill calls “advanced technological false personation records” — that doesn’t mean the law serves no purpose here.

One of the problems with the pace of technology is that it frequently is some distance ahead of the law, not just in spirit but in letter. With something like revenge porn or cyberbullying, there’s often literally no legal recourse because these are unprecedented behaviors that may not fit neatly under any specific criminal code. A law like this, flawed as it is, defines the criminal behavior and puts it on the books, so it’s clear what is and isn’t against the law. So while someone faking a Senator’s face may not voluntarily identify themselves, if they are identified, they can be charged.

To that end a later portion of the law is more relevant and realistic: It seeks to place unauthorized digital recreations of people under the umbrella of unlawful impersonation statutes. Just as it’s variously illegal to pretend you’re someone you’re not, to steal someone’s ID, to pretend you’re a cop, and so on, it would be illegal to nefariously misrepresent someone digitally.

That gives police and the court system a handhold when cases concerning synthetic media begin pouring in. They can say “ah, this falls under statute so and so” rather than arguing about jurisdiction or law and wasting everyone’s time — an incredibly common (and costly) occurrence.

The bill puts someone at the U.S. Attorney’s Office in charge of things like revenge porn (“false intimate depictions”) to coordinate prosecution and so on. Again, these issues are so new that it’s often not even clear who you or your lawyer or your local police are supposed to call.

Lastly the act would create a task force at the Department of Homeland Security that would form the core of government involvement with the practice of creating deep fakes, and any countermeasures created to combat them. The task force would collaborate with private sector companies working on their own to prevent synthetic media from gumming up their gears (Facebook has just had a taste), and report regularly on the state of things.

It’s a start, anyway — rare it is that the government acknowledges something is a problem and attempts to mitigate it before that thing is truly a problem. Such attempts are usually put down as nanny state policies, alas, so we wait for a few people to have their lives ruined then get to work with hindsight. So while the DEEPFAKES Accountability Act would not, I feel, create much in the way of accountability for the malicious actors most likely to cause problems, it does begin to set a legal foundation for victims and law enforcement to fight against those actors.

You can track the progress of the bill, filed as H.R. 3230 in the 116th Congress.

Protecting the integrity of U.S. elections will require a massive regulatory overhaul, academics say

Ahead of the 2020 elections, former Facebook chief security officer Alex Stamos and his colleagues at Stanford University have unveiled a sweeping new plan to secure U.S. electoral infrastructure and combat foreign campaigns seeking to interfere in U.S. politics.

As the Mueller investigation into electoral interference made clear, foreign agents from Russia (and elsewhere) engaged in a strategic campaign to influence the 2016 U.S. elections. As the chief security officer of Facebook at the time, Stamos was both a witness to the influence campaign on social media and a key architect of the efforts to combat its spread.

Along with Michael McFaul, a former ambassador to Russia, and a host of other academics from Stanford, Stamos lays out a multi-pronged plan that incorporates securing U.S. voting systems, providing clearer guidelines for advertising and the operations of foreign media in the U.S. and integrating government action more closely with media and social media organizations to combat the spread of misinformation or propaganda by foreign governments.

The paper lays out a number of suggestions for securing elections including:

  • Increase the security of the U.S. election infrastructure.
  • Explicitly prohibit foreign governments and individuals from purchasing online advertisements targeting the American electorate.
  • Require greater disclosure measures for FARA-registered foreign media organizations.
  • Create standardized guidelines for labeling content affiliated with disinformation campaign producers.
  • Mandate transparency in the use of foreign consultants and foreign companies in U.S. political campaigns.
  • Foreground free and fair elections as part of U.S. policy and identify election rights as human rights.
  • Signal a clear and credible commitment to respond to election interference.

A lot of heavy lifting by Congress and by media and social media companies would be required to enact all of these policy recommendations, and many of them speak to core issues that policymakers and corporate executives are already attempting to manage.

For lawmakers that means drafting legislation that would require paper trails for all ballots and improve threat assessments of computerized election systems along with a complete overhaul of campaign laws related to advertising, financing, and press freedoms (for foreign press).

The Stanford proposals call for the strict regulation of foreign involvement in campaigns, including a ban on foreign governments and individuals buying online ads that target the U.S. electorate with an eye toward influencing elections. The proposals also call for greater disclosure requirements identifying articles, opinion pieces, or media produced by foreign media organizations. Furthermore, any campaign working with a foreign company or consultant, or with significant foreign business interests, would be required to disclose those connections.

Clearly, the echoes of Facebook’s Cambridge Analytica and political advertising scandals can be heard in some of the suggestions made by the paper’s authors.

Indeed, the paper leans heavily on the use and abuse of social media and tech as a critical vector for an attack on future U.S. elections. And the Stanford proposals don’t shrink from calling on legislators to demand that these companies do more to protect their platforms from being used and abused by foreign governments or individuals.

In some cases companies are already working to enact suggestions from the report. Facebook, Alphabet, and Twitter have said that they will work together to coordinate and encourage the spread of best practices. Media companies need to create (and are working to create) norms for handling stolen information. Labeling manipulated videos or propaganda (or articles and videos that come from sources known to disseminate propaganda) is another task that platforms are undertaking, but an area where there is still significant work to be done (especially when it comes to deepfakes).

As the report’s authors note:

Existing user interface features and platforms’ content delivery algorithms need to be utilized as much as possible to provide contextualization for questionable information and help users escape echo chambers. In addition, social media platforms should provide more transparency around users who are paid to promote certain content. One area ripe for innovation is the automatic labeling of synthetic content, such as videos created by a variety of techniques that are often lumped under the term “deepfakes”. While there are legitimate uses of synthetic media technologies, there is no legitimate need to mislead social media users about the authenticity of that media. Automatically labeling content, which shows technical signs of being modified in this manner, is the minimum level of due diligence required of the major video hosting sites.

There’s more work that needs to be done to limit the targeting capabilities for political advertising and improving transparency around paid and unpaid political content as well, according to the report.

And somewhat troubling is the report’s call for the removal of barriers around sharing information relating to disinformation campaigns that would include changes to privacy laws.

Here’s the argument from the report:

At the moment, access to the content used by disinformation actors is generally restricted to analysts who archived the content before it was removed or governments with lawful request capabilities. Few organizations have been able to analyze the full paid and unpaid content created by Russian groups in 2016, and the analysis we have is limited to data from the handful of companies who investigated the use of their platforms and were able to legally provide such data to Congressional committees. Congress was able to provide that content and metadata to external researchers, an action that is otherwise proscribed by U.S. and European law. Congress needs to establish a legal framework within which the metadata of disinformation actors can be shared in real-time between social media platforms, and removed disinformation content can be shared with academic researchers under reasonable privacy protections.

Ultimately, these suggestions are meaningless without real action from Congress and the President to ensure the security of elections. As the events of 2016, documented in the Mueller report, revealed, there are a substantial number of holes in the safeguards erected to secure our elections. As the country looks for a place to build walls for security, perhaps one around election integrity would be a good place to start.

Maine lawmakers pass bill to prevent ISPs from selling browsing data without consent

Good news!

Maine lawmakers have passed a bill that will prevent internet providers from selling consumers’ private internet data to advertisers.

The state’s senate passed the bill unanimously, 35-0, on Thursday, following an earlier 96-45 vote in favor by state representatives.

The bill, if signed into law by Governor Janet Mills, will force the national and smaller regional internet providers operating in the state to first obtain permission from residents before their data can be sold or passed on to advertisers or other third parties.

Maine has about 1.3 million residents.

In 2017, the Republican-controlled Congress voted to repeal Federal Communications Commission rules that would have required internet providers to obtain customers’ permission before selling their private and personal internet data and browsing histories – including which websites a user visits and for how long – to advertisers.

At the time, the ACLU explained how this rule change affected ordinary Americans:

Your internet provider sees everything you do online. Even if the website you’re visiting is encrypted, your ISP can still see the website name, how frequently you visit the website, and how long you’re there for. And, because you are a paying customer, your ISP knows your social security number, full legal name, address, and bank account information. Linking all that information can reveal a lot about you – for example, if you are visiting a religious website or a support site for people with a particular illness.

In its latest remarks, the ACLU — which along with the Open Technology Institute and New America helped to draft the legislation — praised lawmakers for passing the bill, calling it the “strongest” internet privacy bill of any state.

“Today, the Maine legislature did what the U.S. Congress has thus far failed to do and voted to put consumer privacy before corporate profits,” said Oamshri Amarasingham, advocacy director at the ACLU of Maine, in a statement.

“Nobody should have to choose between using the internet and protecting their own data,” she said.

CFIUS Cometh: What this Obscure Agency Does and Why It Matters to Your Fund or Startup

On January 12, 2016, Grindr announced it had sold a 60% controlling stake in the company to Beijing Kunlun Tech, a Chinese gaming firm, valuing the company at $155 million. Champagne bottles were surely popped at the small-ish firm.

Though not at a unicorn-level valuation, the 9-figure exit was still respectable and signaled a bright future for the gay hookup app. Indeed, two years later, Kunlun bought the rest of the firm at more than double the valuation and was planning a public offering for Grindr.

On March 27, 2019, it all fell apart. Kunlun was putting Grindr up for sale instead.

What went wrong? It wasn’t that Grindr’s business ground to a halt — by all accounts, the business is actually growing. The problem was that Kunlun’s ownership of Grindr was viewed as a threat to national security. Consequently, CFIUS, the Committee on Foreign Investment in the United States, stepped in to block the transaction.

So what changed? CFIUS was expanded in late 2018 by FIRRMA, the Foreign Investment Risk Review Modernization Act, which gave it massive new power and scale. Unlike before, FIRRMA gave CFIUS a technology focus. So now CFIUS isn’t just an American problem—it’s an American tech problem. And in the coming years, it will transform venture capital, Chinese involvement in US tech, and maybe even startups as we know them.

Here’s a closer look at how it all fits together.

What is CFIUS?


CFIUS is the most important agency you’ve never heard of, and until recently it was little more than a committee. In essence, CFIUS has the ability to stop foreign entities, called “covered entities,” from acquiring companies when the deal could adversely affect national security—a “covered transaction.”

Once a filing is made, CFIUS investigates the transaction and both parties, which can take over a month in its first pass. From there, the company and CFIUS enter a negotiation to see if they can resolve any issues.

Tech stocks tumble as China retaliates in latest salvo of the trade war

Shares of technology companies were hit hard as China retaliated against the U.S. in the latest salvo of the ongoing trade war between the two countries.

The S&P 500 Index shed roughly $1.1 trillion of value while the Dow Jones Industrial Average and the Nasdaq Composite Index fell 2.38 percent and 3.41 percent, respectively.

On Monday, China responded in kind to the U.S. raising tariffs on imports to 25%, imposing 25% duties on some $60 billion of U.S. exports to the country.

On June 1, Beijing will impose 25% tariffs on more than 5,000 products. Several more exports to the country will see their duties rise to 20%. That’s up from 10% and 5% previously. The highest tariffs appear designed to cause pain among President Donald Trump’s political base of support — hitting animal products, fruits and vegetables that come from the Midwest.

But tech companies are particularly exposed in the trade war. Indeed, the news sent technology shares spiraling in what venture capitalist (and former TechCrunch co-editor-in-chief) Alexia Bonatsos called the “Tech Red Wedding.”

Rising tariffs will make the tech products from Apple and other American tech companies more expensive to manufacture, which will likely cause hardware manufacturers to raise prices at home, while duties on the finished goods coming to China could make them prohibitively expensive for local buyers in the country.

More expensive consumer products also mean less money to spend on non-essential items, which could mean more frugal behavior from consumers and less spending in the on-demand economy. It could also cause a pull-back in advertising as companies retrench and cut spending in areas that are considered to be non-core.

All of that could leave tech stocks exposed — beyond algorithms just dumping holdings and taking profits in what looks to be a prolonged market downturn.

The trade war, which already took a toll on Uber’s initial public offering, took another bite out of the company’s (short term) stock market performance today.

Uber was far from the only tech stock seeing red. Shares of Amazon were down 3.56 percent, Alphabet was down 2.66 percent, and Apple fell 5.81 percent. Meanwhile, Facebook shares fell 3.61 percent and Netflix tumbled over 4 percent on the day.

Things may look up for some tech companies again, but they’re unlikely to receive the kind of bailouts or subsidies that the President is offering to American farmers hit by the economic battle with China. Unless Congress can get stalled negotiations around an infrastructure package back on track (something that seems less and less likely as the 2020 elections start to cast their shadow over the business of governing), there’s little hope for any government assistance that could cushion the blow.

“Our view is this could escalate for at least a matter of weeks, if not months, and it’s really to get the two back to the negotiating table and finish the deal, is probably going to require more pain in the markets…Really the only question is if we need a 5%, 10% or bigger market correction,” Ethan Harris, head of global economics at Bank of America Merrill Lynch, told CNBC.

Another day, another U.S. company forced to divest of Chinese investors

Foreign investment scrutiny continues to creep into the startup world via a once obscure U.S. government agency that has new tools and a shift in focus that stands to impact young, high-growth companies in huge ways. The Committee on Foreign Investment in the U.S., or CFIUS, recently made waves when it forced Chinese investors into two American companies to divest because of national security concerns.

There is much to learn from these developments about how government concerns over foreign investment will affect startups and investors going forward.

It is important to understand how we got here. CFIUS has long had the authority to review investments for national security concerns when the investment delivers “control” of a U.S. entity to a foreign entity — and control is defined broadly to mean the ability to determine important matters of the business. CFIUS is the body that rejected Broadcom’s acquisition of Qualcomm, to name one well-known example.

The Treasury Department-led body can tap a few powers if it has concerns about an investment, such as blocking it outright, requiring mitigation measures, or—as we saw recently—forcing a fire sale of assets long after a deal is complete.

In the last few weeks, CFIUS has forced Chinese investors to divest from PatientsLikeMe, a healthcare startup that claims to have millions of data points about diseases, and Grindr, the LGBTQ dating app that collects personal data.

Historically, CFIUS’s focus has been on things like ports, computer systems, and real estate adjacent to military bases, but in recent years its emphasis has included data as a national security threat. The Grindr and PatientsLikeMe actions underscore that CFIUS is more focused than ever on how data can pose a security threat.

For example, the U.S. government’s move against Grindr was reportedly motivated by concerns the Chinese government could blackmail individuals with security clearances or its location data could help unmask intelligence agents. These developments make CFIUS highly relevant to tech and healthcare startups, which frequently hold valuable data about customers and users.

Last year, Congress expanded CFIUS’s jurisdiction and gave it new tools to scrutinize even minority, non-controlling investments into critical technology companies or those with sensitive personal data of U.S. citizens if the investor receives certain rights, like a board seat. These might be direct investments into startups by a foreign corporation or individual, or indirect investments into a venture fund by institutional investors like foreign pensions, endowments, or family offices.

Many aspects of the new law have been partially implemented through a pilot program that is impacting foreign investors into venture funds and direct investments into startups. One piece of the law that has not been implemented through the pilot program is the authority of CFIUS to scrutinize certain non-controlling investments into companies that maintain or collect “sensitive personal data of United States citizens that may be exploited in a manner that threatens national security.”

This piece is likely to go into effect in early 2020.

Keep in mind that in the cases of Grindr and PatientsLikeMe, the government relied on its preexisting authority to police investments that delivered control to a foreign person. Due to CFIUS reform, we are likely to see it similarly scrutinize minority, non-controlling investments into companies with sensitive personal data once the authorities are fully in force. Now is the time for investors and startups to go to school on recent cases to understand what is at stake.

Three lessons stand out from the Grindr and PatientsLikeMe actions.

First, CFIUS’s focus has evolved over the years to include control over data-rich companies. That is a trend that is likely to pick up considerably now that Congress has directed the agency to examine some of these deals, even when the investment does not give control to a foreign person.

Second, in both the Grindr and PatientsLikeMe cases, reporting indicates that neither company filed with CFIUS in advance of the transaction, thereby opening both companies up to the deals being unwound. Once CFIUS’s focus on sensitive data expands to non-controlling investments, we can assume CFIUS will not be shy about forcing divestiture for venture-style investments if the parties did not file and get approval for the transaction in advance.

Finally, it is important to understand that while recent newsworthy cases involved China, CFIUS’s jurisdiction applies on a global basis, so its data concerns may port over to investments from other countries as well. The National Venture Capital Association, where I work, is urging Treasury to use authority it has in the CFIUS reform bill to not apply the expansion to non-controlling investments from friendly countries. This makes perfect sense, since the impetus for CFIUS expansion was largely China, and narrowing the scope of foreign actors will help CFIUS focus on true threats. However, as long as the pilot rules are in effect—and perhaps longer—the full suite of CFIUS’s authorities apply whether you are from China, Canada, or Chile.

The one constant of the enhanced foreign investment scrutiny we have seen of late is that it is always shifting. Investors, entrepreneurs, and companies must be on their toes going forward to understand how to raise and deploy capital in innovative American companies.