Tech stocks tumble as China retaliates in latest salvo of the trade war

Shares of technology companies were hit hard as China retaliated against the U.S. in the latest salvo of the ongoing trade war between the two countries.

The S&P 500 Index shed roughly $1.1 trillion of value, while the Dow Jones Industrial Average and the Nasdaq Composite Index fell 2.38 percent and 3.41 percent, respectively.

On Monday, China responded in kind to the U.S. raising tariffs on Chinese imports to 25%, imposing duties of up to 25% on some $60 billion of U.S. exports to the country.

On June 1, Beijing will impose 25% tariffs on more than 5,000 products, while several more categories of exports to the country will see their duties rise to 20%, up from 10% and 5% previously. The highest tariffs seem designed to cause pain among President Donald Trump’s political base of support, falling on animal products, fruits and vegetables that come from the Midwest.

But tech companies are particularly exposed in the trade war. Indeed, the news sent technology shares spiraling in what venture capitalist (and former TechCrunch co-editor-in-chief) Alexia Bonatsos called the “Tech Red Wedding.”

Rising tariffs will make tech products from Apple and other American tech companies more expensive to manufacture, which will likely push hardware makers to raise prices at home, while duties on finished goods headed to China could make them prohibitively expensive for local buyers there.

More expensive consumer products also mean less money to spend on non-essential items, which could mean more frugal behavior from consumers and less spending in the on-demand economy. It could also cause a pull-back in advertising as companies retrench and cut spending in areas that are considered to be non-core.

All of that could leave tech stocks exposed — beyond algorithms just dumping holdings and taking profits in what looks to be a prolonged market downturn.

The trade war, which had already taken a toll on Uber’s initial public offering, took another bite out of the company’s (short-term) stock market performance today.

Uber was far from the only tech stock seeing red. Shares of Amazon were down 3.56 percent, Alphabet was down 2.66 percent, and Apple fell 5.81 percent. Meanwhile, Facebook shares fell 3.61 percent and Netflix tumbled over 4 percent on the day.

Things may look up for some tech companies again, but they’re unlikely to receive the kind of bailouts or subsidies that the President is offering to American farmers hit by the economic battle with China. Unless Congress can get stalled negotiations around an infrastructure package back on track (something that seems less and less likely as the 2020 elections start to cast their shadow over the business of governing), there’s little hope for any government assistance that could cushion the blow.

“Our view is this could escalate for at least a matter of weeks, if not months, and it’s really to get the two back to the negotiating table and finish the deal, is probably going to require more pain in the markets…Really the only question is if we need a 5%, 10% or bigger market correction,” Ethan Harris, head of global economics at Bank of America Merrill Lynch, told CNBC.

Another day, another U.S. company forced to divest of Chinese investors

Foreign investment scrutiny continues to creep into the startup world via a once-obscure U.S. government agency whose new tools and shifted focus stand to impact young, high-growth companies in huge ways. The Committee on Foreign Investment in the U.S., or CFIUS, recently made waves when it forced Chinese investors in two American companies to divest because of national security concerns.

There is much to learn from these developments about how government concerns over foreign investment will affect startups and investors going forward.

It is important to understand how we got here. CFIUS has long had the authority to review investments for national security concerns when the investment delivers “control” of a U.S. entity to a foreign entity — and control is defined broadly to mean the ability to determine important matters of the business. CFIUS is the body that rejected Broadcom’s proposed acquisition of Qualcomm, to name one well-known example.

The Treasury Department-led body can tap a few powers if it has concerns about an investment, such as blocking it outright, requiring mitigation measures, or—as we saw recently—forcing a fire sale of assets long after a deal is complete.

In the last few weeks, CFIUS has forced Chinese investors to divest from PatientsLikeMe, a healthcare startup that claims to have millions of data points about diseases, and Grindr, the LGBTQ dating app that collects personal data.

Historically, CFIUS’s focus has been on things like ports, computer systems, and real estate adjacent to military bases, but in recent years its emphasis has included data as a national security threat. The Grindr and PatientsLikeMe actions underscore that CFIUS is more focused than ever on how data can pose a security threat.

For example, the U.S. government’s move against Grindr was reportedly motivated by concerns the Chinese government could blackmail individuals with security clearances or its location data could help unmask intelligence agents. These developments make CFIUS highly relevant to tech and healthcare startups, which frequently hold valuable data about customers and users.

Last year, Congress expanded CFIUS’s jurisdiction and gave it new tools to scrutinize even minority, non-controlling investments into critical technology companies or those with sensitive personal data of U.S. citizens if the investor receives certain rights, like a board seat. These might be direct investments into startups by a foreign corporation or individual, or indirect investments into a venture fund by institutional investors like foreign pensions, endowments, or family offices.

Many aspects of the new law have been partially implemented through a pilot program that is impacting foreign investments into venture funds and direct investments into startups. One piece of the law that has not been implemented through the pilot program is the authority of CFIUS to scrutinize certain non-controlling investments into companies that maintain or collect “sensitive personal data of United States citizens that may be exploited in a manner that threatens national security.”

This piece is likely to go into effect in early 2020.

Keep in mind that in the cases of Grindr and PatientsLikeMe, the government relied on its preexisting authority to police investments that delivered control to a foreign person. Due to CFIUS reform, we are likely to see it similarly scrutinize minority, non-controlling investments into companies with sensitive personal data once the authorities are fully in force. Now is the time for investors and startups to go to school on recent cases to understand what is at stake.

Three lessons stand out from the Grindr and PatientsLikeMe actions.

First, CFIUS’s focus has evolved over the years to include control over data-rich companies. That is a trend that is likely to pick up considerably now that Congress has directed the agency to examine some of these deals, even when the investment does not give control to a foreign person.

Second, in both the Grindr and PatientsLikeMe cases, reporting indicates that neither company filed with CFIUS in advance of the transaction, thereby opening both companies up to the deals being unwound. Once CFIUS’s focus on sensitive data expands to non-controlling investments, we can assume CFIUS will not be shy about forcing divestiture for venture-style investments if the parties did not file and get approval for the transaction in advance.

Finally, it is important to understand that while recent newsworthy cases involved China, CFIUS’s jurisdiction applies on a global basis, so its data concerns may port over to investments from other countries as well. The National Venture Capital Association, where I work, is urging Treasury to use authority it has in the CFIUS reform bill to not apply the expansion to non-controlling investments from friendly countries. This makes perfect sense, since the impetus for CFIUS expansion was largely China, and narrowing the scope of foreign actors will help CFIUS focus on true threats. However, as long as the pilot rules are in effect—and perhaps longer—the full suite of CFIUS’s authorities applies whether you are from China, Canada, or Chile.

The one constant of the enhanced foreign investment scrutiny we have seen of late is that it is always shifting. Investors, entrepreneurs, and companies must be on their toes going forward to understand how to raise and deploy capital in innovative American companies.

Nancy Pelosi warns tech companies that Section 230 is ‘in jeopardy’

In a new interview with Recode, House Speaker Nancy Pelosi made some notable comments on what by all accounts is the most important law underpinning the modern internet as we know it.

Section 230 is as short as it is potent, so it’s worth getting familiar with. It states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When asked about Section 230, Pelosi referred to the law as a “gift” to tech companies that have leaned heavily on the law to grow their business. That provision, providing tech platforms legal cover for content created by their users, is what allowed services like Facebook, YouTube and many others to swell into the massive companies they are today.

Pelosi continued:

“It is a gift to them and I don’t think that they are treating it with the respect that they should, and so I think that that could be a question mark and in jeopardy… I do think that for the privilege of 230, there has to be a bigger sense of responsibility on it. And it is not out of the question that that could be removed.”

Expect to hear a lot more about Section 230. In recent months, a handful of Republicans in Congress have taken aim at the law. Section 230 is what’s between the lines in Devin Nunes’ recent lawsuit accusing critics of defaming him on Twitter. It’s also the extremely consequential subtext beneath conservative criticism that Twitter, Facebook and Google do not run “neutral” platforms.

While the idea of stripping away Section 230 is by no means synonymous with broader efforts to regulate big tech, it is the nuclear option. And when tech’s most massive companies behave badly, it’s a reminder to some of them that their very existences hinge on 26 words that Congress giveth and Congress can taketh away.

Whatever the political motivations, imperiling Section 230 is a fearsome cudgel against even tech’s most seemingly untouchable companies. While it’s not clear what some potentially misguided lawmakers would stand to gain by dismantling the law, Pelosi’s comments are a reminder that tech’s biggest companies and users alike have everything to lose.

Net neutrality-restoring Save the Internet Act passes House, moving on to Senate

A bill intended to restore 2015’s net neutrality rules has passed in the House of Representatives 232-190, and will soon be under consideration in the Senate. The ‘Save the Internet Act’ may be doomed to an eventual veto, but its broad support among voters and the relatively bipartisan push in Congress make it an important one to follow regardless.

The act was introduced in March and does little more than re-establish the FCC’s 2015 rules with congressional approval, removing the ones that replaced them in late 2017.

The bill was brought to the floor and debated yesterday by its many co-sponsors, including House Speaker Nancy Pelosi (D-CA), though it was primarily Rep. Doyle (D-PA) who defended the bill against the crusty arguments of its detractors and gamely accepted a handful of amendments that did not affect the meat of the bill.

In the first amendment, Rep. Burgess (R-TX) asked that the Government Accountability Office issue a report on the potential effect of edge providers on internet freedoms. As we’ve discussed many times before, this red herring argument has to do with an entirely different industry and domain of regulation, which can and should be looked at — in a bill or investigation of its own. But the report is nonbinding and nonpartisan, so it was accepted.

The second, much more ridiculous amendment requires the FCC to list the 700 parts of the Communications Act that the 2015 rules do not invoke. In fact, these forbearances are comprehensively detailed in the rules themselves, which have been public and extensively documented and analyzed for years.

Responding to this request, Doyle was in good humor: “Importantly, this wasn’t an issue at all when these rules were in place for nearly 3 years,” he said. “I’m amazed you didn’t have the list already, Greg, that’s your good friend over there [i.e. FCC Chairman Ajit Pai] and I’m sure a quick phone call would have done it.”

A third amendment, from Rep. Waters (D-CA), commissioned a second report from the GAO, “on the importance of net neutrality and what access to the internet means to vulnerable communities” such as the poor, disabled, elderly, and ethnic minorities. This was accepted without argument, as you can imagine.

A fourth amendment commissioned a third report from the GAO on the benefit and necessity of offering standalone broadband, as opposed to having it bundled with cable, landlines, and other services. This too was accepted without (meaningful) argument.

A fifth requires the FCC to tell Congress what fines it has imposed on ISPs as well as what it actually collected. This information isn’t top secret or anything, so this requirement seems redundant, but the more documentation the better.

The sixth amendment requires the FCC to issue a report on how it would improve the Form 477 data, the self-reported information from internet providers about their coverage and broadband offerings. It is continually found to be more than a little inaccurate, and the FCC itself has been looking into making it better as well.

Are you beginning to see how a two-page law like this one grows to many times its size? At least, however, these amendments are non-destructive and may even prove salutary to future efforts to improve broadband and regulation.

In any event, the bill passed today 232-190. Its proponents cheered its success:

“The American people are rightfully demanding that critical net neutrality protections be restored in law, and I’m hopeful this strong House vote helps build momentum for action in the Senate,” said Rep. Pallone (D-NJ) in a statement.

“Today, the House took a firm stand on behalf of internet users across the country,” wrote Mozilla. “We hope that the Senate will recognize the need for strong net neutrality protections and pass this legislation into law. In the meantime, we will continue to fight in the courts as the DC Circuit considers Mozilla v. FCC, our effort to restore essential net neutrality protections for consumers through litigation.”

FCC Chairman Ajit Pai denounced the “so-called” Save the Internet Act: “This legislation is a big-government solution in search of a problem. The Internet is free and open, while faster broadband is being deployed across America. This bill should not and will not become law.”

But his colleague, Commissioner Jessica Rosenworcel, tells a different story: “Their legislative effort gets right what the FCC got so wrong. When the agency rolled back net neutrality protections, it gave broadband providers the power to block websites, throttle services, and censor online content. This decision put the FCC on the wrong side of history, the wrong side of the law, and the wrong side of the American public.”

The bill may be, as Senate Majority Leader Mitch McConnell has called it, “dead on arrival” in the Senate, and the White House has implied its own hostility. But the legislation is popular and the issue a highly visible one that voters will be considering in the 2020 elections. So there may be those in the Senate who will cross the aisle. We’ll know when it is set forth for consideration there.

For true transparency around political advertising, U.S. tech companies must collaborate

In October 2017, online giants Twitter, Facebook, and Google announced plans to voluntarily increase transparency for political advertising on their platforms. The three plans to tackle disinformation had roughly the same structure: funder disclaimers on political ads, stricter verification measures to prevent foreign entities from posting such ads, and varying formats of ad archives.

All three announcements came just before representatives from the companies were due to testify before Congress about Russian interference in the 2016 election and reflected fears of forthcoming regulation, as well as concessions to consumer pressure.

Since then, the companies have continued to attempt to address the issue of digital deception occurring on their platforms.

Google recently released a white paper detailing how it would deal with online disinformation campaigns across many of its products. In the run-up to the 2018 midterm elections, Facebook announced it would ban false information about voting. These efforts reflect an awareness that the public is concerned about the use of social media to manipulate their votes and is pushing for tech companies to actively address the issue.

These efforts at self-regulation are a step in the right direction — but they fall far short of providing the true transparency necessary to inform voters about who is trying to influence them. The lack of consistency in disclosure across platforms, indecision over issue ads, and inaction on wider digital deception issues including fake and automated accounts, harmful micro-targeting, and the exposure of user data are major defects of this self-governing model.

For example, individuals looking at Facebook’s ad transparency platform can currently see information about who viewed an ad that is not available on Google’s platform. However, on Google the same user can see top keywords for advertisements, or search political ads by district, which cannot be done on Facebook.

With this inconsistency in disclosure across platforms, users are not able to get a full picture of who is trying to influence them, which prevents them from being able to cast an informed vote.

[Photo caption: One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018. Advocacy group Avaaz is calling attention to what the group says are hundreds of millions of fake accounts still spreading disinformation on Facebook. Photo: SAUL LOEB/AFP/Getty Images]

Issue ads pose an additional problem. These are public communications that do not reference particular candidates, focusing instead on hot-button political issues such as gun control or immigration. Issue ads cannot currently be regulated in the same way that political communications that refer to a candidate can due to the Supreme Court’s interpretation of the First Amendment.

Moreover, as Bruce Falck, Twitter’s General Manager for Revenue Product, pointed out in a blog post addressing the platform’s impending transparency efforts, “there is currently no clear industry definition for issue-based ads.”

In the same post, Falck indicated a potential solution, writing, “We will work with our peer companies, other industry leaders, policy makers and ad partners to clearly define [issue ads] quickly and integrate them into the new approach mentioned above.” This post was written 18 months ago, but no definition has been established—possibly because tech companies are not collaborating to systematically confront digital deception.

This lack of collaboration damages the public’s right to be politically informed. If representatives from the platforms where digital deception occurs most often — Facebook, Twitter, and Google — were to form an independent advisory group that met regularly and worked with regulators and civil society to discuss solutions to digital deception, transparency and disclosure across the platforms would be more complete.

The platforms could look to the example set by the nuclear power industry, where national and international nonprofit advisory bodies facilitate cooperation among utilities to ensure nuclear safety. The World Association of Nuclear Operators (WANO) connects all 115 nuclear power plant operators in 34 countries in order to facilitate the exchange of experience and expertise. The Institute of Nuclear Power Operations (INPO) in the U.S. functions in a similar fashion but is able to institute tighter sanctions since it operates at the national level.

Similar to WANO and INPO, an independent advisory group for the technology sector could develop a consistent set of disclosure guidelines — based on policy regulations put in place by government — that would apply evenly across all social media platforms and search engines.

These guidelines would hopefully include a unified database of ads purchased by political groups as well as clear and uniform disclaimers of the source of each ad, how much it cost, and who it targeted. Beyond paid ads, the industry group could develop guidelines to increase transparency for all communications by organized political entities, address computational propaganda, and determine how best to safeguard users’ data.
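To make that idea concrete, here is a minimal sketch in TypeScript of what a uniform disclosure record in such a shared database might look like. Every field name below is a hypothetical illustration, not something drawn from any platform’s API or any actual proposal.

```typescript
// Hypothetical schema for a cross-platform political ad disclosure record.
interface PoliticalAdDisclosure {
  platform: "facebook" | "twitter" | "google"; // where the ad ran
  adId: string;                                // platform-assigned identifier
  funder: string;                              // disclosed source of the ad's funding
  spendUsd: { min: number; max: number };      // reported cost range
  targeting: {
    locations: string[];                       // e.g. congressional districts
    ageRange?: [number, number];               // optional demographic bounds
    interests?: string[];                      // optional interest categories
  };
  impressions: { min: number; max: number };   // how many users saw the ad
  isIssueAd: boolean;                          // flagged under a shared definition
  firstShown: string;                          // ISO 8601 date
  lastShown: string;                           // ISO 8601 date
}
```

A shared record format along these lines would let a voter ask “who targeted my district, and what did they spend?” once, instead of stitching together three incompatible ad archives.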

Additionally, if the companies were working together, they could set up a consistent definition of what an issue ad is and determine what transparency guidelines should apply. This is particularly relevant given policymakers’ limited authority to regulate issue ads.

Importantly, working together regularly would allow platforms to identify technological advances that might catch policymakers by surprise. Deepfakes — fabricated images, audio, or video that purport to be authentic — represent one area where technology companies will almost certainly be ahead of lawmakers’ expertise. If digital corporations were working together as well as cooperating with government agencies, they could flag new technologies like these in advance and help regulators determine the best way to maintain transparency in the face of a rapidly changing technological landscape.

Would such collaboration ever happen? The extensive aversion to regulation shown by these companies suggests a worrying preference for appeasing advertisers at the expense of the American public.

However, in August 2018, in advance of the midterm elections, representatives from large tech firms did meet to discuss countering manipulation on their platforms. This followed a meeting in May with U.S. intelligence officials, also to discuss the midterm elections. Additionally, Facebook, Microsoft, Twitter, and YouTube formed the Global Internet Forum to Counter Terrorism to disrupt terrorists’ ability to promote extremist viewpoints on those platforms. This shows that when they are motivated, technology companies can work together.

It’s time for Facebook, Twitter, and Google to put their obligation to the public interest first and work together to systematically address the threat to democracy posed by digital deception.

‘Hateful comments’ result in YouTube disabling chat during a livestreamed hearing on hate

At today’s House Judiciary hearing addressing “Hate Crimes and the Rise of White Nationalism,” hate appears to have prevailed.

As the hearing’s livestream aired on the House Judiciary’s YouTube channel, comments in the live chat accompanying the stream were so inflammatory that YouTube disabled the chat feature mid-hearing. Many of those comments were anti-Semitic in nature.

Unsurprisingly, the hearing struggled to balance its crowded witness list, which included Facebook public policy director Neil Potts and Google public policy lead Alexandria Walden. Potts emphasized that Facebook recently righted its course with regard to white nationalism, though this shift is still in its earliest days.

“Facebook rejects not just hate speech, but all hateful ideologies,” Potts said in the hearing. “Our rules have always been clear that white supremacists are not allowed on our platform under any circumstances.”

The hearing was probably ill-fated from the start. As Democrats attempt to grapple with the real-world effects of white supremacist violence, voices on the far right — recently amplified by figures in Congress — denounce that conversation outright. When political parties can’t even agree on a hearing’s topic, it usually guarantees a performative rather than productive few hours, and in spite of some of its serious witnesses, this hearing was no exception.

Hours after the hearing, anti-Semitic comments continue to pour into the House Judiciary YouTube page, many focused on Rep. Jerry Nadler, the committee’s chair. “White nationalism isn’t a crime its [sic] a human right,” one user declared. “(((They))) are taking over our government,” another wrote, alluding to widespread anti-Semitic conspiracy theories. Many more defended white nationalism as a form of pride rather than a hate-based belief system tied to real-world violence.

“… Hate speech and violent extremism have no place on YouTube,” YouTube’s Walden said during the hearing. “We believe we have developed a responsible approach to address the evolving and complex issues that manifest on our platform.”

Proposed bill would forbid big tech platforms from using dark pattern design

A new piece of bipartisan legislation aims to protect people from one of the sketchiest practices that tech companies employ to subtly influence user behavior. Known as “dark patterns,” this dodgy design strategy often pushes users toward giving up their privacy unwittingly and allowing a company deeper access to their personal data.
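For a sense of how a dark pattern works in practice, consider this minimal, hypothetical sketch in TypeScript of a consent prompt; the names and options are invented for illustration and do not describe any real product.

```typescript
// Hypothetical consent prompt showing two classic dark patterns:
// a pre-set data-sharing flag and an asymmetric pair of choices.
interface ConsentPromptOptions {
  message: string;
  sharePersonalData: boolean; // pre-setting this to true makes sharing the default
  acceptLabel: string;        // the prominent, friendly call to action
  declineLabel: string;       // the buried, guilt-laden alternative
}

const darkPatternPrompt: ConsentPromptOptions = {
  message: "Help us personalize your experience!",
  sharePersonalData: true, // pre-checked: the user must notice it to opt out
  acceptLabel: "Continue",
  declineLabel: "No thanks, I prefer a worse experience", // "confirmshaming"
};
```

The user who clicks the big “Continue” button never makes an affirmative choice about data sharing at all, which is precisely the kind of impaired decision-making the bill targets.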

To fittingly celebrate the one-year anniversary of Mark Zuckerberg’s appearance before Congress, Senators Mark Warner (D-VA) and Deb Fischer (R-NE) have proposed the Deceptive Experiences To Online Users Reduction (DETOUR) Act. While the acronym is a bit of a stretch, the bill would forbid online platforms with more than 100 million users from “relying on user interfaces that intentionally impair user autonomy, decision-making, or choice.”

“Any privacy policy involving consent is weakened by the presence of dark patterns,” Senator Fischer said of the proposed bipartisan bill. “These manipulative user interfaces intentionally limit understanding and undermine consumer choice.”

While this particular piece of legislation might not go on to generate much buzz in Congress, it does point toward some regulatory themes that we’ll likely hear more about as lawmakers build support for regulating big tech.

The bill would create a standards body to coordinate with the FTC on user design best practices for large online platforms. That entity would also work with platforms to outline what sort of design choices infringe on user rights, with the FTC functioning as a “regulatory backstop.”

Whether the bill gets anywhere or not, the FTC itself is probably best suited to take on the issue of dark pattern design, issuing its own guidelines and fines for violating them. Last year, after a Norwegian consumer advocacy group published a paper detailing how tech companies abuse dark pattern design, a coalition of eight U.S. watchdog groups called on the FTC to do just that.

Beyond eradicating dark pattern design, the bill also proposes prohibiting user interface designs that cultivate “compulsive usage” in children under the age of 13 as well as disallowing online platforms from conducting “behavioral experiments” without informed user consent. Under the guidelines set out by the bill, big online tech companies would have to organize their own Institutional Review Boards. These groups, more commonly called IRBs, provide powerful administrative oversight in any scientific research that uses human subjects.

“For years, social media platforms have been relying on all sorts of tricks and tools to convince users to hand over their personal data without really understanding what they are consenting to,” Senator Warner said of the proposed legislation. “Our goal is simple: to instill a little transparency in what remains a very opaque market and ensure that consumers are able to make more informed choices about how and when to share their personal information.”


Google’s anti-trans controversy is the latest case of big tech overcorrecting to the right

Google just smoothed over one spat with the LGBT community, but it’s already well into the next one.

Last week, in an effort to monitor the ethical development of artificial intelligence and presumably to assuage public concern, Google launched an eight-person advisory group dedicated to the task.

Controversially, Google included Heritage Foundation President Kay Cole James among the technologists and domain specialists on its newly minted Advanced Technology External Advisory Council (ATEAC).

The inclusion of leadership from the Heritage Foundation, a hyper-conservative think tank with vehemently anti-LGBT views and a deep track record of advocating for climate change denialism in the service of the oil and gas industry, would seem an odd fit for an AI council, if not a downright puzzling one.

While the group’s less scientific views alone would seem to fly in the face of much of Google’s cutting-edge, scientifically grounded work, the inclusion of a figure openly dedicated to fighting against the rights of the transgender community is causing the company’s latest culture conflagration.

A group calling itself Googlers Against Transphobia in a petition denounced the company’s decision to include James:

In selecting James, Google is making clear that its version of “ethics” values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants. Such a position directly contravenes Google’s stated values. Many have emphasized this publicly, and a professor appointed to ATEAC has already resigned in the wake of the controversy.

Following the announcement, the person who took credit for appointing James stood by the decision, saying that James was on the council to ensure “diversity of thought.” This is a weaponization of the language of diversity. By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.

The group has called on Google to remove James from the council, arguing that trans people are disproportionately vulnerable to technologies like AI, a problem compounded by the perspective of an advisor incapable of seeing trans people as people — one who casually called transgender women “biological males” just a few weeks ago. At the time of writing, 1,437 Googlers had signed the petition. When reached for comment about the Heritage Foundation’s presence on the ATEAC, Google declined to provide insight on the choice.

Beyond James, the ATEAC includes a behavioral economist, a mathematician, a natural language researcher, the CEO of a drone company focused on energy and defense (some have objected to this as well), an AI ethics specialist, a digital ethicist and William Joseph Burns, a former diplomat and current president of the Carnegie Endowment for International Peace, a formally nonpartisan though practically left-leaning think tank. The decision to loop in James is presumably an effort to counterbalance Burns, but the man’s bipartisan reputation and his observable failure to be as far left as James is right undermine that particular argument.

Google’s choice to honor the Heritage Foundation by seeking its counsel on one of the sector’s most high-stakes issues epitomizes big tech’s ongoing fear of looking out of step with the right. To that end, companies like Google, Twitter and Facebook have often over-corrected to the right and continue to do so.

It took Facebook two years to realize that white nationalism is just an expedient synonym for white supremacist values rather than a harmless form of pride akin to American pride or Basque separatism. Last month, Jack Dorsey appeared on the Joe Rogan Experience, a clearinghouse where fringe conspiracy theorists and far-right hate mongers can launder their views without the threat of critical thinking or a proper interrogator.

Meanwhile, Apple takes an admirable leadership stance on issues of identity, particularly around LGBTQ issues, but its CEO Tim Cook is still happy to take a seat next to President Trump, whose administration has taken aggressive steps to limit the rights of transgender Americans again and again. Surely the fact that Trump invited the company to repatriate the 94 percent of its total cash holdings previously stashed outside the United States at a deep discount had nothing to do with Cook’s ongoing courtship.

Unfortunately, tech’s underlying fear of being “found out” as liberal and its obsession with a misguided notion of ideological balance is enough for many tech companies to court extreme viewpoints that don’t fall anywhere near the middle. More unfortunate yet, disingenuous grifters wait in the wings to devour every scrap of validation that falls their way, ready to clamber up these companies’ own platforms with their outsized soapboxes, shouting until the Overton window inches their way.

It’s increasingly clear that anything goes in Silicon Valley’s craven attempts to placate opportunists on the right — both within Congress and without — so long as that corporate cognitive dissonance keeps the lobbying wheels greased.

Why a top antitrust lawmaker thinks it’s time to break up Facebook

When the newly minted chair of a congressional antitrust committee calls you out, it’s probably time to start worrying.

In an op-ed for the New York Times, Rhode Island Representative David N. Cicilline has called on the Federal Trade Commission to look into Facebook’s behavior for potential antitrust violations, citing TechCrunch’s own reporting that the company collected data on teens through a secret paid program among many other scandals.

“After each misdeed becomes public, Facebook alternates between denial, hollow promises and apology campaigns,” Cicilline wrote. “But nothing changes. That’s why, as chairman of the House Subcommittee on Antitrust, Commercial and Administrative Law, I am calling for an investigation into whether Facebook’s conduct has violated antitrust laws.”

Cicilline’s op-ed intends to put pressure on the FTC, a useful regulatory arm that he accuses of “facing a massive credibility crisis” due to its inaction to date against Facebook. And while the FTC is the focus of Cicilline’s call to action, the op-ed provides an insightful glimpse into what Facebook actions are salient for the lawmaker that Bloomberg called “the most powerful person in tech” when he became the ranking member of the House Judiciary’s Subcommittee on Antitrust, Commercial and Administrative Law this year.

That committee, now led by Democrats increasingly interested in breaking up big tech as a platform pillar, is a potentially powerful mechanism for antitrust action against the monopolistic power brokers that dominate the Silicon Valley we’ve come to know.

“For years, privacy advocates have alerted the commission that Facebook was likely violating its commitments under the agreement. Not only did the commission fail to enforce its order, but by failing to block Facebook’s acquisition of WhatsApp and Instagram, it enabled Facebook to extend its dominance,” Cicilline wrote, noting a fine must be multiple billions of dollars to impact the massive company at all. As we reported last month, the FTC is reportedly looking at a potentially multi-billion dollar fine but such a costly reprimand has yet to materialize.

The lawmaker also cites Facebook’s “predatory acquisition strategy” in which it buys up potential competitors before they can pose a threat, stifling innovation in the process. Cicilline also views the company’s decision to restrict API access for competing products as “evidence of anticompetitive conduct” from the social giant.

Cicilline also takes a familiar cynical view of Mark Zuckerberg’s recent announcement that Facebook would weave its products together in a move toward private messaging, calling it “a dangerous power grab to head off antitrust action.” That perspective gives us a clear glimpse of what lies ahead for Facebook as the antitrust headwinds pick up around the 2020 presidential race.

“American antitrust agencies have not pursued a significant monopoly case in more than two decades, even as corporate concentration and monopoly power have reached historic levels,” Cicilline wrote.

“It’s clear that serious enforcement is long overdue.”

Bipartisan bill proposes oversight for commercial facial recognition

On Thursday, Hawaii Senator Brian Schatz and Missouri Senator Roy Blunt introduced a bill designed to offer legislative oversight for commercial applications of facial recognition technology. Known as the Commercial Facial Recognition Privacy Act, the bill would obligate companies to inform consumers about any use of facial recognition and would bar them from sharing facial recognition data with third parties without first obtaining explicit user consent.
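To make the consent requirement concrete, here is a minimal sketch in TypeScript of what such a gate might look like in practice. The types and function names are invented for illustration; nothing here comes from the bill’s actual text or any real API.

```typescript
// Hypothetical guard expressing the bill's consent requirement in code form.
type FacialTemplate = { userId: string; embedding: number[] };

function shareWithThirdParty(
  data: FacialTemplate,
  hasExplicitConsent: boolean,
  send: (d: FacialTemplate) => void,
): void {
  if (!hasExplicitConsent) {
    // Under the proposed rules, sharing stops here until the user opts in.
    throw new Error(`no explicit consent on file for user ${data.userId}`);
  }
  send(data); // consent verified, so the data may be passed along
}
```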

“Consumers are increasingly concerned about how their data is being collected and used, including data collected through facial recognition technology,” Senator Blunt said of the bill. “That’s why we need guardrails to ensure that, as this technology continues to develop, it is implemented responsibly.”

Microsoft endorsed the bipartisan bill, which dovetails with some of the company’s own ideas about how facial recognition tech might be regulated. “We believe it’s important for governments in 2019 to start adopting laws to regulate this technology,” Microsoft President Brad Smith wrote in December. “The facial recognition genie, so to speak, is just emerging from the bottle.”

As The Hill points out, the proposed legislation does not include some of the same provisions around the use of facial recognition by law enforcement that Microsoft has mentioned previously, including the requirement of a court order to limit “ongoing government surveillance of specified individuals.” The bill instead focuses on risks specific to the commercial side of facial recognition tech. Other facial recognition legislation has been making the rounds at a state level in Microsoft’s home state this year with buy-in from the company.

“Our faces are our identities. They’re personal. So the responsibility is on companies to ask people for their permission before they track and analyze their faces,” Senator Schatz said of the proposed legislation. “Our bill makes sure that people are given the information and – more importantly – the control over how their data is shared with companies using facial recognition technology.”

Whether the bill goes anywhere or not, proposed legislation does provide insight into the regulatory trends bouncing around Congress at any given moment. As Microsoft’s involvement makes clear, facial recognition is another area of intense interest in which companies may seek to shape legislation before it becomes law.