Equity Shot: The DoJ, Google and what the suit could mean for startups

Hello and welcome back to Equity, TechCrunch’s venture-capital-focused podcast where we unpack the numbers behind the headlines.

It’s a big day in tech because the U.S. federal government is going after Google on anti-competitive grounds. Sure, the timing appears crassly political and the case is not picking up huge plaudits thus far for its air-tightness, but that doesn’t mean we can ignore it.

So Danny and I got on the horn to chat it up for about 10 minutes to fill you in. For reference, you can read the full filing here, in case you want to get your nails in. It’s not a complicated read. Get in there.

As a pair, we dug into what stood out from the suit, weighed the historical context and noodled at the end about what the whole situation could mean for startups. It’s not all good news, but adding lots of competitive space to the market would be a net good for upstart tech companies in the long run.

And consumers. Competition is good.

You can read TechCrunch’s early coverage of the suit here, and our look at the market’s reaction here. Let’s go!

Equity drops every Monday at 7:00 a.m. PDT and Thursday afternoon as fast as we can get it out, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

Trump says ‘nobody gets hacked’ but forgot his hotel chain was hacked — twice

According to President Trump speaking at a campaign event in Tucson, Arizona, on Monday, “nobody gets hacked.” You don’t need someone who covers security day in and day out to call bullshit on this one.

“Nobody gets hacked. To get hacked you need somebody with 197 IQ and he needs about 15 percent of your password,” Trump said, referencing the recent suspension of C-SPAN political editor Steve Scully, who admitted falsely claiming his Twitter account was hacked this week after sending a tweet to former White House communications director Anthony Scaramucci.

There’s a lot to unpack in those two-dozen words. But aside from the fact that not all hackers are male (and it’s sexist to assume that), and glossing over the two entirely contrasting sentences, Trump also neglected to mention that his hotel chain was hacked twice — once over a year-long period between 2014 and 2015 and again between 2016 and 2017.

We know this because the Trump business was legally required to file notice with state regulators after each breach, which they did.

In both incidents, customers of Trump’s hotels had their credit card data stolen. The second breach was blamed on a third-party booking system, called Sabre, which also exposed guest names, emails, phone numbers and more.

The disclosures didn’t say how many people were affected. Suffice it to say, it wasn’t “nobody.”

A spokesperson for the Trump campaign did not return a request for comment.

It’s easy to ignore what could be considered a throwaway line: To say that “nobody gets hacked” might seem harmless on the face of it, but to claim so is dangerous. It’s as bad as saying something is “unhackable” or “hack-proof.” Ask anyone who works in cybersecurity and they’ll tell you that no person or company can ever make such assurances.

Absolute security doesn’t exist. But for those who don’t know any different, it’s an excuse not to think about their own security. Yes, you should use a password manager. Absolutely turn on two-factor authentication whenever you can. Do the basics, because hackers don’t need an IQ score of 197 to break into your accounts. All they need is for you to lower your guard.
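
For the curious, here is a minimal sketch of how the time-based one-time passwords behind most authenticator apps work, using the open-source pyotp Python library purely as an illustration; the variable names and flow are our own simplification, not any particular service’s implementation.

```python
# Minimal TOTP two-factor sketch using pyotp (pip install pyotp).
# Illustrative only; real services add rate limiting, backup codes, etc.
import pyotp

# Enrollment: the service generates a shared secret once and shows it
# to the user, usually as a QR code scanned by an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the app derives a 6-digit code from the secret plus the current
# time. It rotates every 30 seconds, so a stolen password alone fails.
code = totp.now()
print("Current one-time code:", code)

# The server, holding the same secret, verifies the submitted code.
assert totp.verify(code)
```

The point is that the second factor is derived from a secret your phone holds and the clock, so a phished or reused password gets an attacker nowhere on its own.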

If “nobody gets hacked” as Trump claims, it makes you wonder whatever happened to the 400-pound hacker the president mentioned during his first White House run.

Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance Facebook, certainly a social media company, received a record $5 billion fine last year for failure to comply with rules set by the FTC. But not because the company violated any social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them. Not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that when violated result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California, with its new privacy rules, and Illinois, with its Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments like this one, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are much more likely also to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules, or rather an augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex a topic to address here in the detail it deserves, and it also reaches beyond the question of who exactly regulates social media. Europe’s role in that question, one of speaking slowly and carrying a big stick, if you will, promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped are well beyond the scope of this article.)

Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies involving, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we have only the faintest clue to — will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

Lockheed picks Relativity’s 3D-printed rocket for experimental NASA mission

Relativity Space has bagged its first public government contract, and with a major defense contractor at that. The launch startup’s 3D-printed rockets are a great match for a particularly complex mission Lockheed is undertaking for NASA’s Tipping Point program.

The mission is a test of a dozen different cryogenic fluid management systems, including liquid hydrogen, which is a very difficult substance to work with indeed. The tests will take place on a single craft in orbit, which means it will be a particularly complicated one to design and accommodate.

The payload itself and its cryogenic systems will be designed and built by Lockheed and their partners at NASA, of course, but the company will need to work closely with its launch provider during development and especially in the leadup to the actual launch.

Relativity founder and CEO Tim Ellis explained that the company’s approach of 3D printing the entire rocket top to bottom is especially well suited for this.

“We’re building a custom payload fairing that has specific payload loading interfaces they need, custom fittings and adapters,” he said. “It still needs to be smooth, of course — to a lay person it will look like a normal rocket,” he added.

Every fairing (the external part of the launch vehicle covering the payload) is necessarily custom, but this one much more so. The delicacy of having a dozen cryogenic operations being loaded up and tested until moments before launch necessitates a number of modifications that, in other days, would result in a massive increase in manufacturing complexity.

“If you look at the manufacturing tools being used today, they’re not much different from the last 60 years,” Ellis explained. “It’s fixed tooling, giant machines that look impressive but only make one shape or one object that’s been designed by hand. And it’ll take 12-24 months to make it.”

Not so with Relativity.

“With our 3D printed approach we can print the entire fairing in under 30 days,” Ellis said. “It’s also software defined, so we can just change the file to change the dimensions and shape. For this particular object we have some custom features that we’re able to do more quickly and adapt. Even though the mission is three years out, there will always be last minute changes as you get closer to launch, and we can accommodate that. Otherwise you’d have to lock in the design now.”
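
Relativity hasn’t published the internals of its design pipeline, so treat the following as a toy sketch of what “software-defined” manufacturing means in practice: the part is expressed as a function of parameters, and a last-minute dimension change is an argument change rather than new tooling. The function and all numbers below are invented for illustration.

```python
# Toy illustration of "software-defined" geometry: the part is a function
# of parameters, so changing a number regenerates the whole shape. This is
# NOT Relativity's actual toolchain, just a sketch of the idea.
import numpy as np

def fairing_profile(diameter_m: float, barrel_len_m: float,
                    nose_len_m: float, points: int = 200) -> np.ndarray:
    """Return (z, radius) pairs for a cylindrical barrel topped by a
    quarter-ellipse nose cone."""
    r = diameter_m / 2.0
    # Straight barrel section: constant radius.
    z_barrel = np.linspace(0.0, barrel_len_m, points // 2)
    r_barrel = np.full_like(z_barrel, r)
    # Elliptical nose: radius tapers smoothly from r down to 0.
    z_nose = np.linspace(0.0, nose_len_m, points // 2)
    r_nose = r * np.sqrt(1.0 - (z_nose / nose_len_m) ** 2)
    z = np.concatenate([z_barrel, barrel_len_m + z_nose])
    radius = np.concatenate([r_barrel, r_nose])
    return np.column_stack([z, radius])

# A "last-minute change" is just different arguments, not new fixed tooling.
standard = fairing_profile(diameter_m=3.0, barrel_len_m=4.0, nose_len_m=2.5)
custom   = fairing_profile(diameter_m=3.4, barrel_len_m=4.6, nose_len_m=2.5)
```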

Ellis was excited about the opportunity to publicly take on a mission with such a major contractor. These enormous companies field billions of government dollars and take part in many launches, so it’s important to be in their good books, or at least in their rolodexes. A mission like this, complex but comparatively low stakes (compared with a crewed launch or billion-dollar satellite), is a great chance for a company like Relativity to show its capabilities. (Having presold many of its launches already, there’s clearly no lack of interest in the 3D printed launch vehicles, but more is always better.)

The company will be going to space before then, though, if all continues to go according to plan. The first orbital test flight is scheduled for late 2021. “We’re actually printing the launch hardware right now, the last few weeks,” Ellis mentioned.

The NASA Tipping Point program that is funding Lockheed with an $89.7 million contract for this experiment is one intended to, as its name indicates, help tip promising technologies over the edge into commercial viability. With hundreds of millions awarded yearly to companies pursuing things like lunar hoppers and robotic arms, it’s a bit like the agency’s venture fund.

U.S. charges Russian hackers blamed for Ukraine power outages and the NotPetya ransomware attack

Six Russian intelligence officers accused of launching some of the “world’s most destructive malware” — including an attack that took down the Ukraine power grid in December 2015 and the NotPetya global ransomware attack in 2017 — have been charged by the U.S. Justice Department.

Prosecutors said the group of hackers, who work for the Russian GRU, are behind the “most disruptive and destructive series of computer attacks ever attributed to a single group.”

“No country has weaponized its cyber capabilities as maliciously or irresponsibly as Russia, wantonly causing unprecedented damage to pursue small tactical advantages and to satisfy fits of spite,” said John Demers, U.S. assistant attorney general for national security. “Today the Department has charged these Russian officers with conducting the most disruptive and destructive series of computer attacks ever attributed to a single group, including by unleashing the NotPetya malware. No nation will recapture greatness while behaving in this way.”

The six accused Russian intelligence officers. (Image: FBI/supplied)

In charges laid out Monday, the hackers are accused of developing and launching attacks using the KillDisk and Industroyer (also known as Crash Override) malware to target and disrupt the power supply in Ukraine, which left hundreds of thousands of customers without electricity two days before Christmas. The prosecutors also said the hackers were behind the NotPetya attack, a ransomware attack that spread across the world in 2017, causing billions of dollars in damages.

The hackers are also said to have used Olympic Destroyer, designed to knock out internet connections during the opening ceremony of the 2018 PyeongChang Winter Olympics in South Korea.

Prosecutors also blamed the six hackers for trying to disrupt the 2017 French elections by launching a “hack and leak” operation to discredit the then-presidential frontrunner, Emmanuel Macron, as well as launching targeted spearphishing attacks against the Organization for the Prohibition of Chemical Weapons and the U.K.’s Defense Science and Technology Laboratory, tasked with investigating the use of the Russian nerve agent Novichok in Salisbury, U.K. in 2018, and attacks against targets in Georgia, the former Soviet state.

The alleged hackers — Yuriy Sergeyevich Andrienko, 32; Sergey Vladimirovich Detistov, 35; Pavel Valeryevich Frolov, 28; Anatoliy Sergeyevich Kovalev, 29; Artem Valeryevich Ochichenko, 27; and Petr Nikolayevich Pliskin, 32 — are all charged with seven counts of conspiracy to hack, commit wire fraud, and causing computer damage.

The accused are believed to be in Russia. But the indictment serves as a “name and shame” effort, frequently employed by Justice Department prosecutors in recent years where arrests or extraditions are not likely or possible.

UK’s ICO downgrades British Airways data breach fine to £20M, after originally setting it at £184M

One of the biggest data breaches in UK corporate history has been closed off by regulators not with a bang, but a whimper — as a result of Covid-19. Today the Information Commissioner’s Office, the UK’s data watchdog, announced that it would be fining British Airways £20 million for a data breach in which the personal details of more than 400,000 customers were leaked after BA suffered a two-month cyberattack and lacked adequate security to detect and defend itself against it. It had originally planned to fine BA nearly £184 million, but it reduced the penalty in light of the economic impact that BA (like other airlines) has faced as a result of Covid-19.

The major step down in the fine underscores what kind of an impact the coronavirus pandemic is having on regulations. In some cases, in order to more quickly address issues that potentially impact business growth, we’ve seen regulators try to speed up their responsiveness and even leave behind some previous reservations to green light activities, as in the case of e-scooters.

But in the case of the BA fine, we’re seeing the other side of the Covid-19 impact: regulators are taking a less hard line with penalties on companies that are already struggling. That raises questions of how impactful their decisions are, and what kind of a precedent they are setting for future security and data protection neglect.

Even with the reduced penalty size, the ICO is sticking by its original conclusions:

“People entrusted their personal details to BA and BA failed to take adequate measures to keep those details secure,” said Information Commissioner Elizabeth Denham in a statement. “Their failure to act was unacceptable and affected hundreds of thousands of people, which may have caused some anxiety and distress as a result. That’s why we have issued BA with a £20m fine – our biggest to date. When organisations take poor decisions around people’s personal data, that can have a real impact on people’s lives. The law now gives us the tools to encourage businesses to make better decisions about data, including investing in up-to-date security.”

The fine is the highest-ever leveled by the ICO. But it’s a major step down from the £184 million penalty — 1.5% of BA’s revenues in the 2018 calendar year — that the regulator had originally set last year. That was, of course, before the coronavirus pandemic hit, halting travel globally and bringing many airlines to their knees. The original order went through a process of appeal, which included an assessment of the state of the company in the current market.
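
For a sense of scale, the arithmetic implied by the piece’s own figures is worth a quick pass; note that the GDPR caps fines of this type at 4% of annual worldwide turnover, so even the original £184 million sat well under the ceiling. A rough back-of-the-envelope in Python, with figures rounded:

```python
# Rough arithmetic from the figures cited above. Illustrative only.
original_fine = 184_000_000   # ~£184M, set in 2019
final_fine = 20_000_000       # £20M, after COVID-19 representations
share_of_revenue = 0.015      # the 1.5% figure cited above

# Back out the 2018 turnover the original penalty implies.
implied_turnover = original_fine / share_of_revenue
print(f"Implied 2018 turnover: £{implied_turnover / 1e9:.1f}B")  # ~£12.3B

# GDPR's statutory maximum is 4% of annual worldwide turnover.
gdpr_ceiling = implied_turnover * 0.04
print(f"GDPR maximum (4%): £{gdpr_ceiling / 1e6:.0f}M")          # ~£491M

# The final penalty as a share of that implied turnover.
print(f"Final fine: {final_fine / implied_turnover:.2%} of turnover")  # ~0.16%
```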

“In June 2019 the ICO issued BA with a notice of intent to fine,” the ICO noted in its statement on the reduced fine. “As part of the regulatory process the ICO considered both representations from BA and the economic impact of COVID-19 on their business before setting a final penalty.”

The salient facts of the investigation’s findings remained the same: the ICO had determined that BA had “weaknesses in its security” that could have been prevented with security systems — procedures and software — that were available at the time.

As a result, data from 429,612 customers and staff was leaked, including “names, addresses, payment card numbers and CVV numbers of 244,000 BA customers,” the ICO said, adding that the combined card and CVV numbers of 77,000 customers and card numbers only for 108,000 customers were also believed to be a part of the breach, as well as the usernames and passwords of BA employee and administrator accounts, and the usernames and PINs of up to 612 BA Executive Club accounts (these last two were also not completely verified, it seems).

On top of that, BA never detected the attack, it said: it was notified of the breach by a third party.

The ICO said that its action has been approved by other DPAs in the European Union: this is because the attack happened while the UK was still in the EU, and so the investigation was carried out by the ICO on behalf of the EU authorities, it said.

With ‘absurd’ timing, FCC announces intention to revisit Section 230

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue is the legal protections for platforms when they decide what content to allow and what to block. Some say they are clearly protected by the First Amendment (this is how it is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means that a cavalcade of comments may yet again indicate that the FCC ignores public opinion, experts, and lawmakers alike in its decision to invent or eliminate its roles as it sees fit. Be ready to share your feedback with the FCC, but no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point there may be something of a change in the urgency of this effort to reinterpret the law to the White House’s liking.

Alibaba-affiliated marketplace to leave Taiwan, again

Separated by a strait, Taiwan and mainland China are two different worlds when it comes to the internet. Even mainland tech giants Alibaba and Tencent have had little success entering the island, often running into regulatory hurdles.

Less than a year after Taobao launched on the island through an Alibaba-backed joint venture, the marketplace will cease operations by the end of this year, the platform said in a notice to customers on Thursday.

The decision came two months after the Investment Commission under Taiwan’s Ministry of Economic Affairs ruled that Taobao Taiwan is a Chinese-controlled company and required the firm to either leave or re-register under a different corporate structure. Under Taiwanese law, Chinese investors must obtain permission from the government to directly or indirectly acquire a stake of more than 30% in any Taiwanese company.

Taobao Taiwan is owned and operated by British-registered Claddagh Venture Investment, which is 28.77% owned by Alibaba. Nonetheless, the investment regulator ruled that Alibaba has de facto control over Taobao Taiwan, including “veto power” over Claddagh’s board decisions.

The app is currently the most downloaded shopping app in the Taiwanese Google Play store, according to app tracking firm App Annie. Unexpectedly, the Chinese edition of Taobao comes in sixth in the iOS shopping category, which Shopee tops.

Taobao Taiwan is separate from Alibaba’s main marketplaces, which at last count boasted 874 million monthly mobile users. Most of Alibaba’s shoppers are in mainland China, though customers in Hong Kong and Taiwan have long been able to shop on the Chinese Taobao app and have the goods imported to them with extra fees.

Taobao Taiwan, on the other hand, was established to attract local vendors in a market of around 24 million people, competing with popular alternatives like Singapore-headquartered Shopee and the indigenous PChome 24.

This isn’t the first time Taobao has been hit by local law. In 2015, the authority ordered Taobao Taiwan, at the time set up by a Hong Kong entity of Alibaba, to leave because of its Chinese association. Even Shopee wasn’t exempt and came under investigation in 2017 because Tencent owned around 40% of its parent company Sea.

“We respect the decision by Claddagh,” an Alibaba representative said in a statement to TechCrunch. “Alibaba businesses are operating as normal in the Taiwan market, and we will continue to serve local consumers with quality products through our Taobao app.”

It’s unclear how Claddagh came to decide on its retreat rather than restructuring the joint venture. The firm has not responded to TechCrunch’s request for comment.

Trump’s latest immigration restrictions are bad news for American workers

I’m an immigrant, and since arriving from India two decades ago I’ve earned a Ph.D., launched two companies, created almost 100 jobs, sold a business to Google and generated a 10x-plus return for my investors.

I’m grateful to have had the chance to live the American dream, becoming a proud American citizen and creating prosperity for others along the way. But here’s the rub: I’m exactly the kind of person that President Trump’s added immigration restrictions, which require U.S. companies to offer jobs to U.S. citizens first and narrow the list of qualifications that make one eligible for the H-1B visa, are designed to keep out of the country.

In tightening the qualifications for H-1B admittances, along with the L visas used by multinationals and the J visas used by some students, the Trump administration is closing the door to economic growth. Study after study shows that the H-1B skilled-worker program creates jobs and drives up earnings for American college grads. In fact, economists say that if we increased H-1B admittances, instead of suspending them, we’d create 1.3 million new jobs and boost GDP by $158 billion by 2045.

Barring people like me will create short-term chaos for tech companies already struggling to hire the people they need. That will slow growth, stifle innovation and reduce job creation. But the lasting impact could be even worse. By making America less welcoming, President Trump’s order will take a toll on American businesses’ ability to attract and retain the world’s brightest young people.

Consider my story. I came to the United States after earning a degree in electrical engineering from the Indian Institute of Technology (IIT), a technical university known as the MIT of India. The year I entered, several hundred thousand people applied for just 10,000 spots, making IIT significantly more selective than the real MIT. Four years later, I graduated and, along with many of the other top performers in my cohort, decided to continue my studies in America.

Back then, it was simply a given that bright young Indians would travel to America to continue their education and seek their fortune. Many of us saw the United States as the pinnacle of technological innovation, and also as a true meritocracy — somewhere that gave immigrants a fair shake, rewarded hard work and let talented young people build a future for themselves.

I was accepted by 10 different colleges, and chose to do a Ph.D. at the University of Illinois because of its top-ranked computer science program. As a grad student, I developed new ways of keeping computer chips from overheating that are now used in server farms all over the world. Later, I put in a stint at McKinsey before launching my own tech startup, an app-testing platform called Appurify, which Google bought and integrated into their Cloud offerings.

I spent a couple of years at Google, but missed building things from scratch, so in 2016 I launched atSpoke, an AI-powered ticketing platform that streamlines IT and HR support. We’ve raised $28 million, hired 60 employees and helped companies including Cloudera, DraftKings and Mapbox create more efficient workplaces and manage the transition to remote working.

Stories like mine aren’t unusual. Moving to a new country takes optimism, ambition and tolerance for risk — all factors that drive many immigrants to start businesses of their own. Immigrants found businesses at twice the rate of the native born, starting about 30% of all new businesses in 2016 and more than half of the country’s billion-dollar unicorn startups. Many now-iconic American brands, including Procter & Gamble, AT&T, Google, Apple, and even Bank of America, were founded by immigrants or their children.

We take it for granted that America is the destination of choice for talented young people, especially those with vital technical skills. But nothing lasts forever. Since I arrived two decades ago, India’s tech scene has blossomed, making it far easier for kids to find opportunities without leaving the country. China, Canada, Australia and Europe are also competing for global talent by making it easier for young immigrants to bring their talent and skills, often including an American education, to join their workforces or start new businesses.

To shutter employment-based visa programs, even temporarily, is to shut out the innovation and entrepreneurialism our economy desperately needs. Worse still, though, doing so makes it harder for the world’s best and brightest young people to believe in the American dream and drives many to seek opportunities elsewhere. The true legacy of Trump’s executive order is that it will be far harder for American businesses to compete for global talent in years to come — and that will ultimately hamper job creation, slow our economy and hurt American workers.

A prison video visitation service exposed private calls between inmates and their attorneys

Fearing the spread of coronavirus, jails and prisons remain on lockdown. Visitors are unable to see their loved ones serving time, forcing friends and families to use prohibitively expensive video visitation services that often don’t work.

But now the security and privacy of these systems are under scrutiny after one St. Louis-based prison video visitation provider had a security lapse that exposed not only thousands of phone calls between inmates and their families but also calls with their attorneys that were supposed to be protected by attorney-client privilege.

HomeWAV, which serves a dozen prisons across the U.S., left a dashboard for one of its databases exposed to the internet without a password, allowing anyone to read, browse and search the call logs and transcriptions of calls between inmates and their friends and family members. The transcriptions also showed the phone number of the caller, which inmate was involved, and the duration of the call.
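
Exposures like this are usually found not with exotic tooling but with a trivial probe: does a dashboard answer without any credentials? Below is a minimal sketch in Python using the requests library; the URL is a placeholder, not HomeWAV’s, and checks like this should only ever be run against systems you are authorized to test.

```python
# Minimal sketch of an unauthenticated-access check. Illustrative only;
# the URL below is a placeholder, and authorization is required before
# probing any real system.
import requests

DASHBOARD_URL = "https://dashboard.example.com/"  # hypothetical endpoint

resp = requests.get(DASHBOARD_URL, timeout=10, allow_redirects=False)

if resp.status_code == 200:
    print("Endpoint served content with no auth — possible exposure.")
elif resp.status_code in (301, 302, 401, 403):
    print("Endpoint demands login or redirects — behaving as expected.")
else:
    print(f"Unexpected status: {resp.status_code}")
```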

Security researcher Bob Diachenko found the dashboard, which had been public since at least April, he said. TechCrunch reported the issue to HomeWAV, which shut down the system hours later.

In an email, HomeWAV chief executive John Best confirmed the security lapse.

“One of our third-party vendors has confirmed that they accidentally took down the password, which allowed access to the server,” he told TechCrunch, without naming the third party. Best said the company will inform inmates, families and attorneys of the incident.

Somil Trivedi, a senior staff attorney at the ACLU’s Criminal Law Reform Project, told TechCrunch: “What we see again and again is that the rights of incarcerated people are the first to be trampled when the system fails — as it always, invariably does.”

“Our justice system is only as good as the protections for the most vulnerable. As always, people of color, those who can’t afford lawyers, and those with disabilities will pay the highest price for this mistake. Technology cannot fix the fundamental failings of the criminal legal system — and it will exacerbate them if we’re not deliberate and cautious,” said Trivedi.

Inmates have almost no expectation of privacy, and nearly all prisons in the U.S. record the phone and video calls of their inmates — even if it’s not disclosed at the beginning of each call. Prosecutors and investigators are known to listen back to recordings in case an inmate incriminates themselves on a call.

HomeWAV, a prison video visitation tech company, exposed thousands of phone calls between inmates and their families, as well as calls with their attorneys that were supposed to be protected by attorney-client privilege. (Image: HomeWAV/YouTube)

The calls between inmates and their attorneys, however, are not supposed to be monitored because of attorney-client privilege, a rule that protects the communications between an attorney and their client from being used in court.

Despite this, there are known cases of U.S. prosecutors using recorded calls between an attorney and their incarcerated clients. Last year, prosecutors in Louisville, Ky., allegedly listened to dozens of calls between a murder suspect and his attorneys. And earlier this year, defense attorneys in Maine said they were routinely recorded by several county jails, and their calls protected under attorney-client privilege were turned over to prosecutors in at least four cases.

HomeWAV’s website says: “Unless a visitor has been previously registered as a clergy member, or a legal representative with whom the inmate is entitled to privileged communication, the visitor is advised that visits may be recorded, and can be monitored.”

But when asked, HomeWAV’s Best would not say why the company had recorded and transcribed conversations protected by attorney-client privilege.

Several of the transcriptions reviewed by TechCrunch showed attorneys clearly declaring that their calls were covered under attorney-client privilege, effectively telling anyone listening in that the call was off-limits.

TechCrunch spoke to two attorneys whose communications with their clients in prison over the past six months were recorded and transcribed by HomeWAV; both asked that we not name them or their clients, as doing so might harm their clients’ legal defense. Both expressed alarm that their calls had been recorded. One said they had verbally asserted attorney-client privilege on the call, while the other also believed their call was protected by attorney-client privilege but declined to comment further until they had spoken to their client.

Another defense attorney, Daniel Repka, confirmed to TechCrunch that one of his calls with a client in prison in September was recorded, transcribed and subsequently exposed, but said that the call was not sensitive.

“We did not relay any information that would be considered protected by attorney-client privilege,” said Repka. “Anytime I have a client who calls me from a jail, I’m very conscious and aware of the possibility not only of security breaches, but also the potential ability to access these phone calls by the county attorney’s office,” he said.

Repka described attorney-client privilege as “sacred” for attorneys and their clients. “It’s really the only way that we’re able to ensure that attorneys are able to represent their clients in the most effective and zealous way possible,” he said.

“The best practice for attorneys is always, always, always to go visit your client at the jail in person where you’re in a room, and you have far more privacy than over a telephone line that you know has been designated as a recording device,” he said.

But the challenges brought by the pandemic have made in-person visits difficult, or impossible in some states. The Marshall Project, a non-partisan organization focusing on criminal justice in the U.S., said several states have suspended in-person visitation because of the threat posed by coronavirus, including legal visits.

Even prior to the pandemic, some prisons ended in-person visitation in favor of video calls.

Video visitation technology is now a billion-dollar industry, with companies like Securus making millions each year by charging callers often exorbitant fees to call their incarcerated loved ones.

HomeWAV isn’t the only video visitation service to have faced security issues.

In 2015, an apparent breach at Securus resulted in the leak of some 70 million inmate phone call records by an anonymous hacker, who shared them with The Intercept. Many of the recordings in the cache also contained calls designated as protected by attorney-client privilege, the publication reported.

In August, Diachenko reported a similar security lapse at TelMate, another prison visitation provider, which saw millions of inmate messages exposed because of a passwordless database.

