The FCC rejects ZTE’s petition to stop designating it a “national security threat”

The Federal Communications Commission has rejected ZTE’s petition to remove its designation as a “national security threat.” This means that American companies will continue to be barred from using the FCC’s $8.3 billion Universal Service Fund to buy equipment and services from ZTE.

The Universal Service Fund includes subsidies to build telecommunication infrastructure across the United States, especially for low-income or high-cost areas, rural telehealth services, and schools and libraries. The FCC issued an order on June 30 banning U.S. companies from using the fund to buy technology from Huawei and ZTE, claiming that both companies have close ties with the Chinese Communist Party and military.

Many smaller carriers rely on Huawei and ZTE, two of the world’s biggest telecom equipment providers, for cost-efficient technology. After surveying carriers, the FCC estimated in September that replacing Huawei and ZTE equipment would cost more than $1.8 billion.

Under the Secure and Trusted Communications Networks Act, passed by Congress this year, most of that amount would be eligible for reimbursements under a program referred to as “rip and replace.” But the program has not been funded by Congress yet, despite bipartisan support.

In today’s announcement about ZTE, Chairman Ajit Pai also said the FCC will vote on rules to implement the reimbursement program at its next Open Meeting, scheduled to take place on December 10.

The FCC passed its order barring companies deemed national security threats from receiving money from the Universal Service Fund in November 2019. Huawei fought back by suing the FCC over the ban, claiming it exceeded the agency’s authority and violated the Constitution.

TechCrunch has contacted ZTE for comment.

House Reps ask FCC to ‘stop work on all partisan, controversial items’ during transition

Two U.S. Representatives who oversee the FCC have asked the agency to respect the results of the election by abandoning any “partisan, controversial items under consideration.” This likely includes the FCC’s effort to reinterpret Section 230, an important protection for internet platforms, at the Trump administration’s request.

In the letter sent to FCC Chairman Ajit Pai today (a similar letter was sent to FTC Chairman Joseph Simons), Representative Frank Pallone (D-NJ) writes:

With the results of the 2020 presidential election now apparent, leadership of the FCC will undoubtedly be changing. As a traditional part of the peaceful transfer of power — and as part of our oversight responsibilities — we strongly urge the agency to only pursue consensus and administrative matters for the remainder of your tenure.

And his colleague on the House Commerce Committee, Mike Doyle (D-PA), adds:

We note that you have previously welcomed calls from congressional leaders for the FCC to “halt further action on controversial items during the transition period.” We hope you will respect this time-honored tradition now.

For a receipt, the letter references Pai’s own call for precisely this almost exactly four years ago, when he in turn referred to the same practice occurring eight years earlier under the previous chairman. “I hope Chairman Wheeler follows his [2008 Chairman Kevin Martin’s] example and honors the wishes of our congressional leaders, including by withdrawing the four major items on the November meeting agenda,” Pai wrote in 2016 (PDF).

The matters scheduled for consideration during the upcoming November FCC meeting are not particularly partisan, though they could be considered major — there is, for instance, a new rule being looked at that would simplify satellite licenses.

The letter, then, almost certainly refers to the “absurd” announcement that the agency would revisit Section 230 only weeks before an election and seemingly at the express request of Trump himself (the FCC is an independent agency and cannot be forced to consider any rule changes).

This is most certainly a partisan matter, as there are not only dueling bills attempting to reform the law, which limits the liability of internet platforms for the content posted on them, but Trump has loudly and publicly blamed Section 230 for what he perceives as censorship of certain viewpoints on those platforms.

Even if the FCC had dropped everything and started working full time on its proposed review of Section 230, it could not have issued even a draft of any new rules or changes before the election, making the announcement seem nakedly political: an embrace of the Executive’s displeasure with the way the FCC currently interprets the law. Even in the best-case scenario, with unanimous support, such a rulemaking would take many months to accomplish.

I’ve asked the Chairman’s office if he intends to do as the letter asks, and will update this post if I hear back. Of course, to do so would tacitly acknowledge the victory of President-Elect Joe Biden over Trump in last week’s election, which few Republican leaders in the government seem willing to do.

While Pai considers, his colleagues, Commissioners Jessica Rosenworcel and Geoffrey Starks, have issued statements saying they are eager to comply with the request from Congress.

“Historically, the FCC has honored the transfer of power from one Administration to the next by pausing any controversial activity. I urge FCC Chairman Ajit Pai to follow this past practice in order to ensure an orderly transition of agency affairs,” said Rosenworcel in her statement.

“As two of my Republican colleagues observed in 2016, it is long-standing Commission practice that, upon a presidential transition, the agency suspends its consideration of any partisan, controversial items until the transition period is complete. Our congressional leaders have called for Chairman Pai to respect this precedent, and I expect that he will abide by their request,” said Starks.

If the FCC accedes to the request, this and other items will be held until the new administration announces its plan for the FCC. Traditionally the previous Chairman resigns when a new administration is incoming, as Tom Wheeler did in late 2016, and a new leader is announced and confirmed the next year.

Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance Facebook, certainly a social media company, received a record $5 billion fine last year for failing to comply with rules set by the FTC. But that was not because the company violated any social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

Image Credits: Andrew Harrer/Bloomberg

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that, when violated, result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis by the time a complaint is formalized. Equifax’s historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments like this one, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

Image: Bryce Durbin/TechCrunch

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know the latter will take forever, giving them more opportunity to get their fingers in the pie before it’s baked. National rules, in addition to coming far too late, are also much more likely to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident in the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules — or rather an augmentation of existing rules dating to 1995 — that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout needed to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or inborn agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulation is really too complex a topic to address here in the detail it deserves, and it also reaches beyond the question of who exactly regulates social media. Europe’s approach to that question, one of speaking slowly and carrying a big stick, if you will, promises to produce results on a grand scale, but for the purposes of this article Europe cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

Image Credit: Bryce Durbin/TechCrunch

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped is well beyond the scope of this article.)

Even an agency like the FAA lags behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies involving, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we have only the faintest idea of — will remain in a blissful limbo, picking and choosing which rules to abide by and which to fulminate and lobby against. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

With ‘absurd’ timing, FCC announces intention to revisit Section 230

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue are the legal protections for platforms when they decide what content to allow and what to block. Some say the platforms are clearly protected by the First Amendment (this is how the law is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means a cavalcade of comments may once again show the FCC ignoring public opinion, experts, and lawmakers alike as it invents or eliminates its roles as it sees fit. Be ready to share your feedback with the FCC, but there’s no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point the urgency of this effort to reinterpret the law to the White House’s liking may change considerably.

FCC dings company for $164K after its false broadband claims distorted national report

The FCC was deeply embarrassed last year when it was found that its rosy broadband deployment report was off by millions, owing to a single, extremely suspect filing that conjured 62 million customers out of thin air. The company responsible is being assessed a $163,912 fine, but the underlying problems that let it happen haven’t been fixed — as Commissioner Jessica Rosenworcel puts it: “What a mess.”

The issue, as critics and the Commission itself have pointed out for years, is that the broadband industry reports its own numbers on what amounts to the honor system. There’s no other explanation for how BarrierFree, a provider hardly anyone outside a single county had heard of, could — after failing to file its paperwork properly 26 times — suddenly claim to serve a quarter of the country, or for how that information made it all the way into a national report and a public statement by the Chairman.

“This should have set off alarm bells at the FCC,” said Rosenworcel in a statement accompanying the agency’s announcement of the fine. “In fact, agency staff reached out to the company nearly a dozen times over multiple years, including after this suspect data was filed. Despite these efforts behind the scenes, on February 19, 2019, the FCC used the erroneous data filed by BarrierFree in a press release, claiming great progress in closing the nation’s digital divide. When an outside party pointed out this was based on fraudulent information, the FCC was forced to revise its claim.”

The outside party in question, Free Press, has consistently provided a valuable counterpoint to the FCC’s narrative on various efforts, from this report to long-running issues like net neutrality.

Since (and really, before) then, there have been calls to revise the way the FCC measures and reports the progress of broadband deployment across the country. Considering this is one of the agency’s primary responsibilities, it’s worth updating a process that has proven over and over to be error-ridden and inaccurate.

There are better tools in the works, but they didn’t arrive fast enough to save the credibility of the 2020 broadband deployment report. In fact, the FCC should have a formal proposal for how to improve its information gathering process by the end of the month.

Meanwhile, BarrierFree gets away with what Rosenworcel and her fellow Commissioner Geoffrey Starks characterize as little more than a slap on the wrist, even though it’s the maximum penalty allowed by law.

“That limitation means that the forfeiture proposed here cannot be, in my opinion, severe enough to adequately address the harm BarrierFree caused and deter future violations,” said Commissioner Geoffrey Starks in a statement.

“This hardly feels like the vigorous enforcement our data-gathering efforts need,” said Rosenworcel. “At a minimum, we should have admonished the carrier before us to send a clear message that failing to file essential data with the agency and filing false data both result in penalty.”

“I worry about the signals this enforcement action sends today. Giving a carrier a pass for failing to file information with the FCC 26 times is not a vigorous response to the deficiencies that plague our broadband data.”

FCC invites public comment on Trump’s attempt to nerf Section 230

FCC Chairman Ajit Pai has decided to ask the public for its thoughts on an attempt, initiated by Trump in May, to water down certain protections that arguably led to the creation of the modern internet economy. The nakedly retaliatory order seems to be, legally speaking, laughable, and could be resolved without public input — but the FCC wants your opinion, so you may as well give it to them.

You can submit your comment here at the FCC’s long-suffering electronic comment filing system, but before you do so, perhaps acquaint yourself with a few facts.

Section 230 essentially prevents companies like Facebook and Google from being liable for content they merely host, as long as they work to take down illegal content quickly. Some feel these protections have given the companies the opportunity to manipulate speech on their platforms — Trump felt targeted by a fact-check warning placed by Twitter on his unsupported claims of fraud in mail-in voting.

To understand the order itself and see commentary from the companies that would be affected, as well as Senator Ron Wyden (D-OR), who co-authored the law in the first place, read our story from the day Trump signed the order. (Wyden called it “plainly illegal.”)

For a bipartisan legislative approach that actually addresses shortcomings in Section 230, check out the PACT Act announced in June. (Sen. Brian Schatz (D-HI) says they’re approaching the law “with a scalpel rather than a jackhammer.”)

More relevant to the FCC’s proceedings, however, are the comments of sitting commissioner Geoffrey Starks, who questioned the order’s legality and ethics, likening it to a personal vendetta intended to intimidate certain companies. As he explained:

The broader debate about Section 230 long predates President Trump’s conflict with Twitter in particular, and there are so many smart people who believe the law here should be updated. But ultimately that debate belongs to Congress. That the president may find it more expedient to influence a five-member commission than a 538-member Congress is not a sufficient reason, much less a good one, to circumvent the constitutional function of our democratically elected representatives.

Incidentally, Starks may be who Pai is referring to in a memo announcing the commentary period. “I strongly disagree with those who demand that we ignore the law and deny the public and all stakeholders the opportunity to weigh in on this important issue. We should welcome vigorous debate—not foreclose it,” Pai wrote.

This may be a reference to Commissioner Starks’s suggestion that the FCC address the order quickly and authoritatively: “If, as I suspect it ultimately will, the petition fails at a legal question of authority, I think we should say it loud and clear, and close the book on this unfortunate detour,” he said. After all, public opinion doesn’t count for much if the order has no legal effect to begin with, in which case the FCC wouldn’t even have to consider how it might revisit Section 230.

Whatever the case, the proposal is ready for you to comment on it. To do so, visit this page and click, in the box on the left, “+New Filing” or “+Express” — the first is if you would like to submit a document or evidence in support of your opinion, and the second is if you just want to explain your position in plain text. Remember, this information will be filed publicly, so anything you put in those fields — name, address and everything — will be visible online.

To be clear, you’re commenting on the NTIA petition that the FCC draw up new rules regarding Section 230, which the executive order compelled that agency to send — not the executive order itself.

As with the net neutrality debacle, the FCC does not have to take your opinion into account, or reality for that matter. The comment period lasts 45 days, after which the item will likely go to internal deliberations at the Commission.

Dish closes Boost Mobile purchase, following T-Mobile/Sprint merger

T-Mobile today announced that it has closed a deal that divests Sprint’s pre-paid businesses, including Boost and Virgin Mobile. The news finds Dish entering the wireless carrier game in earnest, courtesy of the $1.4 billion deal.

The whole thing was, of course, a key part of T-Mobile’s bid to merge with Sprint. It was a relatively small concession to those worried that such a deal would decrease competitiveness in the market, as the number of major U.S. carriers shrank from four down to three. The $26 billion T-Mobile/Sprint deal was finally completed in April of this year, and has already resulted in hundreds of lost jobs, as TechCrunch reported last month.

The deal gives Dish a nice head start in the pre-paid phone game, with north of nine million customers and access to T-Mobile’s wireless network for the next seven years. It also finds Dish’s current COO, John Swieringa, stepping in to lead the new subsidiary. Oh, and there’s a new Boost logo, too, seen below.

[Image: the new Boost Mobile logo. Credit: Dish]

See? It’s basically the old Boost Mobile logo, but with the little Dish wireless symbols in the middle, to really show you who’s boss. Dish used the opportunity to announce a new plan for Boost users with 15GB of data for $45, and has already begun switching consumers with compatible devices over to the new T-Mobile-backed network.

FCC Commissioner disparages Trump’s social media order: ‘The decision is ours alone’

FCC Commissioner Geoffrey Starks has examined the President’s Executive Order that attempts to spur the FCC into action against social media companies and found it wanting. “There are good reasons for the FCC to stay out of this debate,” he said. “The decision is ours alone.”

The Order targets Section 230 of the Communications Decency Act, which ensures that platforms like Facebook and YouTube aren’t liable for illegal content posted to them, as long as they make efforts to take it down in accordance with the law.

Some in government feel these protections go too far and have led to social media companies suppressing free speech. Trump himself clearly felt suppressed when Twitter placed a fact-check warning on unsupported claims of fraud in mail-in voting, leading directly to the Order.

Starks gave his take on the topic in an interview with the Information Technology and Innovation Foundation, a left-leaning think tank that pursues tech-related issues. While he is just one of five commissioners and the FCC has yet to consider the order in any official sense, his words have weight, as they indicate serious legal and procedural objections to it.

“The Executive Order definitely gets one thing right, and that is that the President cannot instruct the FCC to do this or anything else,” he said. “We’re an independent agency.”

He was careful to make clear that he doesn’t think the law is perfect — just that this method of changing it is completely unjustified.

“The broader debate about section 230 long predates President Trump’s conflict with Twitter in particular, and there are so many smart people who believe the law here should be updated,” he explained. “But ultimately that debate belongs to Congress. That the president may find it more expedient to influence a 5-member commission than a 538-member Congress is not a sufficient reason, much less a good one, to circumvent the constitutional function of our democratically elected representatives.”

The Justice Department has entered the picture as well, offering its own recommendations for changing Section 230 today — though like the White House, Justice has no power to directly change or invent responsibilities for the FCC.

Fellow Commissioner Jessica Rosenworcel echoed his concerns, paraphrasing an earlier statement on the order: “Social media can be frustrating, but turning the FCC into the President’s speech police is not the answer.”

After detailing some of the legal limitations of the FCC, Section 230, and the difficulty and needlessness of narrowly defining “good faith” actions, Starks concluded that the order simply doesn’t make a lot of sense in their context.

“The first amendment allows social media companies to censor content freely in ways the government never could, and it prohibits the government from retaliating against them for that speech,” he said. “So much — so much — of what the president proposes here seems inconsistent with those core principles, making an FCC rulemaking even less desirable.”

“The worst case scenario, the one that burdens the proper functioning of our democracy, would be to allow the laxity here to bestow some type of credibility on the Executive Order, one that threatens certainly a new regulatory regime upon internet service providers with no credible legal support,” he continued.

Having said that, he acknowledged that the order does mean that some action should take place at the FCC — it may just not be the kind of resolution Trump wishes.

“I’m calling to press [the National Telecommunications and Information Administration] to send the petition as quickly as possible. I see no reason why they should need more than 30 days from the Executive Order’s issuance itself so we can get on with it, have the FCC review it and vote,” he said. “And if, as I suspect it ultimately will, the petition fails at a legal question of authority, I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid having an upcoming election season use a pending proceeding to, in my estimation, intimidate private parties.”

A lot of this is left to Chairman Ajit Pai, who has fairly consistently fallen in line with the administration’s wishes. And if the eagerness of Commissioner Carr is any indicator, the Republican members of the Commission are happy to respond to the President’s “call for guidance.”

So far there has been no official announcement of FCC business relating to the Executive Order, but if the NTIA moves quickly we could hear about it as early as next month’s open meeting.

Republican senators ask FCC to examine Section 230, following Trump order

On May 29, the president of the United States of America tweeted, simply, “REVOKE 230!” The all-caps message, complete with exclamation mark, was nothing if not direct. It followed the issuance of an executive order that, among other things, seeks to strip away key protections under Section 230 of the Communications Decency Act.

Today, four Republican senators sent an open letter to the FCC, urging chairman Ajit Pai to examine the “special status” afforded to social media sites under the statute. The letter, authored by Marco Rubio, Kelly Loeffler, Kevin Cramer and Josh Hawley, reads, in part:

Social media companies have become involved in a range of editorial and promotional activity; like publishers, they monetize, edit, and otherwise editorialize user content. It is time to take a fresh look at Section 230 and to interpret the vague standard of “good faith” with specific guidelines and direction. In addition, it appears that courts have granted companies immunity for editing and altering content even though the text of Section 230 prohibits immunity for any content that the company “in part … develop[s].” These interpretations also deserve a fresh look. We therefore request that the FCC clearly define the framework under which technology firms, including social media companies, receive protections under Section 230.

The letter adds that, unlike Trump, who currently has around 82 million followers, “everyday Americans” are “sidelined, silenced, or otherwise censored by these corporations.” Trump himself has had a longstanding problem with the rule, which he and fellow Republicans have accused of enabling the censorship of conservative free speech. While he’s long been rumored to be interested in killing the legislation, Twitter’s decision to issue a warning label on a Trump tweet appears to have been the final straw.

Pai has previously shown little interest in regulating social media sites in that manner. Speaking to Reuters, the chairman declined to comment one way or the other, stating that he didn’t want to “prejudge a petition that I haven’t seen.”

Earlier today, the statute’s author, Senator Ron Wyden, penned an op-ed defending 230, writing:

Just look at Black Lives Matter and the protests against police violence over the past week as an example. The cellphone video that captured the officer kneeling on George Floyd’s neck spread across social media platforms — and it’s the reason Americans learned about his unjust killing in the first place. So many of these cases of unconscionable use of force against black Americans have come to light as a result of videos posted to social media.

Robocallers face $225M fine from FCC and lawsuits from multiple states

Two men embodying the zenith of human villainy have admitted to making approximately a billion robocalls in the first few months of 2019 alone, and now face an FCC fine of $225 million and a lawsuit from multiple attorneys general that could amount to as much or more — not that they’ll actually end up paying that.

John Spiller and Jakob Mears, Texans of ill repute, are accused of (and have confessed to) forming a pair of companies to make millions of robocalls a day with the aim of selling health insurance from their shady clients.

The operation not only ignored the national Do Not Call registry, but targeted it specifically, as it was “more profitable to target these consumers.” Numbers were spoofed, making further mischief as angry people called back to find bewildered strangers on the other end of the line.

These calls amounted to billions over two years, and were eventually exposed by the FCC, the offices of several attorneys general and industry anti-fraud associations.

Now the pair have been slapped with a $225 million proposed fine, the largest in the FCC’s history. The lawsuit involves multiple states and varying statutory damages per offense, and even a conservative estimate of the amounts could exceed that number.

Unfortunately, as we’ve seen before, the fines seem to have little correlation with the amounts actually paid. The FCC and FTC do not have the authority to enforce the collection of these fines, leaving that to the Department of Justice. And even should the DoJ attempt to collect the money, they can’t get more than the defendants have.

For instance, last year the FTC fined one robocaller $5 million, but he ended up paying $18,332 and the market price of his Mercedes. Unsurprisingly, individuals committing white-collar crimes like these are no strangers to methods of avoiding punishment. Disposing of cash assets before the feds come knocking on your door is just part of the game.

In this case the situation is potentially even more dire: the DoJ isn’t even involved. As FCC Commissioner Jessica Rosenworcel put it in a statement accompanying the agency’s announcement:

There’s something missing in this all-hands effort. That’s the Department of Justice. They aren’t a part of taking on this fraud. Why not? What signals does their refusal to be involved send?

Here’s the signal I see. Over the last several years the FCC has levied hundreds of millions in fines against robocallers just like the folks we have here today. But so far collections on these eye-popping fines have netted next to nothing. In fact, it was last year that The Wall Street Journal did the math and found that we had collected no more than $6,790 on hundreds of millions in fines. Why? Well, one reason is that the FCC looks to the Department of Justice to collect on the agency’s fines against robocallers. We need them to help. So when they don’t get involved—as here—that’s not a good sign.

While the FCC’s fine and the lawsuit will certainly put these robocallers out of business and place further barriers to their conducting more scam operations, they’re not really going to be liable for nine figures, because they’re not billionaires.

It’s good that the fines are large enough to bankrupt operations like these, but as Rosenworcel put it back in 2018 when another enormous fine was levied against a robocaller, “it’s like emptying the ocean with a teaspoon.” While the FCC and states were going after a pair of ne’er-do-wells, a dozen more have likely popped up to fill the space.

Industry-wide measures to curb robocalls have been underway for years now, but only recently have been mandated by the FCC after repeated warnings and delays. Expect the new anti-fraud frameworks to take effect over the next year.