PACT Act takes on internet platform content rules with ‘a scalpel rather than a jackhammer’

The PACT Act is a new bipartisan effort to reform Section 230, the crucial liability shield that enables internet platforms to exist, approaching the law’s shortcomings “with a scalpel rather than a jackhammer,” as Senator Brian Schatz (D-HI) describes it. It is a welcome alternative to the dangerous EARN IT Act and risible executive order also in the running.

Section 230 protects online companies from being held liable for content posted by their users, as long as those companies remove illegal content when it is pointed out to them. Politicians have recently characterized it as an excuse for companies like Facebook and Twitter to control speech on their platforms and avoid responsibility for shoddy or arbitrary moderation policies.

But the two most high-profile attempts to change this law, which arguably made the modern internet possible, are riddled with problems. The EARN IT Act is widely understood to be an end run around encryption by an impotent and furious Justice Department. President Trump’s recent executive order, in addition to plainly being retaliation against Twitter for fact-checking his tweets, doesn’t actually appear to do much of anything.

Yet there is growing consensus that Section 230, while it has fulfilled its purpose admirably for two decades, needs to be adjusted to accommodate a changed digital environment. To that end, Sen. Schatz and his colleague Sen. John Thune (R-SD), leaders of the Subcommittee on Communications, Technology, Innovation and the Internet, are proposing a reasonable alternative.

“The best thing we can do for the internet, and for the law that enabled the internet to happen, is to modify this law so that it works for another 20 years instead of pretending that it’s perfect just the way it is,” Sen. Schatz said in a call with press.

Their Platform Accountability and Consumer Transparency Act focuses more on exposing the moderation process than on changing it. Under the proposed law, companies using Section 230 would have to:

  • Publicly document their moderation practices and issue a standardized quarterly report on actions they’ve taken and the complaints that prompted them.
  • Make and report moderation decisions within 14 days of user reports, and allow appeals.
  • Remove “court-determined illegal content and activity” within 24 hours, with some flexibility allowed for smaller platforms.

The act would also limit the scope of Section 230 in protecting companies when they are facing action from federal regulators and state attorneys general, or when they are provably aware of the illegal nature of the content.

It would not affect or involve changes to encryption, which is another tool companies have to distance themselves from illegal content: if they can’t read the data, they can’t tell whether it’s illegal. But attempts to weaken encryption or reduce its use have been met with polite but firm rejection from the tech industry — it’s clear that we have been traveling down a one-way street in that regard.

“This is not designed to attract people who want to bully tech companies into political submission,” said Sen. Schatz. “It’s designed to improve federal law.”

“Here’s why we think this bill is significant,” he continued. “First, because we believe it is the most serious effort to retain what works in 230, and try to fix what is broken about 230. Second, you have the chair and ranking member of the subcommittee introducing the bill, which is not a trivial matter. And third, because we do think there is an appetite to legislate here. Though the volume gets turned up when someone wants to beat up on the platforms via cable TV or Twitter, the serious work of the Commerce Committee has always been bipartisan.”

You can read the full text of the bill here. We’ll soon hear whether the senators’ effort bears any fruit.

Affirming the position of tech advocates, Supreme Court overturns Trump’s termination of DACA

The U.S. Supreme Court ruled today that President Donald Trump’s administration unlawfully ended the federal policy providing temporary legal status for immigrants who came to the country as children.

The decision, issued Thursday, called the termination of the Obama-era policy known as Deferred Action for Childhood Arrivals (DACA) “arbitrary and capricious.” As a result of the ruling, nearly 640,000 people living in the United States are now temporarily protected from deportation.

While a blow to the Trump Administration, the ruling is sure to be hailed nearly unanimously by the tech industry and its leaders, who had come out strongly in favor of the policy in the days leading up to its termination by the current President and his advisors.

At the beginning of 2018, many of tech’s most prominent executives, including the CEOs of Apple, Facebook, Amazon and Google, joined more than 100 American business leaders in signing an open letter asking Congress to take action on DACA before the program expired in March.

Tim Cook, Mark Zuckerberg, Jeff Bezos and Sundar Pichai made a full-throated defense of the policy and pleaded with Congress to pass legislation ensuring that Dreamers, or undocumented immigrants who arrived in the United States as children and were granted approval by the program, can continue to live and work in the country without risk of deportation.

At the time, those executives said the decision to end the program could potentially cost the U.S. economy as much as $215 billion.

In a 2017 tweet, Tim Cook noted that roughly 250 of Apple’s employees were “Dreamers.”

The list of tech executives who came out in support of the DACA initiative is long. It included IBM CEO Ginni Rometty; Brad Smith, the president and chief legal officer of Microsoft; Hewlett-Packard Enterprise CEO Meg Whitman; and CEOs or other leading executives of AT&T, Dropbox, Upwork, Cisco Systems, Salesforce.com, LinkedIn, Intel, Warby Parker, Uber, Airbnb, Slack, Box, Twitter, PayPal, Code.org, Lyft, Etsy, AdRoll, eBay, StitchCrew, SurveyMonkey, DoorDash and Verizon (the parent company of Verizon Media Group, which owns TechCrunch).

At the heart of the court’s ruling is the majority view that Department of Homeland Security officials didn’t provide a strong enough reason to terminate the program in September 2017. Now, the issue of immigration status gets punted back to the White House and Congress to address.

As the Boston Globe noted in a recent article, the majority decision written by Chief Justice John Roberts did not determine whether the Obama-era policy or its revocation was correct, only that the DHS didn’t make a strong enough case to end the policy.

“We address only whether the agency complied with the procedural requirement that it provide a reasoned explanation for its action,” Roberts wrote. 

While the ruling from the Supreme Court is some good news for the population of Dreamers, the question of their citizenship status in the country is far from settled. And the U.S. government’s response to the COVID-19 pandemic has basically consisted of freezing as much of the nation’s immigration apparatus as possible.

An executive order in late April froze the green card process for would-be immigrants, and the administration was rumored to be considering a ban on temporary workers under H-1B visas as well.

The President has, indeed, ramped up the crackdown with strict border control policies and other measures to curb both legal and illegal immigration. 

More than 800,000 people joined the workforce as a result of the 2012 program crafted by the Obama administration. DACA allows anyone under 30 to apply for protection from deportation or legal action on their immigration cases if they were younger than 16 when they were brought to the U.S., had not committed a crime, and were either working or in school.

In response to the Supreme Court decision, the President tweeted “Do you get the impression that the Supreme Court doesn’t like me?”

Amazon won’t say if its facial recognition moratorium applies to the feds

In a surprise blog post, Amazon said it will put the brakes on providing its facial recognition technology to police for one year, but refuses to say if the move applies to federal law enforcement agencies.

The moratorium comes two days after IBM said in a letter it was leaving the facial recognition market altogether. Arvind Krishna, IBM’s chief executive, cited a “pursuit of justice and racial equity” in light of the recent protests sparked by the killing of George Floyd by a white police officer in Minneapolis last month.

Amazon’s statement — just 102 words in length — did not say why it was putting the moratorium in place, but noted that Congress “appears ready” to work on stronger regulations governing the use of facial recognition — again without providing any details. It’s likely in response to the Justice in Policing Act, a bill that would, if passed, restrict how police can use facial recognition technology.

“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” said Amazon in the unbylined blog post.

But the statement did not say if the moratorium would apply to the federal government, the source of most of the criticism against Amazon’s facial recognition technology. Amazon also did not say in the statement what action it would take after the yearlong moratorium expires.

Amazon is known to have pitched its facial recognition technology, Rekognition, to federal agencies, like Immigration and Customs Enforcement. Last year, Amazon’s cloud chief Andy Jassy said in an interview the company would provide Rekognition to “any” government department.

Amazon spokesperson Kristin Brown declined to comment further or say if the moratorium applies to federal law enforcement.

There are dozens of companies providing facial recognition technology to police, but Amazon is by far the biggest. Amazon has come under the most scrutiny after its Rekognition face-scanning technology showed bias against people of color.

In 2018, the ACLU found that Rekognition falsely matched 28 members of Congress as criminals in a mugshot database. Amazon criticized the results, claiming the ACLU had lowered the facial recognition system’s confidence threshold. But a year later, the ACLU of Massachusetts found that Rekognition had falsely matched 27 New England professional athletes against a mugshot database. Both tests disproportionately mismatched Black people, the ACLU found.
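
A confidence (or similarity) threshold is the dial at the center of that dispute: a face-matching system returns candidate matches with scores, and the threshold decides which of them count as a match at all. The sketch below is a purely hypothetical Python illustration of that mechanic; the names, scores and function are invented for this example and are not Rekognition’s actual API or output. It shows why the same candidate list can yield one strong match at a strict threshold and several shaky ones at a looser setting.

    # Hypothetical illustration only; not Amazon Rekognition's API or real scores.
    candidate_matches = [
        {"name": "Candidate A", "confidence": 99.2},
        {"name": "Candidate B", "confidence": 92.7},
        {"name": "Candidate C", "confidence": 86.4},
        {"name": "Candidate D", "confidence": 81.0},
    ]

    def matches_above(threshold, candidates):
        """Keep only the candidates whose similarity score clears the threshold."""
        return [c for c in candidates if c["confidence"] >= threshold]

    # A strict threshold returns one confident match; a loose one sweeps in
    # several weaker candidates, which is where false matches creep in.
    print(len(matches_above(99.0, candidate_matches)))  # 1
    print(len(matches_above(80.0, candidate_matches)))  # 4

That is why the dispute over which threshold was used matters: the same system can look accurate or error-prone depending on where that dial is set.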

Almost exactly a year ago, investors brought a proposal to Amazon’s annual shareholder meeting that would have banned Amazon from selling its facial recognition technology to the government or law enforcement. Amazon defeated the vote by a wide margin.

The ACLU acknowledged Amazon’s move to pause sales of Rekognition, which it called a “threat to our civil rights and liberties,” but called on the company and other firms to do more.

With pandemic-era acquisitions, big tech is back in the antitrust crosshairs

With many major sectors totally frozen and reeling from losses, tech’s biggest players are proving themselves to be the exception to the rule yet again. On Friday, Facebook confirmed its plans to buy Giphy, a popular gif search engine, in a deal believed to be worth $400 million.

Facebook has indicated it wants to forge new developer and content relationships for Giphy, but what the world’s largest social network really wants with the popular gif platform might be more than meets the eye. As Bloomberg and other outlets have suggested, it’s possible that Facebook really wants the company as a lens into how users engage with its competitors’ social platforms. Giphy’s gif search tools are currently integrated into a number of messaging platforms, including TikTok, Twitter and Apple’s iMessage.

In 2018, Facebook famously got into hot water over its use of a mobile app called Onavo, which gave the company a peek into mobile usage beyond Facebook’s own suite of apps—and violated Apple’s policies around data collection in the process. After that loophole closed, Facebook was so desperate for this kind of insight on the competition that it paid people—including teens—to sideload an app granting the company root access and allowing Facebook to view all of their mobile activity, as TechCrunch revealed last year.

For lawmakers and other regulatory powers, the Giphy buy could ring two separate sets of alarm bells: one for the further evidence of anti-competitive behavior stacking the deck in the tech industry and another for the deal’s potential consumer privacy implications.

“The Department of Justice or the Federal Trade Commission must investigate this proposed deal,” Minnesota Senator Amy Klobuchar said in a statement provided to TechCrunch. “Many companies, including some of Facebook’s rivals, rely on Giphy’s library of sharable content and other services, so I am very concerned about this proposed acquisition.”

In proposed legislation late last month, Sen. Elizabeth Warren (D-MA) and Rep. Alexandria Ocasio-Cortez (D-NY) called for a freeze on big mergers, warning that huge companies might view the pandemic as a chance to consolidate power by buying smaller businesses at fire sale rates.

In a statement, a spokesperson for Sen. Warren called the Facebook news “yet another example of a giant company using the pandemic to further consolidate power,” noting the company’s “history of privacy violations.”

“We need Senator Warren’s plan for a moratorium on large mergers during this crisis, and we need enforcers who will break up Big Tech,” the spokesperson said.

News of Facebook’s latest moves comes just days after a Wall Street Journal report revealed that Uber is looking at buying Grubhub, the food delivery service it competes with directly through Uber Eats.

That news also raised eyebrows among pro-regulation lawmakers who’ve been looking to break up big tech. Rep. David Cicilline (D-RI), who chairs the House’s antitrust subcommittee, called that deal “a new low in pandemic profiteering.”

“This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support,” Cicilline said in a statement on the Grubhub acquisition.

The early days of the pandemic may have taken some of the antitrust attention off of tech’s biggest companies, but as the government and the American people fall into a rhythm during the coronavirus crisis, that’s unlikely to last. On Friday, the Wall Street Journal reported that the Department of Justice and a collection of state attorneys general are in the process of filing antitrust lawsuits against Google, with the case expected to hit in the summer months.

Amazon calls on Congress to create anti-price gouging law

Amazon has received a fair amount of criticism for perceived inaction against seller price gouging during the COVID-19 pandemic. While the retail giant has made efforts to rein in the opportunistic activity (pulling some half a million offending listings), critics have pointed to the company’s slow response, along with continued problems with a number of high-demand products sold through affiliates.

Today, however, the company’s VP of Public Policy, Brian Huseman, penned an open letter to Congress, asking lawmakers to make price gouging illegal during a national crisis. The executive notes that such laws have been an effective tool in states like Tennessee, where they already exist.

“Our collaborative efforts to hold price gougers accountable have clarified one thing: to keep pace with bad actors and protect consumers, we need a strong federal anti-price gouging law,” Huseman writes. “As of now, price gouging is prohibited during times of crisis in about two-thirds of the United States. The disparate standards among states present a significant challenge for retailers working to assist law enforcement, protect consumers, and comply with the law.”

Jeff Bezos also addressed the issue in his annual shareholder letter last month, writing, “To accelerate our response to price-gouging incidents, we created a special communication channel for state attorneys general to quickly and easily escalate consumer complaints to us.”

The company has certainly made some effort to curb the practice. As it notes, nearly 4,000 seller accounts have been suspended in the U.S. store for policy violations. But a cursory search for in-demand products yields plenty of prohibitively expensive listings for once-ubiquitous household products. Home cleaning supplies in particular have seen a massive spike in pricing.

We asked Thinknum to chart the price of Clorox-branded items from late last year through now, and the graph speaks for itself.

As retail store shelves have gone bare, Amazon has become an essential lifeline for many — a fact that plenty of predatory sellers have been more than happy to capitalize on. Beyond essential supplies, prices have gone through the roof for a number of products currently in short supply, such as the Nintendo Switch, which has been squeezed by a combination of increased interest and supply chain issues.

The company rightly notes that enforcing such policies as a matter of law will be a balancing act. “Put simply, we want to avoid the $400 bottle of Purell for sale right after an emergency goes into effect, while not punishing unavoidable price increases that emergencies can cause, especially as supply chains are disrupted,” Huseman writes. “Furthermore, any prohibitions should apply to all levels of the supply chain so that retailers and resellers are not forced to bear price gouging increases by manufacturers and suppliers.”

Powerful House committee demands Jeff Bezos testify after ‘misleading’ statements

Amazon is in hot water with a powerful congressional committee interested in the company’s potentially anticompetitive business practices.

In a bipartisan letter sent Friday to Jeff Bezos, the House Judiciary Committee demanded that the Amazon CEO explain discrepancies between his own prior statements and recent reporting from the Wall Street Journal. Specifically, the letter addressed Amazon’s apparent practice of diving into its trove of data on products and third-party sellers to come up with its own Amazon-branded competing products.

As the Journal notes, Amazon “has long asserted, including to Congress, that when it makes and sells its own products, it doesn’t use information it collects from the site’s individual third-party sellers—data those sellers view as proprietary.”

In documents and interviews with many former employees, the Journal found that Amazon does indeed consult that information when making decisions about pricing, product features and the kinds of products with the most potential to make the company money.

In the letter, the House Judiciary Committee accuses Bezos of making “misleading, and possibly criminally false or perjurious” statements to the committee when asked about the practice in the past.

“It is vital to the Committee, as part of its critical work investigating and understanding competition issues in the digital market, that Amazon respond to these and other critical questions concerning competition issues in digital markets,” the committee wrote, adding that it would subpoena the tech CEO if necessary.

While the coronavirus crisis has taken some of the heat off of tech’s mounting regulatory worries in the U.S., the committee’s actions make it clear that plenty of lawmakers are still interested in taking tech companies to task, even with so many aspects of life still up in the air.

New bill threatens journalists’ ability to protect sources

Online child exploitation is a horrific crime that requires an effective response. A draft bill, first proposed by Sen. Lindsey Graham (R-SC) in January, intends to provide exactly that. However, technology experts warn that the bill not only fails to meet the challenge, it creates new problems of its own. My job is to enable journalists to do their work securely — to communicate with others, research sensitive stories and publish hard-hitting news. This bill would do significant harm to journalists’ ability to protect their sources.

Under the Eliminating Abusive and Rampant Neglect of Interactive Technologies (or EARN IT) Act, a government commission would define best practices for how technology companies should combat this type of material. On the surface, EARN IT proposes an impactful approach. A New York Times investigation in September found that “many tech companies failed to adequately police sexual abuse imagery on their platforms.” The investigation highlighted features, offered by these companies, that provide “digital hiding places for perpetrators.”

In reality, the criticized features are exactly the same ones that protect our privacy online. They help us read The Washington Post in private and ensure we only see authentic content created by the journalists. They allow us to communicate with each other. They empower us to express ourselves. And they enable us to connect with journalists so the truth can make the page. This raises the question of whether the bill will primarily protect children or primarily undermine free speech online.

It should be pointed out that EARN IT does not try to ban the use of these features. In fact, the bill does not specifically mention them at all. But if we look at how companies would apply the “best practices,” it becomes clear that the government intends to make these features difficult to provide and to discourage companies from offering them or expanding their use. By accepting EARN IT, we will give up our ability — and our children’s future abilities — to enjoy online, social, connected and private lives.

Four of the “best practices” relate to requiring companies to have the ability to “identify” child sexual abuse material. Unfortunately, it’s not possible to identify this material without also having the ability to identify any and all other types of material — like a journalist communicating with a source, an activist sharing a controversial opinion or a doctor trying to raise the alarm about the coronavirus. Nothing prevents the government from later expanding the bill to cover other illegal acts, such as violence or drugs. And what happens when foreign governments want to have a say in what is “legal” and what is not?
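
The core of that objection is architectural: a platform can only scan what it can read, and end-to-end encryption is designed so the platform can read nothing. Below is a minimal sketch of that point in Python, using the cryptography library’s Fernet recipe as a stand-in for a key held only by the users; the message, key handling and scanning function are invented for this example and do not represent any particular platform’s implementation.

    # Minimal illustration, not any specific platform's implementation.
    from cryptography.fernet import Fernet

    # Stand-in for an end-to-end key held only by the sender and recipient.
    user_key = Fernet.generate_key()
    user_cipher = Fernet(user_key)

    message = b"Source: here are the documents we discussed."
    ciphertext = user_cipher.encrypt(message)

    def platform_scan(blob: bytes) -> str:
        """A hypothetical server-side scanner: without the users' key, it only
        ever sees opaque ciphertext, whether the content is abusive or lawful."""
        return f"{len(blob)} opaque bytes; content unknown"

    print(platform_scan(ciphertext))        # the platform's view
    print(user_cipher.decrypt(ciphertext))  # only the key holders can read this

Scanning for one category of content requires the ability to read every category, and that is precisely the capability journalists and their sources rely on platforms not having.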

Our digital life is protected by the same features that allow some bad people to do bad things online. They protect us as we visit The Washington Post website, use the Signal app to contact one of its journalists or use the Tor Browser to submit information to their anonymous tip line. These features all enable privacy, a core component of the journalistic process. They enable journalists to pursue and tell the truth, without fear or favor. And not just in the U.S., but globally. We should empower and enable this work, not sabotage it by removing crucial capabilities, even in the name of child protection.

The same New York Times investigation found that law enforcement agencies devoted to fighting online child exploitation “were left understaffed and underfunded, even as they were asked to handle far larger caseloads.” The National Center for Missing and Exploited Children (NCMEC), established by Congress in 1984 to reduce child sexual exploitation and prevent child victimization, “was ill equipped for the expanding demands.” It’s worth asking, then, why EARN IT does not instead empower these agencies with additional resources to solve crimes.

We must consider the possibility that this bill fails to achieve its stated goal. That it will not protect children online, and will instead harm their digital presence and ability to speak freely. Everyone deserves good security, and it’s on us to find ways to prevent harm without compromising on our digital rights. To force companies to weaken our protections to give law enforcement greater insight would be the equivalent of forcing people to live without locks and curtains in their homes. Are we willing to go that far?

That’s not to say we have to accept no solution. But it can’t be this one.

‘Deficiencies’ that broke FCC commenting system in net neutrality fight detailed by GAO

Today marks the conclusion of a years-long saga that started when John Oliver did a segment on net neutrality so popular that it brought the FCC’s comment system to its knees. Two years later, the agency is finally close to addressing all the issues brought up in an investigation by the Government Accountability Office (GAO).

The report covers numerous cybersecurity and IT issues, some of which the FCC addressed quickly, some not so quickly and some it’s still working on.

“Today’s GAO report makes clear what we knew all along: the FCC’s system for collecting public input has problems,” Commissioner Jessica Rosenworcel told TechCrunch. “The agency needs to fully fix this mess because this is the way the FCC is supposed to take input from the public. But as this report demonstrates, we have real work to do.”

Here’s the basic timeline of events, which seem so long ago now:

Things have been pretty quiet since then, basically until today, when the report requested in 2017 was publicly released. A version with sensitive information (like exact software configurations and other technical information) was circulated internally in September, then revised for today’s release.

The final report is not much of a bombshell, since much of it has been telegraphed ahead of time. It’s a collection of criticisms of an outdated system with inadequate security and other failings that might have been directed at practically any federal agency, where cybersecurity practices are notoriously poor.

The investigation indicates that the FCC, for instance, did not consistently implement security and access controls, encrypt sensitive data, update or correctly configure its servers, detect or log cybersecurity events, and so on. It wasn’t always a disaster (even well-run IT departments don’t always follow best practices), but obviously some of these shortcomings and cut corners led to serious issues like the Electronic Comment Filing System (ECFS) being overwhelmed.

More importantly, of the 136 recommendations made in the September report, 85 have been fully implemented now, 10 partially, and the rest are on track to be so.

That should not be taken to mean that the FCC has waited this whole time to update its commenting and other systems. In fact it was making improvements almost immediately after the event in May of 2017, but refused to describe them. Here are a few of the improvements listed in the GAO report:

Representative Frank Pallone (D-NJ), who has dogged the FCC on this issue since the beginning, issued the following statement:

I requested this report because it was clear, after the net neutrality repeal comment period debacle, that the FCC’s cybersecurity practices had failed. After more than two years of investigating, GAO agrees and found a disturbing lack of security that places the Commission’s information systems at risk… Until the FCC implements all of the remaining recommendations, its systems will remain vulnerable to failure and misuse.

You can read the final GAO report here.

My experience with the CARES Act was frustrating, confusing and unfair

As a small business owner, I was excited to learn about the $2.2 trillion Coronavirus Aid, Relief, and Economic Security Act that offers low-interest loans to firms impacted by the COVID-19 pandemic. However, as I read through the details and began to apply, it became clear that this legislation — while well-intentioned — may not be enough to help many SMBs and startups.

Here’s a quick recap of my experience.

Emergency Economic Injury Grants and Economic Injury Disaster Loans

First and foremost: You need to act swiftly. Emergency Economic Injury Grant and Economic Injury Disaster Loan programs included in the CARES Act function on a first-come, first-served basis, and are funded from a limited pool of resources.

I began my company’s application process by submitting our EIDL and EEIG applications through the SBA website. This was easy, if tedious. It took about two hours to complete the necessary online forms and about two seconds to click the EEIG checkbox. Submission was seamless, but I haven’t received any further communication from the SBA since completing my application, which is a bit confusing — EEIG funds are supposed to be disbursed within 3-5 days of the submission date.

However, I know there’s been a huge volume of submissions recently and this must be exceptionally difficult to handle. I look forward to any email correspondence or updates from the SBA that might give me — and other applicants — an updated estimate of the expected disbursement timeline.

House passes COVID-19 relief package to replenish PPP loan funding

On Thursday, the House passed the newest federal stimulus package aimed at providing financial relief for businesses and institutions hit hard by the COVID-19 crisis.

The bill lingered in the Senate for two weeks of debates, with Republicans seeking to pass a less comprehensive version of the legislation and Democrats working to weave other funding into the package. In the end, the interim legislation will allocate $310 billion to replenish the SBA’s Paycheck Protection Program (PPP), $75 billion for hospitals and $25 billion for COVID-19 testing. The bill also includes additional funds for the Economic Injury Disaster Loan (EIDL) program.

The funding for tests comes with specific requirements for the Trump administration to formulate a “strategic plan” in coordination with states to expand national testing capacity — a key effort that public health experts say is necessary before states begin to reopen for business.

For small businesses around the country, many devastated by the ongoing crisis, the SBA program for forgivable loans began with hope but quickly descended into frustration. The loans are intended for small businesses to put toward payroll, and if used to retain employees, they turn into grants. Many small business owners, scrambling to find banks handling PPP loans, were shut out of the program shortly after applications went live. Others never heard back about the loans and still remain in limbo. Within days, the funds had dried up.

Large banks are accused of eschewing a first-come, first-served system, instead doling the loans out preferentially based on their potential to make money. Some relatively large businesses also used gaping loopholes in the program to soak up the free funds. In one example, the restaurant chain Ruth’s Chris Steak House secured $20 million, which the company now says it will repay.

Democrats fought to include new carve-outs that could address some of these problems, and the final bill allocates $30 billion in loans for banks and credit unions with between $10 billion and $50 billion in assets and another $30 billion for banks with less than $10 billion in assets.

The president previously expressed his approval of the bill and his intention to sign it and make the funds available as quickly as possible.