The FDA should regulate Instagram’s algorithm as a drug

The Wall Street Journal on Tuesday reported Silicon Valley’s worst-kept secret: Instagram harms teens’ mental health; in fact, its impact is so negative that it introduces suicidal thoughts.

Thirty-two percent of teen girls who feel bad about their bodies report that Instagram makes them feel worse. Of teens with suicidal thoughts, 13% of British and 6% of American users trace those thoughts to Instagram, the WSJ report said. This is Facebook’s internal data. The truth is surely worse.

President Theodore Roosevelt and Congress formed the Food and Drug Administration in 1906 precisely because Big Food and Big Pharma failed to protect the general welfare. As its executives parade at the Met Gala in celebration of the unattainable 0.01% of lifestyles and bodies that we mere mortals will never achieve, Instagram’s unwillingness to do what is right is a clarion call for regulation: The FDA must assert its codified right to regulate the algorithm powering the drug of Instagram.

The FDA should consider algorithms a drug impacting our nation’s mental health: The Federal Food, Drug and Cosmetic Act gives the FDA the right to regulate drugs, defining drugs in part as “articles (other than food) intended to affect the structure or any function of the body of man or other animals.” Instagram’s internal data shows its technology is an article that alters our brains. If this effort fails, Congress and President Joe Biden should create a mental health FDA.

The public needs to understand what Facebook and Instagram’s algorithms prioritize. Our government is equipped to study clinical trials of products that can physically harm the public. Researchers can study what Facebook privileges and the impact those decisions have on our minds. How do we know this? Because Facebook is already doing it — they’re just burying the results.

In November 2020, as Cecilia Kang and Sheera Frenkel report in “An Ugly Truth,” Facebook made an emergency change to its News Feed, putting more emphasis on “News Ecosystem Quality” scores (NEQs). High NEQ sources were trustworthy sources; low were untrustworthy. Facebook altered the algorithm to privilege high NEQ scores. As a result, for five days around the election, users saw a “nicer News Feed” with less fake news and fewer conspiracy theories. But Mark Zuckerberg reversed this change because it led to less engagement and could cause a conservative backlash. The public suffered for it.

Facebook likewise has studied what happens when the algorithm privileges content that is “good for the world” over content that is “bad for the world.” Lo and behold, engagement decreases. Facebook knows that its algorithm has a remarkable impact on the minds of the American public. How can the government let one man decide the standard based on his business imperatives, not the general welfare?
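
Facebook has never published the mechanics of these ranking changes, but the basic move of blending an engagement prediction with a source-quality score is easy to illustrate. The sketch below is a toy example only: the field names, the NEQ values and the weighting are assumptions for illustration, not Facebook’s actual system.

```python
# Toy illustration: blending a predicted-engagement score with a per-source
# quality score. All names, values and weights here are assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    source: str
    engagement_score: float  # hypothetical predicted engagement, 0-1

# Hypothetical "news ecosystem quality" (NEQ) scores per source, 0-1.
NEQ = {
    "trusted-news.example": 0.9,
    "conspiracy-blog.example": 0.1,
}

def rank_feed(posts, neq_weight=0.0):
    """Rank posts by engagement, optionally blended with source quality.

    neq_weight=0.0 is a pure engagement ranking; raising it privileges
    high-NEQ sources, roughly the lever the reported 2020 change pulled.
    """
    def score(post: Post) -> float:
        quality = NEQ.get(post.source, 0.5)  # unknown sources get a neutral score
        return (1 - neq_weight) * post.engagement_score + neq_weight * quality

    return sorted(posts, key=score, reverse=True)

feed = [Post("conspiracy-blog.example", 0.8), Post("trusted-news.example", 0.6)]
print([p.source for p in rank_feed(feed)])                  # engagement-only ranking
print([p.source for p in rank_feed(feed, neq_weight=0.7)])  # quality-weighted ranking
```

The point of the sketch is that the quality weight is a single dial, set today by the company alone; auditing how such dials are set is exactly the kind of oversight an FDA-style regulator could provide.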

Upton Sinclair memorably uncovered dangerous abuses in “The Jungle,” which led to a public outcry. The free market failed. Consumers needed protection. The 1906 Pure Food and Drug Act for the first time promulgated safety standards, regulating consumable goods impacting our physical health. Today, we need to regulate the algorithms that impact our mental health. Teen depression has risen alarmingly since 2007. Likewise, suicide among those 10 to 24 rose nearly 60% between 2007 and 2018.

It is of course impossible to prove that social media is solely responsible for this increase, but it is absurd to argue it has not contributed. Filter bubbles distort our views and make them more extreme. Bullying online is easier and constant. Regulators must audit the algorithm and question Facebook’s choices.

When it comes to the biggest issue Facebook poses — what the product does to us — regulators have struggled to articulate the problem. Section 230 is correct in its intent and application; the internet cannot function if platforms are liable for every user utterance. And a private company like Facebook loses the trust of its community if it applies arbitrary rules that target users based on their background or political beliefs. Facebook as a company has no explicit duty to uphold the First Amendment, but public perception of its fairness is essential to the brand.

Thus, Zuckerberg has equivocated over the years before belatedly banning Holocaust deniers, Donald Trump, anti-vaccine activists and other bad actors. Deciding what speech is privileged or allowed on its platform, Facebook will always be too slow to react, overcautious and ineffective. Zuckerberg cares only for engagement and growth. Our hearts and minds are caught in the balance.

The most frightening part of “An Ugly Truth,” the passage that got everyone in Silicon Valley talking, was the eponymous memo: Andrew “Boz” Bosworth’s 2016 “The Ugly.”

In the memo, Bosworth, Zuckerberg’s longtime deputy, writes:

“So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”

Zuckerberg and Sheryl Sandberg made Bosworth walk back his statements when employees objected, but to outsiders, the memo represents the unvarnished id of Facebook, the ugly truth. Facebook’s monopoly, its stranglehold on our social and political fabric, its growth at all costs mantra of “connection,” is not de facto good. As Bosworth acknowledges, Facebook causes suicides and allows terrorists to organize. This much power concentrated in the hands of one corporation, run by one man, is a threat to our democracy and way of life.

Critics of FDA regulation of social media will claim this is a Big Brother invasion of our personal liberties. But what is the alternative? Why would it be bad for our government to demand that Facebook account to the public for its internal calculations? Is it safe for the number of sessions, time spent and revenue growth to be the only results that matter? What about the collective mental health of the country and world?

Refusing to study the problem does not mean it does not exist. In the absence of action, we are left with a single man deciding what is right. What is the price we pay for “connection”? This is not up to Zuckerberg. The FDA should decide.

Chinese crackdown on tech giants threatens its cloud market growth

As Chinese tech companies come under regulatory scrutiny at home, concerns and pressures are escalating among investors and domestic tech companies, including China’s four big cloud providers, BATH (Baidu AI Cloud, Alibaba Cloud, Tencent Cloud and Huawei Cloud), according to an analyst report.

Despite a series of antitrust and internet-related regulatory crackdowns, the four leading cloud companies have been growing steadily. Because the current scrutiny is not particularly focused on the cloud sector, and demand for digital transformation, artificial intelligence and smart industries remains firm, China’s cloud infrastructure market grew to $6.6 billion in the second quarter of 2021, an increase of 54% over the previous year.

Nonetheless, share prices of three of them (Baidu, Alibaba and Tencent) have fallen between 18% and 30% over the last six months, which could make investors cautious about betting on Chinese tech companies.

“Chinese tech companies could always rely on their local market, especially when access to lucrative Western markets was blocked. But increasing domestic regulatory pressures over the past nine months have been a frustrating headwind for those companies that have seen their cloud businesses grow significantly over the past years,” said Canalys Vice President Alex Smith.

The four big cloud titans dominate the Chinese cloud market, accounting for 80% of total cloud spending. Alibaba Cloud maintained its frontrunner status with a 33.8% market share in the second quarter of this year. Huawei, which held 19.3% of China’s cloud market in Q2 2021, is the only one of the four that has avoided regulatory measures so far.

“Huawei is an infrastructure and device company that also happens to have developed a strong cloud business. When it comes to cloud infrastructure, we focus on the BATH companies, not just BAT. Huawei is in a strong position to drive growth, particularly in the public sector where it has a good standing and long-term relationship with the government,” Canalys Chief Analyst Matthew Ball said.

As Chinese regulators intensify scrutiny of the country’s technology companies, the crackdowns are wreaking havoc on its markets and on the shares of China-based companies.

In June, Beijing passed the Data Security Law, which began to take effect in early September to protect critical data related to national security; in late August, it issued draft guidelines on regulating the algorithms companies use, targeting ByteDance, Alibaba Group, Tencent, DiDi and others.

Should we care about the lives of our kids’ kids’ kids’ kids’…

We live during a time of live, real-time culture. Telecasts, spontaneous tweetstorms, on-the-scene streams, rapid-response analysis, war rooms, Clubhouses, vlogging. We have to interact with the here and now, feel that frisson of action. It’s a compulsion: we’re enraptured by the dangers that are terrorizing whole segments of the planet.

Just this past month, we saw Hurricane Ida strike New Orleans and the Eastern Seaboard, with some of the fiercest winds in the Gulf of Mexico since Hurricane Katrina. In Kabul, daily videos and streams show up-to-the-minute horrors of a country in the throes of chaos. Dangers are omnipresent. Intersect these pulses to the amygdala with the penchant for live coverage, and the alchemy is our modern media.

Yet, watching live events is not living, and it cannot substitute for introspection of both our own condition and the health of the world around us. The dangers that sprawl across today’s headlines and chyrons are often not the dangers we should be spending our time thinking about. That divergence between real-time risks and real risks has gotten wider over time — and arguably humanity has never been closer to the precipice of true disaster even as we are subsumed by disasters that will barely last a screen scroll on our phones.

Toby Ord, in his prophetic book The Precipice, argues that we aren’t seeing the existential risks that can realistically extinguish human life and flourishing. So he has delivered a rigorous guide and compass to help irrational humans understand what risks truly matter — and which we need to accept and move on.

Ord’s canvas is cosmic, dating from the birth of the universe to tens of billions of years into the future. Humanity is but the smallest blip in the universal timeline, and the extreme wealth and advancement of our civilization dates to only a few decades of contemporary life. Yet, what progress we have made so quickly, and what progress we are on course to continue in the millennia ahead!

All that potential could be destroyed though if certain risks today aren’t considered and ameliorated. The same human progress that has delivered so much beauty and improvement has also democratized tools for immense destruction, including destructiveness that could eliminate humanity or “merely” lead to civilizational collapse. Among Ord’s top concerns are climate change, nuclear winter, designer pandemics, artificial general intelligence and more.

There are plenty of books on existential risks. What makes The Precipice unique is its forging in the ardent rationality of the effective altruism movement, of which Ord is one of the many leaders. This is not a superlative dystopic analysis of everything that can go wrong in the coming centuries, but rather a coldly calculated comparison of risks and where society should invest its finite resources. Asteroids are horrific but, at this point, well-studied and deeply unlikely. Generalized AI is much more open to terrifying outcomes, particularly when we extend our analysis into the decades and centuries.

While the book walks through various types of risks from natural to anthropogenic to future hypothetical ones, Ord’s main goal is to get humanity to take a step back and consider how we can incorporate the lives of billions — maybe even trillions — of future beings into our calculations on risk. The decisions we make today don’t just affect ourselves or our children, but potentially thousands of generations of our descendants as well, not to mention the other beings that call Earth home. In short, he’s asking the reader for a bold leap to see the world in geological and astronomical time, rather than in real-time.

It’s a mission that’s stunning, audacious, delirious and enervating at times, and occasionally all at the same time. Ord knows that objections will come from nearly every corner, and half the book’s heft is made up of appendices and footnotes to deflect arrows from critics while further deepening the understanding of the curious reader or specialist. If you allow yourself to be submerged in the philosophy and the rigorous mental architecture required to think through long-termism and existential risks, The Precipice really can lead to an awakening of just how precarious most of our lives are, and just how interwoven with the past and future we are.

Humanity is on The Precipice, but so are individuals. Each of us is on the edge of understanding, but can we make the leap? And should we?

Here the rigor and tenacity of the argument proves a bit more elusive. There isn’t much of a transition available from our live, reality-based daily philosophy to one predicated on seeing existential risks in all the work that we do. You either observe the existential risks and attempt to mitigate them, or you don’t (or worse, you see them and give up on protecting humanity’s fate). As Ord points out, that doesn’t always mean sacrifice — some technologies can lower our existential risk, which means that we should accelerate their development as quickly as possible.

Yet, in a complicated world filled with the daily crises and trauma of people whose pained visages are etched into our smartphone displays, it’s challenging to set aside that emotional input for the deductive and reductive frameworks presented here. In this, the criticism isn’t so much of the book as of the wider field of effective altruism, which attempts to rationalize assistance even as it often effaces the single greatest compulsion for humans to help one another: the emotional connection they feel to another being. The Precipice delivers a logical ethical framework for the already converted, but only offers modest guidance to persuade anyone outside the tribe to join in its momentum.

That’s a shame, because the book’s message is indeed prophetic. Published on March 24, 2020, it discusses pandemics, gain-of-function research, and the risks of modern virology — issues that have migrated from obscure academic journals to the front pages. There really are existential risks, and we really do need to confront them.

As the last year has shown, however, even well-known and dangerous risks like pandemics are difficult for governments to build up the capacity to handle. Few humans can spend their entire lives moored to phenomena that happen once in 100,000 years, and few safety cultures can remain robust to the slow degradation of vigilance that accompanies any defense that never gets used.

The Precipice provides an important and deeply thought-provoking framework for thinking about the risks to our future. Yet its lack of engagement with the social dimension means it will do little to slake our obsession with the risks right before us. Long-termism is hard, and TikTok is always a tap away.


The Precipice: Existential Risk and the Future of Humanity by Toby Ord
Hachette, 2020, 480 pages

What’s happening in venture law in 2021?

The venture world is growing faster than ever, with more funding rounds, bigger funding rounds, and higher valuations than pretty much any point in history. That’s led to an exponential growth in the number of unicorns walking around, and has also forced regulators and venture law researchers to confront a slew of challenging problems.

The obvious one, of course, is that with so many companies staying private, retail investors are mostly blocked from participating in one of the most dynamic sectors of the global economy. That’s not all though — concerns about disclosures and board transparency, diversity among leaders as well as employees, whistleblower protections for fraud, and more have increasingly percolated in legal circles as unicorns multiply and push the boundaries of what our current regulations were designed to accomplish.

To explore where the cutting edge of venture law is today, TechCrunch invited four law professors who specialize in the field and securities more generally to talk about what they are seeing in their work this year, and argue for how they would change regulations going forward.

Our participants and their arguments:

  • Yifat Aran, an assistant law professor at Haifa University, argues in “A new coalition for ‘Open Cap Table’ presents an opportunity for equity transparency” that we need better formats for cap table data to allow for portability (see the illustrative sketch after this list). That will increase transparency for shareholders including employees, who are often left in the dark about the true nature of a startup’s capital structure.
  • Matthew Wansley, an assistant law professor at Cardozo School of Law, argues in “The next Theranos should be shortable” that private company shares of unicorns should be able to be scrutinized and traded by short sellers. Since venture investors have little incentive to sniff out frauds post-investment, short sellers could bring a valuable perspective into the market and increase capital efficiency.
  • Jennifer Fan, an assistant law professor at the University of Washington, argues in “Diversifying startups and VC power corridors” that in addition to board mandates related to diversity (which have passed in a number of states), startups need to create more incentives around diversity in all their relationships, including with their employees, with VCs, and with the LPs of their VCs. A more comprehensive and systematic approach will better open the tech world to the many folks it overlooks.
  • Finally, Alexander I. Platt, an associate law professor at the University of Kansas, argues in “The legal world needs to shed its ‘unicorniphobia’” that we should scrutinize the rush to change our securities regulations when we’ve created so much value with startups. For every Theranos, there is a Moderna, and adding more rules and disclosures may not prevent the problems of the former, and may actually stop the progress of the latter.
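
As a concrete illustration of the kind of portable format Aran has in mind, here is a minimal, hypothetical sketch; it is not the Open Cap Table Coalition’s actual schema, and the company, holders and share counts are invented.

```python
# Hypothetical, minimal "portable cap table" record: structured data that any
# shareholder or tool could read, rather than a spreadsheet locked inside the
# company. Not an actual Open Cap Table schema; names and numbers are invented.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Issuance:
    holder: str          # shareholder, fund or employee pool
    security_class: str  # e.g. "Common", "Series A Preferred", "Option"
    shares: int
    price_per_share: float

@dataclass
class CapTable:
    company: str
    as_of: str                      # ISO date of the snapshot
    issuances: list = field(default_factory=list)

    def fully_diluted(self) -> int:
        return sum(i.shares for i in self.issuances)

    def ownership(self, holder: str) -> float:
        held = sum(i.shares for i in self.issuances if i.holder == holder)
        return held / self.fully_diluted()

table = CapTable(
    company="ExampleCo",
    as_of="2021-09-01",
    issuances=[
        Issuance("Founder A", "Common", 4_000_000, 0.0001),
        Issuance("Employee pool", "Option", 1_000_000, 0.10),
        Issuance("VC Fund I", "Series A Preferred", 2_000_000, 1.50),
    ],
)
print(f"Employee pool: {table.ownership('Employee pool'):.1%} fully diluted")
print(json.dumps(asdict(table), indent=2))  # a portable, machine-readable export
```

A shared, machine-readable format along these lines would let employees and other shareholders compute their own fully diluted ownership instead of relying on whatever summary the company chooses to share.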

The once quiet research literature of venture law has been energized with the arrival of a reform-minded camp in the halls of power in DC. TechCrunch will continue to report and bring diverse perspectives on some of the most challenging legal and regulatory issues facing the tech and startup world.

The legal world needs to shed its ‘unicorniphobia’

Once upon a time, a successful startup that reached a certain maturity would “go public” — selling securities to ordinary investors, perhaps listing on a national stock exchange and taking on the privileges and obligations of a “public company” under federal securities regulations.

Times have changed. Successful startups today are now able to grow quite large without public capital markets. Not so long ago, a private company valued at more than $1 billion was rare enough to warrant the nickname “unicorn.” Now, over 800 companies qualify.

Legal scholars are worried. A recent wave of academic papers makes the case that because unicorns are not constrained by the institutional and regulatory forces that keep public companies in line, they are especially prone to risky and illegal activities that harm investors, employees, consumers and society at large.

The proposed solution, naturally, is to bring these forces to bear on unicorns. Specifically, scholars are proposing mandatory IPOs, significantly expanded disclosure obligations, regulatory changes designed to dramatically increase secondary-market trading of unicorn shares, expanded whistleblower protections for unicorn employees and stepped-up Securities and Exchange Commission enforcement against large private companies.

This position has also been gaining traction outside the ivory tower. One leader of this intellectual movement was recently appointed director of the SEC’s Division of Corporation Finance. Big changes may be coming soon.

In a new paper titled “Unicorniphobia” (forthcoming in the Harvard Business Law Review), I challenge this suddenly dominant view that unicorns are especially dangerous and should be “tamed” with bold new securities regulations. I raise three main objections.

First, pushing unicorns toward public company status may not help and may actually make problems worse. According to the vast academic literature on “market myopia” or “stock-market short-termism,” it is public company managers who have especially dangerous incentives to take on excessive leverage and risk; to underinvest in compliance; to sacrifice product quality and safety; to slash R&D and other forms of corporate investment; to degrade the environment; and to engage in accounting fraud and other corporate misconduct, among many other things.

The dangerous incentives that produce this parade of horrible outcomes allegedly flow from a constellation of market, institutional, cultural and regulatory features that operate distinctly on public companies, not unicorns, including executive compensation linked to short-term stock performance, pressure to meet quarterly earnings projections (aka “quarterly capitalism”) and the persistent threat (and occasional reality) of a hedge fund activist attack. To the extent this literature is correct, the proposed unicorn reforms would merely amount to forcing companies to shed one set of purportedly dangerous incentives for another.

Second, proponents of new unicorn regulations rely on rhetorical sleight of hand. To show that unicorns pose unique dangers, these advocates rely heavily on anecdotes and case studies of well-known “bad” unicorns, especially the cases of Uber and Theranos, in their papers. Yet the authors make few or no attempts to show how their proposed reforms would have mitigated any significant harm caused by either of these companies — a highly questionable proposition, as I show in great detail in my paper.

Take Theranos, whose founder and CEO Elizabeth Holmes is currently facing trial on charges of criminal fraud and, if convicted, faces a possible sentence of up to 20 years in federal prison. Would any of the proposed securities regulation reforms have plausibly made a positive difference in this case? Allegations that Holmes and others lied extensively to the media, doctors, patients, regulators, investors, business partners and even their own board of directors make it hard to believe they would have been any more truthful had they been forced to make some additional securities disclosures.

As to the proposal to enhance trading of unicorn shares in order to incentivize short sellers and market analysts to sniff out potential frauds, the fact is that these market players already had the ability and incentive to make these plays against Theranos indirectly by taking a short position in its public company partners like Walgreens, or a long position in its public company competitors, like LabCorp and Quest Diagnostics. They failed to do so. Proposals to expand whistleblower protections and SEC enforcement in this domain seem equally unlikely to have made any difference.

Finally, the proposed reforms risk doing more harm than good. Successful unicorns today benefit not only their investors and managers, but also their employees, consumers and society at large. And they do so precisely because of the features of current regulations that are now up on the regulatory chopping block. Altering this regime as these papers propose would put these benefits in jeopardy and thus may do more harm than good.

Consider one company that recently generated an enormous social benefit: Moderna. Before going public in December 2018, Moderna was a secretive, controversial, overhyped biotech unicorn without a single product on the market (or even in Phase 3 clinical trials), barely any scientific peer-reviewed publications, a history of turnover among high-level scientific personnel, a CEO with a penchant for over-the-top claims about the company’s potential and a toxic work culture.

Had these proposed new securities regulations been in place during Moderna’s “corporate adolescence,” it’s quite plausible that they would have significantly disrupted the company’s development. In fact, Moderna might not have been in a position to develop its highly effective COVID-19 vaccine as rapidly as it did. Our response to the coronavirus pandemic has benefited, in part, from our current approach to securities regulation of unicorns.

The lessons from Moderna also bear on efforts to use securities regulation to combat climate change. According to a recent report, 43 unicorns are operating in “climate tech,” developing products and services designed to mitigate or adapt to global climate change. These companies are risky. Their technologies may fail; most probably will. Some are challenging entrenched incumbents that have powerful incentives to do whatever is necessary to resist the competitive threat. Some may be trying to change well-established consumer preferences and behaviors. And they all face an uncertain regulatory environment, varying widely across and within jurisdictions.

Like other unicorns, they may have highly empowered founder CEOs who are demanding, irresponsible or messianic. They may also have core investors who do not fully understand the science underlying their products, are denied access to basic information and who press the firm to take risks to achieve astronomical results.

And yet, one or more of these companies may represent an important resource for our society in dealing with disruptions from climate change. As policymakers and scholars work out how securities regulation can be used to address climate change, they should not overlook the potentially important role unicorn regulation can play.

20 years later, unchecked data collection is part of 9/11’s legacy

Almost every American adult remembers, in vivid detail, where they were the morning of September 11, 2001. I was on the second floor of the West Wing of the White House, at a National Economic Council Staff meeting — and I will never forget the moment the Secret Service agent abruptly entered the room, shouting: “You must leave now. Ladies, take off your high heels and go!”

Just an hour before, as the National Economic Council White House technology adviser, I was briefing the deputy chief of staff on final details of an Oval Office meeting with the president, scheduled for September 13. Finally, we were ready to get the president’s sign-off to send a federal privacy bill to Capitol Hill — effectively a federal version of the California Privacy Rights Act, but stronger. The legislation would put guardrails around citizens’ data — requiring opt-in consent for their information to be shared, governing how their data could be collected and how it would be used.

But that morning, the world changed. We evacuated the White House and the day unfolded with tragedy after tragedy sending shockwaves through our nation and the world. To be in D.C. that day was to witness and personally experience what felt like the entire spectrum of human emotion: grief, solidarity, disbelief, strength, resolve, urgency … hope.

Much has been written about September 11, but I want to spend a moment reflecting on the day after.

When the National Economic Council staff came back into the office on September 12, I will never forget what Larry Lindsey, our boss at the time, told us: “I would understand it if some of you don’t feel comfortable being here. We are all targets. And I won’t appeal to your patriotism or faith. But I will — as we are all economists in this room — appeal to your rational self-interest. If we back away now, others will follow, and who will be there to defend the pillars of our society? We are holding the line here today. Act in a way that will make this country proud. And don’t abandon your commitment to freedom in the name of safety and security.”

There is so much to be proud of about how the country pulled together and how our government responded to the tragic events of September 11. However, as a professional in the cybersecurity and data privacy field, I also reflect on Larry’s advice and on many of the critical lessons learned in the years that followed — especially when it comes to defending the pillars of our society.

Even though our collective memories of that day still feel fresh, 20 years have passed, and we now understand the vital role that data played in the months leading up to the 9/11 terrorist attacks. Unfortunately, by holding intelligence data too closely in disparate locations, we failed to connect the dots that could have saved thousands of lives. These data silos obscured the patterns that would have been clear if only a framework had been in place to share information securely.

So, we told ourselves, “Never again,” and government officials set out to increase the amount of intelligence they could gather — without thinking through significant consequences for not only our civil liberties but also the security of our data. So, the Patriot Act came into effect, with 20 years of surveillance requests from intelligence and law enforcement agencies crammed into the bill. Having been in the room for the Patriot Act negotiations with the Department of Justice, I can confidently say that, while the intentions may have been understandable — to prevent another terrorist attack and protect our people — the downstream negative consequences were sweeping and undeniable.

Domestic wiretapping and mass surveillance became the norm, chipping away at personal privacy, data security and public trust. This level of surveillance set a dangerous precedent for data privacy, meanwhile yielding marginal results in the fight against terrorism.

Unfortunately, the federal privacy bill that we had hoped to bring to Capitol Hill the very week of 9/11 — the bill that would have solidified individual privacy protections — was mothballed.

Over the subsequent years, it became easier and cheaper to collect and store massive amounts of surveillance data. As a result, tech and cloud giants quickly scaled up and dominated the internet. As more data was collected (both by the public and the private sectors), more and more people gained visibility into individuals’ private data — but no meaningful privacy protections were put in place to accompany that expanded access.

Now, 20 years later, we find ourselves with a glut of unfettered data collection and access, with behemoth tech companies and IoT devices collecting data points on our movements, conversations, friends, families and bodies. Massive and costly data leaks — whether from ransomware or simply misconfiguring a cloud bucket — have become so common that they barely make the front page. As a result, public trust has eroded. While privacy should be a human right, it’s not one that’s being protected — and everyone knows it.

This is evident in the humanitarian crisis we have seen in Afghanistan. Just one example: Tragically, the Taliban have seized U.S. military devices that contain biometric data on Afghan citizens who supported coalition forces — data that would make it easy for the Taliban to identify and track down those individuals and their families. This is a worst-case scenario of sensitive, private data falling into the wrong hands, and we did not do enough to protect it.

This is unacceptable. Twenty years later, we are once again telling ourselves, “Never again.” 9/11 should have been a reckoning of how we manage, share and safeguard intelligence data, but we still have not gotten it right. And in both cases — in 2001 and 2021 — the way we manage data has a life-or-death impact.

This is not to say we aren’t making progress: The White House and U.S. Department of Defense have turned a spotlight on cybersecurity and Zero Trust data protection this year, with an executive order to spur action toward fortifying federal data systems. The good news is that we have the technology we need to safeguard this sensitive data while still making it shareable. In addition, we can put contingency plans in place to prevent data from falling into the wrong hands. But, unfortunately, we just aren’t moving fast enough — and the slower we solve this problem of secure data management, the more innocent lives will be lost along the way.

Looking ahead to the next 20 years, we have an opportunity to rebuild trust and transform the way we manage data privacy. First and foremost, we have to put some guardrails in place. We need a privacy framework that gives individuals autonomy over their own data by default.

This, of course, means that public- and private-sector organizations have to do the technical, behind-the-scenes work to make this data ownership and control possible, tying identity to data and granting ownership back to the individual. This is not a quick or simple fix, but it’s achievable — and necessary — to protect our people, whether U.S. citizens, residents or allies worldwide.

To accelerate the adoption of such data protection, we need an ecosystem of free, accessible and open source solutions that are interoperable and flexible. By layering data protection and privacy in with existing processes and solutions, government entities can securely collect and aggregate data in a way that reveals the big picture without compromising individuals’ privacy. We have these capabilities today, and now is the time to leverage them.
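
What “tying identity to data and granting ownership back to the individual” could look like in practice is easiest to see in miniature. The sketch below is a minimal, assumed illustration of consent-based access control; every name, policy and field is invented, and it describes no specific government or vendor system.

```python
# Minimal sketch of consent-based data access: each record carries a policy set
# by its owner, and a request is honored only if the requester and the stated
# purpose match that policy. All identifiers and policies here are invented.
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    allowed_purposes: set = field(default_factory=set)    # e.g. {"public-health-research"}
    allowed_requesters: set = field(default_factory=set)  # e.g. {"health-agency.example.gov"}

@dataclass
class Record:
    owner_id: str
    payload: dict
    policy: ConsentPolicy

def request_access(record: Record, requester: str, purpose: str) -> dict:
    """Release the payload only if the owner's policy permits this requester and purpose."""
    if requester in record.policy.allowed_requesters and purpose in record.policy.allowed_purposes:
        return record.payload
    raise PermissionError(f"{requester} is not authorized for purpose '{purpose}'")

record = Record(
    owner_id="citizen-123",
    payload={"vaccination_status": "complete"},
    policy=ConsentPolicy({"public-health-research"}, {"health-agency.example.gov"}),
)
print(request_access(record, "health-agency.example.gov", "public-health-research"))  # allowed
# request_access(record, "adtech.example.com", "marketing")  # would raise PermissionError
```

The design point is that the default answer is “no”: data flows only where the individual has affirmatively consented, which is the opt-in posture the shelved 2001 bill envisioned.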

Because the truth is, with the sheer volume of data that’s being gathered and stored, there are far more opportunities for American data to fall into the wrong hands. The devices seized by the Taliban are just a tiny fraction of the data that’s currently at stake. As we’ve seen so far this year, nation-state cyberattacks are escalating. This threat to human life is not going away.

Larry’s words from September 12, 2001, still resonate: If we back away now, who will be there to defend the pillars of our society? It’s up to us — public- and private-sector technology leaders — to protect and defend the privacy of our people without compromising their freedoms.

It’s not too late for us to rebuild public trust, starting with data. But, 20 years from now, will we look back on this decade as a turning point in protecting and upholding individuals’ right to privacy, or will we still be saying, “Never again,” again and again?

UK dials up the spin on data reform, claiming ‘simplified’ rules will drive ‘responsible’ data sharing

The U.K. government has announced a consultation on plans to shake up the national data protection regime, as it looks at how to diverge from European Union rules following Brexit.

It’s also a year since the U.K. published a national data strategy in which it said it wanted pandemic levels of data sharing to become Britain’s new normal.

The Department for Digital, Culture, Media and Sport (DCMS) has today trailed an incoming reform of the Information Commissioner’s Office (ICO) — saying it wants to broaden the ICO’s remit to “champion sectors and businesses that are using personal data in new, innovative and responsible ways to benefit people’s lives”; and promising “simplified” rules to encourage the use of data for research which “benefits people’s lives”, such as in the field of healthcare.

It also wants a new structure for the regulator — including the creation of an independent board and chief executive for the ICO, to mirror the governance structures of other regulators such as the Competition and Markets Authority, Financial Conduct Authority and Ofcom.

Additionally, it said the data reform consultation will consider how the new regime can help mitigate the risks around algorithmic bias — something the EU is already moving to legislate on, setting out a risk-based proposal for regulating applications of AI back in April.

Which means the U.K. risks being left lagging if it’s only going to concern itself with a narrow focus on “bias mitigation”, rather than considering the wider sweep of how AI is intersecting with and influencing its citizens’ lives.

In a press release announcing the consultation, DCMS highlights an artificial intelligence partnership involving Moorfields Eye Hospital and the University College London Institute of Ophthalmology, which kicked off back in 2016, as an example of the kinds of beneficial data sharing it wants to encourage. Last year the researchers reported that their AI had been able to predict the development of wet age-related macular degeneration more accurately than clinicians.

The partnership also involved (Google-owned) DeepMind and now Google Health — although the government’s PR doesn’t make mention of the tech giant’s involvement. It’s an interesting omission, given that DeepMind’s name is also attached to a notorious U.K. patient data-sharing scandal, which saw another London-based NHS Trust (the Royal Free) sanctioned by the ICO, in 2017, for improperly sharing patient data with the Google-owned company during the development phase of a clinician support app (which Google is now in the process of discontinuing).

DCMS may be keen to avoid spelling out that its goal for the data reforms — aka to “remove unnecessary barriers to responsible data use” — could end up making it easier for commercial entities like Google to get their hands on U.K. citizens’ medical records.

The sizeable public backlash over the most recent government attempt to requisition NHS users’ medical records — for vaguely defined “research” purposes (aka the “General Practice Data for Planning and Research”, or GPDPR, scheme) — suggests that a government-enabled big-health-data-free-for-all might not be so popular with U.K. voters.

“The government’s data reforms will provide clarity around the rules for the use of personal data for research purposes, laying the groundwork for more scientific and medical breakthroughs,” is how DCMS’ PR skirts the sensitive health data sharing topic.

Elsewhere there’s talk of “reinforc[ing] the responsibility of businesses to keep personal information safe, while empowering them to grow and innovate” — so that sounds like a yes to data security but what about individual privacy and control over what happens to your information?

The government seems to be saying that will depend on other aims — principally economic interests attached to the U.K.’s ability to conduct data-driven research or secure trade deals with other countries that don’t have the same (current) high U.K. standards of data protection.

There are some purely populist flourishes here too — with DCMS couching its ambition for a data regime “based on common sense, not box ticking” — and flagging up plans to beef up penalties for nuisance calls and text messages. Because, sure, who doesn’t like the sound of a crackdown on spam?

Except spam text messages and nuisance calls are a pretty quaint concern to zero in on in an era of apps and data-driven, democracy-disrupting mass surveillance — which was something the outgoing information commissioner raised as a major issue of concern during her tenure at the ICO.

The same populist anti-spam messaging has already been deployed by ministers to attack the need to obtain internet users’ consent for dropping tracking cookies — which the digital minister Oliver Dowden recently suggested he wants to do away with — for all but “high risk” purposes.

Having a system of rights wrapping people’s data that gives them a say over (and a stake in) how it can be used appears to be being reframed in the government’s messaging as irresponsible or even non-patriotic — with DCMS pushing the notion that such rights stand in the way of more important economic or highly generalized “social” goals.

Not that it has presented any evidence for that — or even that the U.K.’s current data protection regime got in the way of (the very ample) data sharing during COVID-19… While negative uses of people’s information are being condensed in DCMS’ messaging to the narrowest possible definition — of spam that’s visible to an individual — never mind how that person got targeted with the nuisance calls/spam texts in the first place.

The government is taking its customary “cake and eat it” approach to spinning its reform plan — claiming it will both “protect” people’s data while also trumpeting the importance of making it really easy for citizens’ information to be handed off to anyone who wants it, so long as they can claim they’re doing some kind of “innovation”, while also larding its PR with canned quotes dubbing the plan “bold” and “ambitious”.

So while DCMS’ announcement says the reform will “maintain” the U.K.’s (currently) world-leading data protection standards, it directly rows back — saying the new regime will (merely) “build on” a few broad-brush “key elements” of the current rules (specifically it says it will keep “principles around data processing, people’s data rights and mechanisms for supervision and enforcement”).

Clearly the devil will be in the detail of the proposals which are due to be published tomorrow morning. So expect more analysis to debunk the spin soon.

But in one specific trailed change DCMS says it wants to move away from a “one-size-fits-all” approach to data protection compliance — and “allow organisations to demonstrate compliance in ways more appropriate to their circumstances, while still protecting citizens’ personal data to a high standard”.

That implies that smaller data-mining operations — DCMS’ PR uses the example of a hairdresser’s but plenty of startups can employ fewer staff than the average barber’s shop — may expect to get a pass to ignore those ‘high standards’ in the future.

Which suggests the U.K.’s “high standards” may, under Dowden’s watch, end up resembling more of a Swiss Cheese…

Data protection is a “how to, not a don’t do”…

The man who is likely to become the U.K.’s next information commissioner, New Zealand’s privacy commissioner John Edwards, was taking questions from a parliamentary committee earlier today, as MPs considered whether to support his appointment to the role.

If he’s confirmed in the job, Edwards will be responsible for implementing whatever new data regime the government cooks up.

Under questioning, he rejected the notion that the U.K.’s current data protection regime presents a barrier to data sharing — arguing that laws like GDPR should rather be seen as a “how to” and an “enabler” for innovation.

“I would take issue with the dichotomy that you presented [about privacy vs data-sharing],” he told the committee chair. “I don’t believe that policymakers and businesses and governments are faced with a choice of share or keep faith with data protection. Data protection laws and privacy laws would not be necessary if it wasn’t necessary to share information. These are two sides of the same coin.

“The UK DPA [data protection act] and UK GDPR they are a ‘how to’ — not a ‘don’t do’. And I think the UK and many jurisdictions have really finally learned that lesson through the COVID-19 crisis. It has been absolutely necessary to have good quality information available, minute by minute. And to move across different organizations where it needs to go, without friction. And there are times when data protection laws and privacy laws introduce friction and I think that what you’ve seen in the UK is that when it needs to things can happen quickly.”

He also suggested that plenty of economic gains could be achieved for the U.K. with some minor tweaks to current rules, rather than a more radical reboot being necessary. (Though clearly setting the rules won’t be up to him; his job will be enforcing whatever new regime is decided.)

“If we can, in the administration of a law which at the moment looks very much like the UK GDPR, that gives great latitude for different regulatory approaches — if I can turn that dial just a couple of points that can make the difference of billions of pounds to the UK economy and thousands of jobs so we don’t need to be throwing out the statute book and starting again — there is plenty of scope to be making improvements under the current regime,” he told MPs. “Let alone when we start with a fresh sheet of paper if that’s what the government chooses to do.”

TechCrunch asked another Edwards (no relation) — Newcastle University’s Lilian Edwards, professor of law, innovation and society — for her thoughts on the government’s direction of travel, as signalled by DCMS’ pre-proposal-publication spin, and she expressed similar concerns about the logic driving the government to argue it needs to rip up the existing standards.

“The entire scheme of data protection is to balance fundamental rights with the free flow of data. Economic concerns have never been ignored, and the current scheme, which we’ve had in essence since 1998, has struck a good balance. The great things we did with data during COVID-19 were done completely legally — and with no great difficulty under the existing rules — so that isn’t a reason to change them,” she told us.

She also took issue with the plan to reshape the ICO “as a quango whose primary job is to ‘drive economic growth’ ” — pointing out that DCMS’ PR fails to include any mention of privacy or fundamental rights, and arguing that “creating an entirely new regulator isn’t likely to do much for the ‘public trust’ that’s seen as declining in almost every poll.”

She also suggested the government is glossing over the real economic damage that would hit the U.K. if the EU decides its “reformed” standards are no longer essentially equivalent to the bloc’s. “[It’s] hard to see much concern for adequacy here; which will, for sure, be reviewed, to our detriment — prejudicing 43% of our trade for a few low value trade deals and some hopeful sell offs of NHS data (again, likely to take a wrecking ball to trust judging by the GPDPR scandal).”

She described the goal of regulating algorithmic bias as “applaudable” — but also flagged the risk of the U.K. falling behind other jurisdictions which are taking a broader look at how to regulate artificial intelligence.

Per DCMS’ press release, the government seems to be intending for an existing advisory body, called the Centre for Data Ethics and Innovation (CDEI), to have a key role in supporting its policymaking in this area — saying that the body will focus on “enabling trustworthy use of data and AI in the real-world”. However it has still not appointed a new CDEI chair to replace Roger Taylor — with only an interim chair appointment (and some new advisors) announced today.

“The world has moved on since CDEI’s work in this area,” argued Edwards. “We realise now that regulating the harmful effects of AI has to be considered in the round with other regulatory tools not just data protection. The proposed EU AI Regulation is not without flaw but goes far further than data protection in mandating better quality training sets, and more transparent systems to be built from scratch. If the UK is serious about regulating it has to look at the global models being floated but right now it looks like its main concerns are insular, short-sighted and populist.”

Patient data privacy advocacy group MedConfidential, which has frequently locked horns with the government over its approach to data protection, also queried DCMS’ continued attachment to the CDEI for shaping policymaking in such a crucial area — pointing to last year’s biased algorithm exam grading scandal, which happened under Taylor’s watch.

(NB: Taylor was also the Ofqual chair, and his resignation from that post in December cited a “difficult summer”, even as his departure from the CDEI leaves an awkward hole now… )

“The culture and leadership of CDEI led to the A-Levels algorithm, why should anyone in government have any confidence in what they say next?” said MedConfidential’s Sam Smith.

UK offers cash for CSAM detection tech targeted at E2E encryption

The U.K. government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual exploitation material (CSAM) that can be bolted on to end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584,000) to five organizations (£85,000/$117,000 each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within E2E-encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of E2E encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp also already uses E2E encryption so that platform is already a clear target for whatever “safety” technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use E2E encryption.

So there is potential for very widespread application of any “child safety tech” developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high-level pressure on the tech sector on this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in a Tory-friendly newspaper, The Telegraph, she trails a meeting she’ll be chairing today where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on user’s devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a “privacy-focused” company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states that might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into E2E encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support “middleground” solutions — denying the government is trying to encourage technologists to come up with ways to backdoor E2E encryption.

In recent years in the U.K. GCHQ has also floated the controversial idea of a so-called “ghost protocol” — that would allow for state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of “middleground” technologies that are able to scan E2E-encrypted content for specifically illegal stuff.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of E2E encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”
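
Of the techniques the guidance names, hash-based detection is the most established: content is reduced to a digest and compared against a list of previously identified material. The sketch below shows only the simplest cryptographic-hash variant, with placeholder digests; real deployments typically use perceptual hashes supplied by child-safety organizations so that near-duplicates also match, and none of this answers the harder question of doing such matching inside an E2E-encrypted service without weakening it.

```python
# Illustration only: exact-match, hash-based detection against a known-hash list.
# The digest below is a placeholder, not a real hash list; production systems
# generally rely on perceptual hashes rather than SHA-256.
import hashlib

KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",  # placeholder
}

def sha256_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_material(path: str) -> bool:
    """Return True if the file's digest appears on the known-material list."""
    return sha256_file(path) in KNOWN_HASHES
```

Exact hashing only catches unmodified copies of previously identified files, which is why the guidance also invites AI-based approaches aimed at new material.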

General information about the Challenge — which is open to applicants based anywhere, not just in the U.K. — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that, by keeping up the pressure on the tech sector, platform giants will develop this stuff themselves, as Apple has been.

The Challenge is just the latest U.K. government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for E2E encryption to be outlawed, as they frequently have in the past.

That said, talk of “preventing” the use of E2E encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety Bill, published earlier this year — have yet to go through parliamentary scrutiny. But one change is already baked in: the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms must prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the U.K.’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.

The next Theranos should be shortable

A jury will soon decide whether former Theranos CEO Elizabeth Holmes is guilty of a federal crime. But the deeper public policy questions that Theranos raised remain unanswered. How did a startup built on a technology that never worked grow to a valuation of $9 billion? How was the company able to conceal its fraud for so long? And what, if anything, can be done to prevent the next Theranos before it grows large enough to cause real harm—and burns capital that could have been invested in genuine innovations? I explore these questions in a forthcoming paper in the Indiana Law Journal titled Taming Unicorns.

While Theranos was publicly exposed in October 2015 by Wall Street Journal investigative reporter John Carreyrou, insiders knew that it was committing fraud many years earlier. For example, in 2006, Holmes gave a demo of an early prototype blood test to Novartis executives and faked the results when the device malfunctioned. When Theranos’ CFO confronted Holmes about the incident, she fired him. In 2008—roughly seven years before Carreyrou’s exposé—Theranos board members learned that Holmes had misled them about the company’s finances and the state of its technology.

Over time, a remarkable number of people, both inside and outside the company, began to suspect that Theranos was a fraud. An employee at Walgreens tasked with vetting Theranos for a potential partnership wrote in a report that the company was overselling its technology. Physicians in Arizona grew skeptical of the results their patients were receiving, and a pathologist in Missouri wrote a blog post questioning Theranos’ claims about how accurate its devices were. Stanford Professor John Ioannidis published an article in JAMA raising more doubts.

Meanwhile, rumors had spread in the VC community. Bill Maris of Google Ventures (since rebranded as GV) claimed that his fund passed on investing in Theranos in 2013. According to Maris, the firm had sent an employee to take a Theranos blood test at Walgreens. The employee was asked to give more than the single drop of blood that Theranos claimed its devices needed. After he refused a conventional venous blood draw, he was told to come back to give more blood.

So why did none of these doubts slow Theranos’ fundraising? Part of the answer is that it was a private company, and it’s nearly impossible to bet against private companies.

Until the last decade, most startups that grew to become valuable businesses chose to become public companies. Late-stage startups with reported valuations over $1 billion used to be so rare that VC Aileen Lee dubbed them “unicorns” in a 2013 article in TechCrunch. Back then, there were only 39 startups claiming billion-dollar valuations. By 2021, despite the surge in companies going public through SPACs, the number of unicorns had passed 800.

The rise of unicorns has been accompanied by corporate misconduct scandals. Of course, public companies commit misconduct too. Research hasn’t yet established whether unicorns are systematically more prone to commit misconduct than comparable public companies. However, we do know that the opportunity to profit from information about a company by trading its securities creates incentives to uncover misconduct. Since the securities of private companies aren’t widely traded, it’s easier for private company executives to conceal misconduct.

Consider the electric truck company Nikola, formerly a unicorn. In 2020, Nikola went public through a SPAC. Once it was public, short seller Nathan Anderson decided to investigate and ultimately issued a report alleging a pattern of corporate misconduct. He showed that a video Nikola had produced of its prototype truck traveling at high speed was staged — Nikola had towed the truck to the top of a hill and filmed it rolling downhill in neutral. After Anderson released his report, the SEC and federal prosecutors launched investigations into whether Nikola misled investors. Its stock price lost more than half of its value. In 2021, Nikola’s founder and former CEO Trevor Milton was charged by the SEC and indicted by a federal grand jury. Nikola wouldn’t have been exposed so soon if it had stayed private.
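The incentive at work here is easy to quantify. A short seller borrows shares, sells them, and buys them back more cheaply after the misconduct comes to light. The sketch below runs that arithmetic with entirely hypothetical numbers; they are not Nikola’s actual figures.

```python
# Hypothetical short-sale payoff. All numbers are illustrative and are not
# Nikola's actual figures.
shares_shorted = 10_000
entry_price = 40.00     # price at which the borrowed shares are sold
exit_price = 18.00      # price after the misconduct report (a ~55% decline)
borrow_fee_rate = 0.02  # assumed 2% cost of borrowing the shares for the period

gross_profit = shares_shorted * (entry_price - exit_price)
borrow_cost = shares_shorted * entry_price * borrow_fee_rate
net_profit = gross_profit - borrow_cost

print(f"Gross profit:        ${gross_profit:,.0f}")  # $220,000
print(f"Net of borrow costs: ${net_profit:,.0f}")    # $212,000
```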

Securities regulation restricts both the sale and resale of private company stock in the name of investor protection. Startups usually attach a contractual right of first refusal to their shares, which effectively requires employees to get the company’s permission to sell. Many late-stage startups practice “selective liquidity”: allowing key employees to cash out in private placements, while preventing a robust market from emerging. Consequently, those who have information about private company misconduct have little incentive to publicize it—even though the Supreme Court has held that an investor who trades on information shared for the purpose of exposing fraud can’t be convicted of insider trading.

VCs might seem well-positioned to police unicorn misconduct. But their asymmetric risk preferences undermine their incentive to expose wrongdoing. VCs invest their funds in a portfolio of startups and expect that most bets will generate modest or negative returns, and only a small number will grow exponentially. The outsize growth of the few successful startups will offset the losses of the balance of the portfolio. For VCs, the difference between a startup that implodes in scandal and the many startups that fail to develop a product or find a market is insignificant.
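A back-of-the-envelope portfolio model, using entirely hypothetical figures, shows why: whether a losing bet returns nothing because of fraud or because it simply never found a market, the fund’s overall multiple barely moves.

```python
# Hypothetical 20-company fund in which one breakout winner carries the
# portfolio. All figures are illustrative.
INVESTMENT_PER_STARTUP = 5.0  # $5M into each of 20 startups
N_STARTUPS = 20

def fund_multiple(return_multiples):
    """Total cash returned divided by total cash invested."""
    invested = INVESTMENT_PER_STARTUP * N_STARTUPS
    returned = sum(INVESTMENT_PER_STARTUP * r for r in return_multiples)
    return returned / invested

# Scenario A: one 30x winner, a few modest exits, one acquihire returning
# 0.2x, and fourteen startups that quietly fail to find a market.
ordinary_failures = [30, 3, 2, 1, 1, 0.2] + [0] * 14

# Scenario B: identical, except the 0.2x acquihire is instead a startup
# that imploded in a fraud scandal and returned nothing.
with_scandal = [30, 3, 2, 1, 1, 0.0] + [0] * 14

print(f"{fund_multiple(ordinary_failures):.2f}x")  # 1.86x
print(f"{fund_multiple(with_scandal):.2f}x")       # 1.85x: barely distinguishable
```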

Venture investing is an auction with a winner’s curse problem. Startups pitch to many VC firms in each fundraising round, but they only need to accept funding from one bidder. In public capital markets, if most investors decide that a company is fraudulent or excessively risky, its stock price will decline. In VC markets, however, if most investors decide that a startup is fraudulent, the startup can still raise funds from a credulous contrarian. VCs who pass won’t share their negative assessment with the public because they want to maintain a founder-friendly reputation. Maris only told the press that GV had passed on Theranos after Carreyrou’s article was published.

Congress and the SEC could strengthen deterrence of unicorn misconduct by creating a market for trading private company securities, in three steps.

First, the regulations constraining the secondary trading of private company securities should be liberalized. The SEC should eliminate Rule 144’s holding period for resales to accredited investors—the individual and institutional investors that the SEC deems sophisticated and most able to bear risk. Congress should eliminate section 12(g)’s requirement that effectively forces companies to go public if they acquire 2,000 record shareholders who are accredited investors—a rule that leads private companies to limit trading.

Second, the SEC should attach a regulatory most favored nation (MFN) clause to all securities sold through the safe harbors commonly used for private placements. An MFN clause would require that, if a company allows any of its securities to be resold, it must allow all its securities to be resold, as long as the resale is otherwise legal. A regulatory MFN clause would ban the practice of selective liquidity and nudge companies to allow their shares to be traded.

Third, the SEC should require that all private companies that let their securities be widely traded make limited public disclosures about their operations and finances. A limited disclosure mandate would protect investors by ensuring they had basic information about the companies in which they could invest, without saddling unicorns with the costly disclosure obligations placed on public companies.

The net effect of these reforms would be to create a robust market for trading unicorn stock among accredited investors. Most large, private companies would likely decide to allow their shares to be traded. Short sellers, analysts, and financial journalists would be attracted to the markets. Their investigations would strengthen deterrence of unicorn misconduct. The limited disclosure mandate, combined with the requirement that investors be accredited, would protect investors.

When large, private companies commit misconduct, the natural response is to increase the penalty for the underlying misconduct, not to interfere with the tradability of the company’s securities. But the problem isn’t lax penalties. Holmes is facing 20 years in prison—a punishment more brutal than anyone deserves. The problem is that penalties only work when wrongdoers expect to be caught. Trading creates incentives to expose misconduct faster.

ProtonMail logged IP address of French activist after order by Swiss authorities

ProtonMail, a hosted email service with a focus on end-to-end encrypted communications, has been facing criticism after a police report showed that French authorities managed to obtain the IP address of a French activist who was using the online service. The company has communicated widely about the incident, stating that it doesn’t log IP addresses by default and it only complies with local regulation — in that case Swiss law. While ProtonMail didn’t cooperate with French authorities, French police sent a request to Swiss police via Europol to force the company to obtain the IP address of one of its users.

For the past year, a group of people have taken over a handful of commercial premises and apartments near Place Sainte Marthe in Paris. They want to fight against gentrification, real estate speculation, Airbnb and high-end restaurants. While it started as a local conflict, it quickly became a symbolic campaign. They attracted newspaper headlines when they started occupying premises rented by Le Petit Cambodge — a restaurant that was targeted by the November 13th, 2015 terrorist attacks in Paris.

On September 1st, the group published an article on Paris-luttes.info, an anticapitalist news website, summing up various police investigations and legal cases against some members of the group. According to their account, French police sent a Europol request to ProtonMail in order to uncover the identity of the person who created a ProtonMail account — the group was using this email address to communicate. The address has also been shared on various anarchist websites.

The next day, @MuArF on Twitter shared an extract of a police report detailing ProtonMail’s reply. According to @MuArF, the police report is related to the ongoing investigation against the group that occupied various premises around Place Sainte-Marthe. It says that French police received a message via Europol containing details about the ProtonMail account.

Here’s what the report says:

  • The company PROTONMAIL informs us that the email address has been created on … The IP address linked to the account is the following: …
  • The device used is a … device identified with the number …
  • The data transmitted by the company is limited to the above, due to the privacy policy of PROTONMAIL TECHNOLOGIES.

ProtonMail’s founder and CEO Andy Yen reacted to the police report on Twitter without mentioning the specific circumstances of that case in particular. “Proton must comply with Swiss law. As soon as a crime is committed, privacy protections can be suspended and we’re required by Swiss law to answer requests from Swiss authorities,” he wrote.

In particular, Andy Yen wants to make it clear that his company didn’t cooperate with French police or Europol. It seems that Europol acted as the communication channel between French authorities and Swiss authorities. At some point, Swiss authorities took over the case and sent a request to ProtonMail directly. The company refers to such requests as “foreign requests approved by Swiss authorities” in its transparency report.

TechCrunch contacted ProtonMail founder and CEO Andy Yen with questions about the case.

One key question is exactly when the targeted account holder was notified that their data had been requested by Swiss authorities since — per ProtonMail — notification is obligatory under Swiss law.

However, Yen told us that — “for privacy and legal reasons” — he is unable to comment on specific details of the case or provide “non-public information on active investigations”, adding: “You would have to direct these inquiries to the Swiss authorities.”

At the same time, he did point us to this public page, where ProtonMail provides information for law enforcement authorities seeking data about users of its end-to-end encrypted email service, including setting out a “ProtonMail user notification policy”.

Here the company reiterates that Swiss law “requires a user to be notified if a third party makes a request for their private data and such data is to be used in a criminal proceeding” — however it also notes that “in certain circumstances” a notification “can be delayed”.

Per this policy, Proton says a notification can be delayed if:

  • there is a temporary prohibition on notice by the Swiss legal process itself, by Swiss court order or “applicable Swiss law”; or
  • “based on information supplied by law enforcement, we, in our absolute discretion, believe that providing notice could create a risk of injury, death, or irreparable damage to an identifiable individual or group of individuals.”

“As a general rule though, targeted users will eventually be informed and afforded the opportunity to object to the data request, either by ProtonMail or by Swiss authorities,” the policy adds.

So, in this specific case, it looks likely that ProtonMail was either under a legal order to delay notifying the account holder (given what appears to be up to eight months between the logging being instigated and its disclosure), or that it had been given information by the Swiss authorities which led it to conclude that delaying notice was essential to avoid a risk of “injury, death, or irreparable damage” to a person or persons. (NB: it is unclear what “irreparable damage” means in this context, and whether it could be interpreted figuratively, as ‘damage’ to a person’s or group’s interests, such as to a criminal investigation, rather than solely bodily harm; that reading would make the policy considerably more expansive.)

In either scenario, the transparency afforded to individuals by Swiss law’s mandatory notification requirement looks severely limited if the same law lets authorities, in effect, gag notifications, potentially for long periods (seemingly more than half a year in this specific case).

ProtonMail’s public disclosures also log an alarming rise in requests for data by Swiss authorities.

According to its transparency report, ProtonMail received 13 orders from Swiss authorities back in 2017 — but that had swelled to over three and a half thousand (3,572!) by 2020.

The number of foreign requests approved by Swiss authorities has also risen, though not as steeply: ProtonMail reports receiving 13 such requests in 2017, rising to 195 in 2020.

The company says it complies with lawful requests for user data, but it also says it contests orders that it does not believe to be lawful. And its reporting shows an increase in contested orders: ProtonMail contested three orders back in 2017, but in 2020 it pushed back against 750 of the data requests it received.

Per ProtonMail’s privacy policy, the information it can provide on a user account in response to a valid request under Swiss law may include account information provided by the user (such as an email address); account activity/metadata (such as sender, recipient email addresses; IP addresses incoming messages originated from; the times messages were sent and received; message subjects etc); total number of messages, storage used and last login time; and unencrypted messages sent from external providers to ProtonMail. As an end-to-end encrypted email provider, it cannot decrypt email data so is unable to provide information on the contents of email, even when served with a warrant.
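That split between metadata and message contents follows directly from how end-to-end encryption works: the provider stores and routes ciphertext for which it holds no key, so only the recipient can decrypt it. The sketch below, using the PyNaCl library, illustrates the principle in miniature; it is a conceptual toy, not ProtonMail’s actual implementation (which is built on OpenPGP).

```python
# Conceptual illustration of end-to-end encryption using PyNaCl
# (pip install pynacl). This is NOT ProtonMail's implementation, which is
# built on OpenPGP; the point is only that the provider never holds the key
# needed to read message contents.
from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair; the private key never leaves their device.
recipient_key = PrivateKey.generate()

# The sender encrypts to the recipient's *public* key.
ciphertext = SealedBox(recipient_key.public_key).encrypt(
    b"hello from an end-to-end encrypted message"
)

# The provider only ever stores and routes this ciphertext, alongside
# metadata such as sender/recipient addresses, timestamps and IP addresses,
# which is why metadata is what a valid legal request can reach.
print(ciphertext.hex()[:48], "...")

# Only the holder of the private key can recover the plaintext.
print(SealedBox(recipient_key).decrypt(ciphertext).decode())
```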

However in its transparency report, the company also signals an additional layer of data collection which it may be (legally) obligated to carry out — writing that: “In addition to the items listed in our privacy policy, in extreme criminal cases, ProtonMail may also be obligated to monitor the IP addresses which are being used to access the ProtonMail accounts which are engaged in criminal activities.”


It’s that IP monitoring component which has caused such alarm among privacy advocates now — and no small amount of criticism of Proton’s marketing of itself as a ‘user privacy centric’ company.

It has faced particular criticism for marketing claims of providing “anonymous email” and for the wording of the caveat in its transparency disclosure — where it talks about IP logging only occurring in “extreme criminal cases”.

Few would agree that anti-gentrification campaigners meet that bar.

At the same time, Proton does provide users with an onion address — meaning activists concerned about tracking can access its encrypted email service using Tor, which makes it harder for their IP address to be tracked. So it is providing tools for users to protect themselves against IP monitoring (as well as to protect the contents of their emails from being snooped on), even though its own service can, in certain circumstances, be turned into an IP monitoring tool by Swiss law enforcement.
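For readers wondering what using Tor looks like in practice here: with a local Tor client running, application traffic can be routed through its SOCKS proxy so that the service only ever sees a Tor exit node (or, for an onion address, the connection never leaves the Tor network). A minimal sketch, assuming Tor is listening on its default port 9050 and using a placeholder onion URL rather than Proton’s real one:

```python
# Minimal sketch of routing a request through a locally running Tor client.
# Requires: pip install requests[socks]  (and a Tor daemon listening on 9050).
import requests

# socks5h:// makes DNS resolution happen inside Tor too, which is required
# for .onion addresses and avoids leaking lookups to the local resolver.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Placeholder onion address; substitute the service's real .onion URL.
ONION_URL = "http://examplexxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.onion/"

response = requests.get(ONION_URL, proxies=TOR_PROXY, timeout=60)
print(response.status_code)
```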

Amid the backlash over the revelation of the IP logging, Yen said via Twitter that ProtonMail will be providing a more prominent link to its onion address on its website.

Proton does also offer a VPN service of its own — and Yen has claimed that Swiss law does not allow it to log its VPN users’ IP addresses. So it’s interesting to speculate whether the activists might have been able to evade the IP logging if they had been using both Proton’s end-to-end encrypted email and its VPN service…

“If they were using Tor or ProtonVPN, we would have been able to provide an IP, but it would be the IP of the VPN server, or the IP of the Tor exit node,” Yen told TechCrunch when we asked about this.

“We do protect against this threat model via our Onion site (protonmail.com/tor),” he added. “In general though, unless you are based 15 miles offshore in international waters, it is not possible to ignore court orders.”

“The Swiss legal system, while not perfect, does provide a number of checks and balances, and it's worth noting that even in this case, approval from three authorities in two countries was required, and that's a fairly high bar which prevents most (but not all) abuse of the system.”

In a public response on Reddit, Proton also writes that it is “deeply concerned” about the case — reiterating that it was unable to contest the order in this instance.

“The prosecution in this case seems quite aggressive,” it added. “Unfortunately, this is a pattern we have increasingly seen in recent years around the world (for example in France where terror laws are inappropriately used). We will continue to campaign against such laws and abuses.”

Zooming out, in another worrying development that could threaten the privacy of internet users in Europe, European Union lawmakers have signaled they want to work to find ways to enable lawful access to encrypted data — even as they simultaneously claim to support strong encryption.

Again, privacy campaigners are concerned.

ProtonMail and a number of other end-to-end encrypted services warned in an open letter in January that EU lawmakers risk setting the region on a dangerous path toward backdooring encryption if they continue in this direction.