After colleges sue, ICE backs down from student visa rule change

The Trump administration has backed down from plans to revoke the visas of international students in the U.S. whose schools planned to move their classes exclusively online in the fall because of the coronavirus pandemic.

The reversal came after more than a dozen universities and colleges threatened legal action against the administration’s order. Attorneys general in 17 states and the District of Columbia, led by Massachusetts Attorney General Maura Healey, also joined the multi-faceted effort.

On Tuesday, Harvard and MIT argued their case against ICE’s rule at a remote hearing; the rule would have thrown the plans of international students across the country into jeopardy. Within minutes of the hearing, Homeland Security agreed to rescind its plan to allow international students to remain in the country only if they were taking in-person classes.

The new guidance, which reverts to the March 9 guidelines, will only benefit students who are currently enrolled. That leaves new students and individuals set to come to the United States in the fall in limbo.

The rule, announced last Monday, was broadly met with fury from the academic community. Yale Law School’s dean, Heather Gerken, posted a statement opposing the rule, and one professor vowed, “I will teach outside in the snow if I have to,” if that is what it takes to keep students in the country.

Developing… more soon

Decrypted: As tech giants rally against Hong Kong security law, Apple holds out

It’s not often Silicon Valley gets behind a single cause. Supporting net neutrality was one, reforming government surveillance another. Last week, Big Tech took up its latest: halting any cooperation with Hong Kong police.

Facebook, Google, Microsoft, Twitter, and even China-headquartered TikTok said last week they would no longer respond to demands for user data from Hong Kong law enforcement — read: Chinese authorities — citing the national security law that Beijing unilaterally imposed on the city. Critics say the law, ratified on June 30, effectively kills China’s “one country, two systems” policy, which allowed Hong Kong to maintain its freedoms and some autonomy after the British handed control of the territory back to China in 1997.

Noticeably absent from the list of tech giants pulling cooperation was Apple, which said it was still “assessing the new law.” What’s left to assess remains unclear, given that the new powers explicitly allow authorities to conduct warrantless searches of data, intercept and restrict internet traffic, and censor information online, all things that Apple has historically opposed, if not in so many words.

Facebook, Google and Twitter can live without China. They already do — both Facebook and Twitter are banned on the mainland, and Google pulled out after it accused Beijing of cyberattacks. But Apple cannot. China is at the heart of its iPhone and Mac manufacturing pipeline, and accounts for over 16% of its revenue — some $9 billion last quarter alone. Pulling out of China would be catastrophic for Apple’s finances and market position.

Silicon Valley’s move to cut off Hong Kong authorities from its vast pools of user data may be largely symbolic, given that any overseas data demands are first screened by the Justice Department in a laborious and frequently lengthy legal process. But by holding out, Apple is also sending its own message: Its ardent commitment to human rights — privacy and free speech — stops at the border of Hong Kong.

Here’s what else is in this week’s Decrypted.


THE BIG PICTURE

Police used Twitter-backed Dataminr to snoop on protests

Societal upheaval during the COVID-19 pandemic underscores need for new AI data regulations

As a long-time proponent of AI regulation that is designed to protect public health and safety while also promoting innovation, I believe Congress must not delay in enacting, on a bipartisan basis, Section 102(b) of The Artificial Intelligence Data Protection Act — my proposed legislation and now a House of Representatives Discussion Draft Bill. Guardrails in the form of Section 102(b)’s ethical AI legislation are necessary to maintain the dignity of the individual.

What does Section 102(b) of The AI Data Protection Act provide, and why is there an urgent need for the federal government to enact it now?

To answer these questions, it is first necessary to understand how artificial intelligence (AI) is being used during this historic moment when our democratic society is confronting two simultaneous existential threats. Only then can the risks that AI poses to our individual dignity be recognized, and Section 102(b) be understood as one of the most important remedies to protect the liberties that Americans hold dear and that serve as the bedrock of our society.

America is now experiencing mass protests demanding an end to racism and police brutality, and watching as civil unrest unfolds in the midst of trying to quell the deadly COVID-19 pandemic. Whether we are aware of or approve of it, in both contexts — and in every other facet of our lives — AI technologies are being deployed by government and private actors to make critical decisions about us. In many instances, AI is being utilized to assist society and to get us as quickly as practical to the next normal.

But so far, policymakers have largely overlooked a critical AI-driven public health and safety concern. When it comes to AI, most of the focus has been on the issues of fairness, bias and transparency in data sets used to train algorithms. There is no question that algorithms have yielded bias; one need only look to employee recruiting and loan underwriting for examples of unfair exclusion of women and racial minorities.

We’ve also seen AI generate unintended, and sometimes unexplainable, outcomes from the data. Consider the recent example of an algorithm that was supposed to assist judges with the fair sentencing of nonviolent criminals. For reasons that have yet to be explained, the algorithm assigned higher risk scores to defendants younger than 23, resulting in sentences 12% longer than those of older peers who had been incarcerated more frequently, while reducing neither incarceration nor recidivism.

But the current twin crises expose another more vexing problem that has been largely overlooked — how should society address the scenario where the AI algorithm got it right but from an ethical standpoint, society is uncomfortable with the results? Since AI’s essential purpose is to produce accurate predictive data from which humans can make decisions, the time has arrived for lawmakers to resolve not what is possible with respect to AI, but what should be prohibited.

Governments and private corporations have a never-ending appetite for our personal data. Right now, AI algorithms are being utilized around the world, including in the United States, to accurately collect and analyze all kinds of data about all of us. We have facial recognition to surveil protestors in a crowd or to determine whether the general public is observing proper social distancing. There is cell phone data for contact tracing, as well as public social media posts to model the spread of coronavirus to specific zip codes and to predict location, size and potential violence associated with demonstrations. And let’s not forget drone data that is being used to analyze mask usage and fevers, or personal health data used to predict which patients hospitalized with COVID have the greatest chance of deteriorating.

Only through the use of AI can this quantity of personal data be compiled and analyzed on such a massive scale.

Giving algorithms access to our cell phone data, social behavior, health records, travel patterns and social media content — and many other personal data sets — in the name of keeping the peace and curtailing a devastating pandemic can, and will, result in various governmental actors and corporations creating frighteningly accurate predictive profiles of our most private attributes, political leanings, social circles and behaviors.

Left unregulated, society risks these AI-generated analytics being used by law enforcement, employers, landlords, doctors, insurers — and every other private, commercial and governmental enterprise that can collect or purchase them — to make predictive decisions, be they accurate or not, that impact our lives and strike a blow to the most fundamental notions of a liberal democracy. AI continues to assume an ever-expanding role in the employment context, deciding who should be interviewed, hired, promoted and fired. In the criminal justice context, it is used to determine who to incarcerate and what sentence to impose. In other scenarios, AI restricts people to their homes, limits certain treatments at the hospital, denies loans and penalizes those who disobey social distancing regulations.

Too often, those who eschew any type of AI regulation seek to dismiss these concerns as hypothetical and alarmist. But just a few weeks ago, Robert Williams, a Black man and Michigan resident, was wrongfully arrested because of a false face recognition match. According to news reports and an ACLU press release, Detroit police handcuffed Mr. Williams on his front lawn in front of his wife and two terrified girls, ages two and five. The police took him to a detention center about 40 minutes away, where he was locked up overnight. After an officer acknowledged during an interrogation the next afternoon that “the computer must have gotten it wrong,” Mr. Williams was finally released — nearly 30 hours after his arrest.

While this is widely believed to be the first confirmed case of an incorrect facial recognition match leading to the arrest of an innocent citizen, it seems clear it won’t be the last. Here, AI served as the primary basis for a critical decision that impacted an individual citizen — being arrested by law enforcement. But we must not focus only on the fact that the AI failed by identifying the wrong person, denying him his freedom. We must identify and proscribe those instances where AI should not be used as the basis for specified critical decisions — even when it gets it “right.”

As a democratic society, we should be no more comfortable with being arrested for a crime we contemplated but did not commit, or being denied medical treatment for a disease that will undoubtedly end in death over time, than we are with Mr. Williams’ mistaken arrest. We must establish an AI “no-fly zone” to preserve our individual freedoms. We must not allow certain key decisions to be left solely to the predictive output of artificially intelligent algorithms.

To be clear, this means that even in situations where every expert agrees that the data in and out is completely unbiased, transparent and accurate, there must be a statutory prohibition on utilizing it for any type of predictive or substantive decision-making. This is admittedly counter-intuitive in a world where we crave mathematical certainty, but necessary.

Section 102(b) of the Artificial Intelligence Data Protection Act properly and rationally accomplishes this in the context of both scenarios — where AI generates correct and/or incorrect outcomes. It does this in two key ways.

First, Section 102(b) specifically identifies those decisions which can never be made in whole or in part by AI. For example, it enumerates specific misuses of AI, prohibiting covered entities from relying solely on artificial intelligence to make certain decisions. These include the recruitment, hiring and discipline of individuals, the denial or limitation of medical treatment, and medical insurance issuers’ decisions regarding coverage of a medical treatment. In light of what society has recently witnessed, the prohibited areas should likely be expanded to further minimize the risk that AI will be used as a tool for racial discrimination and harassment of protected minorities.

Second, for certain other specific decisions based on AI analytics that are not outright prohibited, Section 102(b) defines those instances where a human must be involved in the decision-making process.

By enacting Section 102(b) without delay, legislators can maintain the dignity of the individual by not allowing the most critical decisions that impact the individual to be left solely to the predictive output of artificially intelligent algorithms.

NASA injects $17M into four small companies with Artemis ambitions

NASA awards millions of dollars a year to small businesses through the SBIR program, but generally it’s a lot of small awards to hundreds of companies. Breaking with precedent, today the agency announced a new multi-million-dollar funding track and its first four recipients, addressing urgent needs for the Artemis program.

The Small Business Innovation Research program has various forms throughout the federal government, but it generally provides non-dilutive funding on the order of a few hundred thousand dollars over a couple years to nudge a nascent technology towards commercialization.

NASA has found, however, that there is a gap between the medium-size Phase II awards and Phase III, which is more like a full-on government contract; there are already “Extended” and “Pilot” programs that can provide up to an additional $1M to promising companies. But the fact is that space work is expensive and time consuming, and some companies need larger sums to complete technology that NASA has already indicated confidence in or a need for.

Hence the creation of this new tier of Phase II award: less than a full contract would amount to, but up to $5M — nothing to sneeze at, and it comes with relatively few strings attached.

The first four companies to collect a check from this new, as yet unnamed program are all pursuing technologies that will be of particular use during the Artemis lunar missions:

  • Fibertek: Optical communications for small spacecraft that would help relay large amounts of data from lunar landers to Earth
  • Qualtech Systems: Autonomous monitoring, fault-prevention, and health management systems for spacecraft like the proposed Lunar Gateway and possibly other vehicles and habitats
  • Pioneer Astronautics: Hardware to produce oxygen and steel from lunar regolith — if achieved, an incredibly useful form of high-tech alchemy
  • Protoinnovations: Traction control to improve handling of robotic and crewed rovers on lunar terrain

It’s important to note that these companies aren’t new to the game — they have a long and ongoing relationship with NASA, as SBIR grants take place over multiple years. “Each business has a track record of success with NASA, and we believe their technologies will have a direct impact on the Artemis program,” said NASA’s Jim Reuter in a news release.

The total awarded is $17M, but NASA, citing ongoing negotiations, could not be more specific about the breakdown except that the amounts awarded fall between $2.5M and $5M per company.

I asked the agency for a bit more information on the new program and how companies already in the SBIR system can apply to it or otherwise take advantage of the opportunity, and will update this post if I hear back.

The tech industry comes to grips with Hong Kong’s national security law

Scott Salandy-Defour used to make frequent stops at a battery manufacturer in southern China for his energy startup based in Hong Kong. The appeal of Hong Kong, he said, is its adjacency to the plentiful electronics suppliers in the Pearl River Delta, as well as the city’s amenities for foreign entrepreneurs, be it its well-established financial and legal system or a culture blending the East and West.

“It’s got the best of both worlds,” Salandy-Defour told TechCrunch. “But it’s not going to be the same.”

On July 1, Hong Kong’s sweeping new national security law came into effect, spelling the most profound change to the city’s way of life since the former British colony returned to Chinese rule in 1997.

The legislation will see Beijing set up an official security apparatus in the city to suppress what the authorities define as subversion, terrorism, separatism and collusion with foreign forces. Non-permanent residents can be expelled and companies can face fines if suspected of contravening the law.

Though the law doesn’t target the technology sector per se, speculation is rife about how it may affect entrepreneurs and larger companies as they go about their day-to-day operations and long-term plans. We talked to a handful of individuals in an attempt to parse out the ramifications of the law on internet freedom, data control, entrepreneurship, venture capital and other aspects pertaining to the tech industry. Several of our sources requested to have their names withheld in order to speak freely, an example of the law’s effect in action.

Part of the concern arises from the vagueness of the legislation. “We do not know anything concrete,” a China-based lawyer specializing in cross-border corporate cases told TechCrunch. “The national security law passed in Macau 11 years ago, but I heard there have been no enforcement cases. Hong Kong might be different. Police already prepared and carried banners warning against speech or gathering in violation of the new law.”

The bottom line is that the law impacts everyone in Hong Kong. “[It] will have a chilling effect as people try to understand its implementation,” reckoned Jeremy Daum, a senior research fellow at the Yale Law School Paul Tsai China Center.

Internet freedom

An outstanding concern is that the new rules could curtail internet freedom in the freewheeling city. Specifically, Article 9 stipulates that the Hong Kong government “shall employ necessary measures to strengthen publicity, guidance, oversight and management in schools, social organizations, media, networks and other matters related to national security,” with ‘networks’ here referring to the internet.

There are already signs of self-censorship. Some residents have started to delete their Twitter accounts and messages “out of fear of the national security law,” a Hong Kong-based media professor pointed out to TechCrunch.

While the law doesn’t give rise to “a Great Firewall situation overnight, it will be insidious nonetheless,” said a Hong Kong-based digital rights expert. “Platforms, publishers, and content hosts are likely to self-censor broadly given the vagueness of the law, and even then we’ll likely see more takedown requests and the like from the government.”

Shortly after the law took effect, an app called Eat With You, which labels local eateries supportive of the Hong Kong protesters, terminated its service. A source close to the app told us that the takedown was voluntary. Though the developer didn’t say whether it made the decision to preempt an internet crackdown, it has “put other plans on hold.”

AppleCensorship.com told TechCrunch it’s monitoring the potential removal of apps by Apple in Hong Kong, where the company commands a 44% share of the mobile handset market. The site is a project created by researchers at GreatFire.org, an organization that monitors internet censorship in China, to track which apps are unavailable in various App Stores.

“Apple has shown over and over again that they are willing to censor apps on their platform at the behest of government authorities,” said GreatFire.org’s Charlie Smith of Apple’s recent removal of TikTok in India.

A week after the law’s enactment, tech giants have come to reckon with the city’s new circumstances. Facebook and Twitter said they have suspended the processing of data requests from the Hong Kong authorities. TikTok, on the other hand, announced it would exit Hong Kong. Reddit, which received an outsize investment from Tencent, provided a more evasive response: “All legal requests from Hong Kong are bound by careful review for validity and with a special attention to human rights implications.”

Residents in the city of seven million people have been bracing for censorship in recent weeks. Demand for virtual private networks (VPNs), which let users access otherwise banned apps, surged in Hong Kong after Beijing approved plans for the national security law in late May.

“But a VPN is not a magic bullet,” the media professor argued. The tool has proven to be a short-lived solution. Back in 2017, Apple removed hundreds of VPNs from its Chinese App Store, stating it did so to comply with Chinese regulations.

Others who are more attuned to the Chinese internet are less wary. Hugo Cheuk, co-founder and chief operating officer of viAct.ai, a Hong Kong-based startup using computer vision to manage construction safety, said he already uses a wide range of apps, both Chinese and overseas ones, and can easily switch to alternatives.

“Let’s say if for whatever reasons WhatsApp cannot be used in Hong Kong one day, you still have other options like Messenger, Line, Dingtalk, WeChat,” he said. “Even apps like Slack or Snapchat weren’t popular just a few years ago, but we still communicate well back then.”

Data control

Some worry that the enforcement of the security law could lead to requests for user data by Beijing, making Hong Kong a less attractive place for tech companies resistant to China’s data review policies. As Daum noted, several provisions directly allow for searches of electronic devices and for requiring service providers to delete information.

According to Article 43:

“When handling cases of crimes endangering national security, the Hong Kong Special Administrative Region government police department for the preservation of national security may employ the various measures that the extant laws of the Hong Kong Special Administrative Region allow the police and other law enforcement departments to take when investigating serious crimes, and may employ the following measures:

(1) search premises, vehicles, boats, aircraft and other relevant places and electronic devices that may contain evidence of an offence.

(4) Requiring persons who published information or the related service providers to remove information or provide assistance.”

“When setting up their APAC headquarters, foreign companies may no longer choose Hong Kong because the law overrides the original legal system,” a partner at a Hong Kong venture capital firm told TechCrunch.

While Hong Kong is primarily known as a free trade and financial center, many international tech firms have set up offices there as a conduit into the APAC market.

Facebook and Twitter, whose main services are unavailable to mainland users, employ marketing staff in Hong Kong to court Chinese exporters with overseas advertising needs. Unicorns like delivery service Lalamove, logistics firm Gogovan and travel platform Klook put their headquarters in Hong Kong, whose strategic geographic location helps them attract customers across Asia.

“As a historic trading center, with ease of currency exchange, data and logistic flows, Hong Kong has played a key role in cross-border e-commerce. Many start-up tech companies service clients across Southeast Asia from a base in Hong Kong,” said Napoleon Biggs, a digital marketing consultant with over two decades’ experience in the region.

Though the new regulation may hit these sectors in terms of requests for government access to data, it will not affect their businesses otherwise, he reasoned.

Being in a key geographic location, as an internet hub for submarine cables and satellite dishes, Hong Kong also acts as a top data center destination for multinationals, Biggs observed. The question now, he said, is how multinationals will perceive the new law and how it will affect their daily operations, if at all.

Startup hopes

Many entrepreneurs see Hong Kong as a springboard to its nearby resources rather than as their main market. “Hong Kong investors are super risk-averse. The risk of being an entrepreneur doesn’t have the same level of respect here as in the U.S.,” reckoned Salandy-Defour, whose company Liquidstar deploys smart batteries primarily in Africa.

“But there are opportunities to network quickly,” he added. “We are also so close to Shenzhen and can speak to people [in tech] there who know what they are doing.”

Some Hong Kong entrepreneurs are hopeful that the law could accelerate the Greater Bay Area (GBA) initiative, which aims to stitch together Hong Kong, Macau and other cities around the Pearl River Delta, including economic powerhouses like Shenzhen and Guangzhou.

With its own set of laws and economic system in line with Western practices, Hong Kong has long been a top destination for multinational financial services. The special status was, however, not beneficial for technology companies targeting the Chinese market.

“If we want to do business in China, the first concern is the adaptation of different laws of China. Now, with the newly established national security law plus the GBA initiatives, more resources will be allocated to the 9+2 cities in the market and business perspectives, so we can more easily access the China market,” suggested Cheuk.

The integration can extend the potential reach of Hong Kong companies from seven million customers to 70 million in the GBA region, the entrepreneur said. “It’s good for startups trying to attract investment.”

His optimism is echoed by a Hong Kong-based investor for a Chinese venture capital firm. “After the law came into effect, there may be fewer technological exchanges between Hong Kong and the U.S. or Europe, but the GBA is more important to Hong Kong’s future development.”

For Hong Kong-based entrepreneurs who uphold freedom of information, the law may not bode well. Salandy-Defour, an American citizen, said he’s mulling a move to Singapore or Australia. In the long term, he plans to diversify his supply chain for sustainable batteries into other countries such as Japan and Germany.

Relocation is less realistic for entrepreneurs who generate most of their revenues from the mainland. Several of them voiced concerns about the law’s adverse effect on freedom of speech, but declined our interview requests out of concern that their comments might violate the new law.

Decoupling spillover

The divide between Washington and Beijing is spilling into Hong Kong as the security law is seen as undermining the territory’s autonomy. In response, the U.S. declared Hong Kong is no longer autonomous from China and suspended the export of sensitive technologies to the city.

The impact of the split was evident. Shortly after China approved the national security law for Hong Kong in late May, Hong Kong-based staff of China Mobile lost access to a piece of IBM data software, an employee at the Chinese telecom giant told TechCrunch. The staff has since switched to a Huawei substitute called TaiShan, which the source said comes with a user interface “very similar” to the IBM product.

China Mobile and IBM have not responded to our request for comment.

When it comes to picking promising local startups, the Hong Kong venture partner said he will avoid industries deemed ‘sensitive’ or susceptible to sanctions by the U.S. He’s also advised portfolio companies with an international plan to diversify their supply chain from China to nearby regions like Southeast Asia. Limited partners from the U.S. may start to shy away from Hong Kong VC funds, he speculated, as the city gets caught in the crossfire of trade tensions.

It’s notable that one of the most prominent VCs in Hong Kong, Horizons Ventures, which backs many startups globally and is led by Li Ka-shing, one of Asia’s richest men, has long kept a low profile. It continues to do so now, perhaps very wisely. Some of the big names in its expansive portfolio include Spotify, Slack, Zoom, Impossible Foods and Skype. The firm did not respond to requests for comment for this article.

An unintended implication of Hong Kong’s loss of its special status is the potential inconvenience to mainland companies. It’s a common practice for Chinese companies to maintain a Hong Kong entity as a gateway to purchase U.S. technologies, tapping the region’s favorable trading terms, the venture partner said. Many Chinese exporters also take advantage of Hong Kong’s well-developed financial system and currency stability to handle international fund transfers.

“If that expediency is gone, Hong Kong is just another Chinese city,” said the investor.

TikTok faces ban in the US; pulls out of Hong Kong

The world’s most popular short video app continues to be in the crosshairs of politicians globally.

On Monday night, Secretary of State Mike Pompeo told Fox News that the United States is “certainly looking at” banning TikTok over concerns that it could be used by the Beijing government as a surveillance and propaganda tool.

The potential ban would deal another blow to TikTok after the app was recently blocked in what had been its biggest market, India.

On the heels of Pompeo’s statement, TikTok announced that it would pull out of Hong Kong, which is facing an unprecedented wave of control from the Beijing government after the promulgation of the national security law.

“In light of recent events, we’ve decided to stop operations of the TikTok app in Hong Kong,” said a TikTok spokesperson. The company declined further comment on the decision.

The vagueness of the statement leaves many questions unanswered. One has to wonder whether ByteDance will reintroduce a censored version of the service in Hong Kong, presumably by replacing TikTok with its sister app Douyin, which is operated by ByteDance’s Chinese team.

ByteDance, founded by Chinese serial entrepreneur Zhang Yiming, has been working to disassociate TikTok from its Chinese ownership and Beijing censorship. Efforts have ranged from keeping TikTok’s data in overseas data centers supposedly out of reach of the Chinese authorities, to giving outside experts a glimpse into its moderation process, to hiring Disney’s Kevin Mayer as the app’s new global face.

But its response to Hong Kong’s circumstances, presumably made by Mayer, who is now the app’s chief executive, stands in stark contrast to the decisions of Western tech giants. Facebook, Google, Twitter, and Telegram uniformly said this week they would either stop or suspend processing data requests from the Hong Kong government.

Many see their move as an outright rejection of Chinese censorship and surveillance, while others think they are simply buying time to ponder their next step in Hong Kong: exit voluntarily, wait and get banned, or comply with Beijing rules — which seems the least likely.

TikTok said it had 150,000 users in Hong Kong as of last September, a nearly negligible share given the app had reached 2 billion downloads globally by April. TechCrunch understands that the app operates a very small team in Hong Kong, so the impact of this regional exit on staff looks to be limited across the company.

How Have I Been Pwned became the keeper of the internet’s biggest data breaches

When Troy Hunt launched Have I Been Pwned in late 2013, he wanted it to answer a simple question: Have you fallen victim to a data breach?

Seven years later, the data-breach notification service processes thousands of requests each day from users who check to see if their data was compromised — or pwned with a hard ‘p’ — by the hundreds of data breaches in its database, including some of the largest breaches in history. As the service has grown, now sitting just below the 10 billion breached-records mark, the answer to Hunt’s original question has become clearer.

“Empirically, it’s very likely,” Hunt told me from his home on Australia’s Gold Coast. “For those of us that have been on the internet for a while it’s almost a certainty.”
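For anyone who wants to run that check programmatically rather than through the website, below is a minimal sketch of a query against Have I Been Pwned’s public v3 API. The endpoint and the hibp-api-key header follow the service’s published documentation, but the API key and email address shown are placeholders, not real values.

```python
# Minimal sketch: ask Have I Been Pwned's v3 API which breaches contain an email.
# The API key and email below are placeholders; a real key comes from haveibeenpwned.com.
import json
import urllib.error
import urllib.parse
import urllib.request

HIBP_API_KEY = "your-api-key-here"      # placeholder
ACCOUNT = "someone@example.com"         # placeholder email address to look up

url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
       + urllib.parse.quote(ACCOUNT))
req = urllib.request.Request(url, headers={
    "hibp-api-key": HIBP_API_KEY,
    "user-agent": "hibp-example-script",  # the API requires a user agent
})

try:
    with urllib.request.urlopen(req) as resp:
        breaches = json.loads(resp.read().decode("utf-8"))
        print(f"{ACCOUNT} appears in {len(breaches)} breach(es):")
        for breach in breaches:
            print(" -", breach["Name"])
except urllib.error.HTTPError as err:
    if err.code == 404:
        # A 404 response means the address was not found in any breach.
        print(f"{ACCOUNT} was not found in any breach.")
    else:
        raise
```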

What started out as Hunt’s pet project to learn the basics of Microsoft’s cloud quickly exploded in popularity, driven in part by its simplicity to use, but largely by individuals’ curiosity.

As the service grew, Have I Been Pwned took on a more proactive security role by allowing browsers and password managers to bake in a backchannel to Have I Been Pwned to warn against using previously breached passwords in its database. It was a move that also served as a critical revenue stream to keep down the site’s running costs.

But Have I Been Pwned’s success should be attributed almost entirely to Hunt, both as its founder and its only employee, a one-man band running an unconventional startup, which, despite its size and limited resources, turns a profit.

As the workload needed to support Have I Been Pwned ballooned, Hunt said the strain of running the service without outside help began to take its toll. There was an escape plan: Hunt put the site up for sale. But, after a tumultuous year, he is back where he started.

Ahead of its next big milestone of 10 billion records, Have I Been Pwned shows no signs of slowing down.

‘Mother of all breaches’

Even long before Have I Been Pwned, Hunt was no stranger to data breaches.

By 2011, he had cultivated a reputation for collecting and dissecting small — for the time — data breaches and blogging about his findings. His detailed and methodical analyses showed time and again that internet users were using the same passwords from one site to another. So when one site was breached, hackers already had the same password to a user’s other online accounts.

Then came the Adobe breach, the “mother of all breaches” as Hunt described it at the time: Over 150 million user accounts had been stolen and were floating around the web.

Hunt obtained a copy of the data and, with a handful of other breaches he had already collected, loaded them into a database searchable by a person’s email address, which Hunt saw as the most common denominator across all the sets of breached data.

And Have I Been Pwned was born.
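The core idea is simple enough to sketch in a few lines: key every breach record by a normalized email address so that a single lookup spans every breach loaded so far. The toy example below, using plain SQLite and made-up breach names, is only an illustration of that approach, not Hunt’s actual implementation, which he built on Microsoft’s cloud.

```python
# Toy sketch (not Hunt's actual system): index breach records by email address
# so that one keyed query answers "which breaches contain this address?".
import sqlite3

conn = sqlite3.connect("breaches.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS breached_accounts (
        email  TEXT NOT NULL,
        breach TEXT NOT NULL,
        PRIMARY KEY (email, breach)   -- dedupes repeat loads of the same breach
    )
""")

def load_breach(breach_name, email_addresses):
    """Insert every (email, breach) pair, ignoring duplicates."""
    rows = ((email.strip().lower(), breach_name) for email in email_addresses)
    conn.executemany(
        "INSERT OR IGNORE INTO breached_accounts (email, breach) VALUES (?, ?)",
        rows)
    conn.commit()

def lookup(email):
    """Return the names of every loaded breach that contains this email address."""
    cur = conn.execute(
        "SELECT breach FROM breached_accounts WHERE email = ?",
        (email.strip().lower(),))
    return [row[0] for row in cur]

# Example with hypothetical breach names:
load_breach("ExampleBreach2013", ["alice@example.com", "bob@example.com"])
load_breach("OtherBreach2014", ["alice@example.com"])
print(lookup("alice@example.com"))  # ['ExampleBreach2013', 'OtherBreach2014']
```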

It didn’t take long for its database to swell. Breached data from Sony, Snapchat and Yahoo soon followed, racking up millions more records in its database. Have I Been Pwned soon became the go-to site to check if you had been breached. Morning news shows would blast out its web address, resulting in a huge spike in users — enough at times to briefly knock the site offline. Hunt has since added some of the biggest breaches in the internet’s history: MySpace, Zynga, Adult Friend Finder, and several huge spam lists.

As Have I Been Pwned grew in size and recognition, Hunt remained its sole proprietor, responsible for everything from organizing and loading the data into the database to deciding how the site should operate, including its ethics.

Hunt takes a “what do I think makes sense” approach to handling other people’s breached personal data. With nothing to compare Have I Been Pwned to, Hunt had to write the rules for how he handles and processes so much breach data, much of it highly sensitive. He does not claim to have all of the answers, but relies on transparency to explain his rationale, detailing his decisions in lengthy blog posts.

His decision to only let users search for their email address makes logical sense, driven by the site’s sole mission at the time: to tell a user if they had been breached. But it was also a decision centered around user privacy that helped to future-proof the service against some of the most sensitive and damaging data he would go on to receive.

In 2015, Hunt obtained the Ashley Madison breach. Millions of people had accounts on the site, which encourages users to have affairs. The breach made headlines, first for the hack itself, and again when several users died by suicide in its wake.

The hack of Ashley Madison was one of the most sensitive entered into Have I Been Pwned, and ultimately changed how Hunt approached data breaches that involved people’s sexual preferences and other personal data. (AP Photo/Lee Jin-man, File)

Hunt diverged from his usual approach, acutely aware of its sensitivities. The breach was undeniably different. He recounted a story of one person who told him how their local church posted a list of the names of everyone in the town who was in the data breach.

“It’s clearly casting a moral judgment,” he said, referring to the breach. “I don’t want Have I Been Pwned to enable that.”

Unlike earlier, less sensitive breaches, Hunt decided that he would not allow anyone to search for the data. Instead, he purpose-built a new feature allowing users who had verified their email addresses to see if they were in more sensitive breaches.

“The purposes for people being in that data breach were so much more nuanced than what anyone ever thought,” Hunt said. One user told him he was in there after a painful break-up and had since remarried, but was later labeled an adulterer. Another said she created an account to catch her husband, suspected of cheating, in the act.

“There is a point at which being publicly searchable poses an unreasonable risk to people, and I make a judgment call on that,” he explained.

The Ashley Madison breach reinforced his view on keeping as little data as possible. Hunt frequently fields emails from data breach victims asking for their data, but he declines every time.

“It really would not have served my purpose to load all of the personal data into Have I Been Pwned and let people look up their phone numbers, their sexualities, or whatever was exposed in various data breaches,” said Hunt.

“If Have I Been Pwned gets pwned, it’s just email addresses,” he said. “I don’t want that to happen, but it’s a very different situation if, say, there were passwords.”

But breached passwords haven’t gone to waste. Hunt also lets users search a separate collection of more than half a billion standalone passwords to see whether any of theirs have landed in Have I Been Pwned.

Anyone — even tech companies — can access that trove, which he calls Pwned Passwords. Browser makers and password managers, like Mozilla and 1Password, have baked-in access to Pwned Passwords to help prevent users from using a previously breached and vulnerable password. Western governments, including the U.K. and Australia, also rely on Have I Been Pwned to monitor for breached government credentials, which Hunt also offers for free.
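That baked-in access works through a k-anonymity range query: the client hashes the password locally, sends only the first five characters of its SHA-1 hash, and checks the returned suffixes on its own machine, so neither the password nor its full hash ever leaves the device. A minimal sketch of such a lookup, with an example password, might look like this:

```python
# Minimal sketch of a Pwned Passwords range query (k-anonymity model):
# only the first five hex characters of the SHA-1 hash leave the machine.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times the password appears in Pwned Passwords (0 if absent)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    req = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"user-agent": "pwned-passwords-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")

    # Each response line is "HASH_SUFFIX:COUNT"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # example only; prints a large count
```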

“It’s enormously validating,” he said. “Governments, for the most part, are trying to do things to keep countries and individuals safe — working under extreme duress and they don’t get paid much.”

Hunt recognizes that Have I Been Pwned, as much as openness and transparency are core to its operation, lives in a kind of online purgatory: under almost any other circumstances, especially in a commercial enterprise, he would be drowning in regulatory hurdles and red tape. And while the companies whose data Hunt loads into his database would probably prefer otherwise, Hunt told me he has never received a legal threat for running the service.

“I’d like to think that Have I Been Pwned is at the far-legitimate side of things,” he said.

Others who have tried to replicate the success of Have I Been Pwned haven’t been as lucky.

“There have been similar services that have popped up,” said Hunt. “They’ve been for-profit — and they’ve been indicted.”

LeakedSource was, for a time, one of the largest sellers of breach data on the web. I know, because my reporting broke some of their biggest gets: music streaming service Last.fm, adult dating site AdultFriendFinder, and Russian internet giant Rambler.ru to name a few. But what caught the attention of federal authorities was that LeakedSource, whose operator later pleaded guilty to charges related to trafficking identity theft information, indiscriminately sold access to anyone else’s breach data.

“There is a very legitimate case to be made for a service to give people access to their data at a price.”

Hunt said he would “sleep perfectly fine” charging users a fee to access their data. “I just wouldn’t want to be accountable for it if it goes wrong,” he said.

Project Svalbard

Five years into Have I Been Pwned, Hunt could feel the burnout coming.

“I could see a point where I would be if I didn’t change something,” he told me. “It really felt like for the sustainability of the project, something had to change.”

He said he went from spending a fraction of his time on the project to well over half. Aside from juggling the day-to-day — collecting, organizing, deduplicating and uploading vast troves of breached data — Hunt was responsible for the entirety of the site’s back office upkeep — its billing and taxes — on top of his own.

The plan to sell Have I Been Pwned was codenamed Project Svalbard, named after the Norwegian seed vault that Hunt likened Have I Been Pwned to: a massive stockpile of “something valuable for the betterment of humanity,” he wrote when announcing the sale in June 2019. It would be no easy task.

Hunt said the sale was to secure the future of the service. It was also a decision that would have to secure his own. “They’re not buying Have I Been Pwned, they’re buying me,” said Hunt. “Without me, there’s just no deal.” In his blog post, Hunt spoke of his wish to build out the service and reach a larger audience. But, he told me, it was not about the money.

As its sole custodian, Hunt said that as long as someone kept paying the bills, Have I Been Pwned would live on. “But there was no survivorship model to it,” he admitted. “I’m just one person doing this.”

By selling Have I Been Pwned, the goal was a more sustainable model that took the pressure off him, and, he joked, the site wouldn’t collapse if he got eaten by a shark, an occupational hazard for living in Australia.

But chief above all, the buyer had to be the perfect fit.

Hunt met with dozens of potential buyers, many of them in Silicon Valley. He knew what the buyer would look like, but he didn’t yet have a name. Hunt wanted to ensure that whoever bought Have I Been Pwned upheld its reputation.

“Imagine a company that had no respect for personal data and was just going to abuse the crap out of it,” he said. “What does that do for me?” Some potential buyers were driven by profits. Hunt said any profits were “ancillary.” Buyers were only interested in a deal that would tie Hunt to their brand for years, buying exclusive rights to his own recognition and future work — that’s where the value in Have I Been Pwned lies.

Hunt was looking for a buyer with whom he knew Have I Been Pwned would be safe if he were no longer involved. “It was always about a multiyear plan to try and transfer the confidence and trust people have in me to some other organizations,” he said.

Hunt testifies to the House Energy Subcommittee on Capitol Hill in Washington, Thursday, Nov. 30, 2017. (AP Photo/Carolyn Kaster)

The vetting process and due diligence was “insane,” said Hunt. “Things just drew out and drew out,” he said. The process went on for months. Hunt spoke candidly about the stress of the year. “I separated from my wife early last year around about the same time as the [sale process],” he said. They later divorced. “You can imagine going through this at the same time as the separation,” he said. “It was enormously stressful.”

Then, almost a year later, Hunt announced the sale was off. Barred from discussing specifics thanks to non-disclosure agreements, Hunt wrote in a blog post that the buyer, whom he was set on signing with, made an unexpected change to their business model that “made the deal infeasible.”

“It came as a surprise to everyone when it didn’t go through,” he told me. It was the end of the road.

Looking back, Hunt maintains it was “the right thing” to walk away. But the process left him back at square one, without a buyer and personally out hundreds of thousands of dollars in legal fees.

After a bruising year for his future and his personal life, Hunt took time to recoup and get back to a normal schedule. Then the coronavirus hit. Australia fared lightly in the pandemic by international standards, lifting its lockdown after a brief quarantine.

Hunt said he will keep running Have I Been Pwned. It wasn’t the outcome he wanted or expected, but Hunt said he has no immediate plans for another sale. For now it’s “business as usual,” he said.

In June alone, Hunt loaded over 102 million records into Have I Been Pwned’s database. Relatively speaking, it was a quiet month.

“We’ve lost control of our data as individuals,” he said. But not even Hunt is immune. Across the nearly 10 billion records now in the database, Hunt himself has been ‘pwned’ more than 20 times, he said.

Earlier this year Hunt loaded a massive trove of email addresses from a marketing database — dubbed ‘Lead Hunter’ — some 68 million records fed into Have I Been Pwned. Hunt said someone had scraped a ton of publicly available web domain record data and repurposed it as a massive spam database. But someone left that spam database on a public server, without a password, for anyone to find. Someone did, and passed the data to Hunt. Like any other breach, he took the data, loaded it in Have I Been Pwned, and sent out email notifications to the millions who have subscribed.

“Job done,” he said. “And then I got an email from Have I Been Pwned saying I’d been pwned.”

He laughed. “It still surprises me the places that I turn up.”

Related stories:

US plans to roll back special status may erode Hong Kong’s startup ecosystem

For two months, the people of Hong Kong waited in suspense after China’s legislature approved a new national security law. The legislation’s details were finally made public yesterday and almost immediately went into effect. As many Hong Kong residents feared, the broadly written new law gives Beijing extensive authority over the Special Administrative Region and has the potential to sharply curtail civil liberties.

In response, the United States began the first measures to end the special status it gives to Hong Kong, with the Commerce and State Departments suspending export license exceptions for sensitive U.S. technology and blocking the export of defense equipment.

Much remains uncertain. Hong Kong had also previously enjoyed many freedoms that do not exist in mainland China, under the “one country, two systems” principle put into place after the United Kingdom returned control to China. After announcing the new policies, the U.S. government said further restrictions are being considered. Under special status, Hong Kong had privileges including lower trade tariffs and a separate customs and immigration designation from mainland China, but now the future of those is unclear.

Equally opaque is how the erosion of special status and the new national security law will impact Hong Kong’s startups in the future. In conversations with TechCrunch, investors and founders said they believe the region’s ecosystem is resilient, partly because many companies offer online services — especially financial services — and have already established operations in other markets. But they are also keeping an eye on further developments and preparing for the possibility that key talent will want to relocate to other countries.

Police roll up crime networks in Europe after infiltrating popular encrypted chat app

Hundreds of alleged drug dealers and other criminals are in custody today after police in Europe infiltrated an encrypted chat system reportedly used by thousands to discuss illegal operations. The total failure of this ostensibly secure method of communication will likely have a chilling effect on the shadowy industry of crime-focused tech.

“Operation Venetic” was announced by various police agencies, covered by major local news outlets, and reported by Motherboard in especially vibrant form, quoting extensively from people apparently within the groups affected.

The operation involved hundreds of officers working across numerous agencies in France, the Netherlands, the U.K., and other countries. It began in 2017 and culminated two months ago, when a service called EncroChat was hacked and the messages of tens of thousands of users were exposed to police scrutiny.

EncroChat is a step up in some ways from encrypted chat apps like Signal and WhatsApp. Rather as BlackBerry once did, EncroChat provided customized hardware, a dedicated OS, and its own servers to users, an expensive service costing thousands per year rather than a one-time purchase or download.

Messages on the service were supposedly very secure and had deniability built in by letting conversations be edited later — so theoretically a user could claim after the fact they never said something. Motherboard’s Joseph Cox has been following the company for some time and has far more details on its claims and operations.

Needless to say those claims were not entirely true, as at some point in early 2020 police managed to introduce malware into the EncroChat system that completely exposed the conversations and images of its users. Because of the trusted nature of the app, people would openly discuss drug deals, murders, and other crimes, making them sitting ducks for law enforcement.

Throughout the spring, criminal operations were being cracked open with alarming (to them) regularity, but it wasn’t until May that users and EncroChat managed to put the pieces together. The company attempted to warn its users and issue an update, but the cat was out of the bag. With the surveillance now exposed, the Operation Venetic teams struck.

Arrests across the several countries involved (there were numerous sub-operations, but France and the Netherlands were the primary investigators) total nearly a thousand, though exact numbers are not clear. Dozens of guns, tons (metric, naturally) of drugs and the equivalent of tens of millions of dollars in cash were seized. More importantly, the chat logs seem to have provided access to people higher up the food chain than ordinary busts would have.

That reportedly the most popular of the encrypted chat services catering to illegal activity could be so completely subverted by international authorities will likely put a damper on its competitors. But like other, more domestic challenges to encryption, such as the perennial complaints by the FBI, this event is more likely to strengthen such tools in the long run.

Apple and Google block dozens of Chinese apps in India

Two days after India blocked 59 apps developed by Chinese firms, Google and Apple have started to comply with New Delhi’s order and are preventing users in the world’s second largest internet market from accessing those apps.

UC Browser, Shareit, Club Factory and the other apps that India has blocked are no longer listed on Apple’s App Store and Google Play Store. In a statement, a Google spokesperson said that the company had “temporarily blocked access to the apps” on Google Play Store as it reviews New Delhi’s interim order.

Apple, which has taken a similar approach as Google in complying with New Delhi’s order, did not respond to a request for comment. Some developers including ByteDance have voluntarily made their apps inaccessible in India, a person familiar with the matter told TechCrunch. India’s Department of Telecommunications has also ordered telecom networks and other internet service providers to block access to those 59 apps.

Thursday’s move from Apple and Google, whose software powers nearly every smartphone on the planet, is the latest escalation in the unprecedented recent tension between China and India.

A skirmish between the two neighbouring nations at a disputed Himalayan border site last month left 20 Indian soldiers dead, stoking historical tension between them. Earlier this week, India ordered the blocking of 59 Chinese apps, including ByteDance’s TikTok, citing national security concerns.

The Indian government has invited executives at these companies to give them an opportunity to address its concerns. Kevin Mayer, the chief executive of TikTok, said on Wednesday that his app was in compliance with Indian privacy and security requirements and that he was looking forward to meeting with various stakeholders.

On Thursday, Chinese social network Weibo said it had deleted Indian Prime Minister Narendra Modi’s account at the request of the Indian embassy. Modi had amassed about 200,000 followers on Weibo before his account was deleted.

India has emerged as the biggest open playground for Silicon Valley and Chinese firms in recent years. Like American technology groups Google, Facebook, and Amazon, several Chinese firms including Tencent, ByteDance, and Alibaba Group also aggressively expanded their presence in India in the last decade. TikTok, which has 200 million users in India, counts Asia’s third largest economy as its biggest overseas market.

The 59 blocked apps, which include Likee, Xiaomi’s Mi Community, and Tencent’s WeChat, had a combined monthly active user base of over 500 million users in India last month, according to data from mobile insights firm App Annie that an industry executive shared with TechCrunch. (A significant number of smartphone users in India use several of these apps, so there’s a lot of overlap.)

More to follow…