How to respond to a data breach

I cover a lot of data breaches. From inadvertent exposures to data-exfiltrating hacks, I’ve seen it all. But not every data breach is the same. How a company responds to a data breach — whether or not the breach was its fault — can make or break its reputation.

I’ve seen some of the worst responses: legal threats, denials and pretending there isn’t a problem at all. In fact, some companies claim they take security “seriously” when they clearly don’t, while other companies see it merely as an exercise in crisis communications.

But once in a while, a company’s response almost makes up for the daily deluge of hypocrisy, obfuscation and downright lies.

Last week, Assist Wireless, a U.S. cell carrier that provides free government-subsidized cell phones and plans to low-income households, had a security lapse that exposed tens of thousands of customer IDs — driver’s licenses, passports and Social Security cards — used to verify a person’s income and eligibility.

A misconfigured plugin for resizing images on the carrier’s website was blamed for the inadvertent data leak of customer IDs to the open web. Security researcher John Wethington found the exposed data through a simple Google search. He reported the bug to TechCrunch so we could alert the company.

Make no mistake, the bug was bad and the exposure of customer data was far from ideal. But the company’s response to the incident was one of the best I’ve seen in years.

Take notes, because this is how to handle a data breach.

The response was quick: Assist immediately acknowledged receipt of my initial email, already a positive sign that the company was looking into the issue.

A new technique can detect newer 4G ‘stingray’ cell phone snooping

Security researchers say they have developed a new technique to detect modern cell-site simulators.

Cell site simulators, known as “stingrays,” impersonate cell towers and can capture information about any phone in their range — including, in some cases, calls, messages and data. Police secretly deploy stingrays hundreds of times a year across the United States, often capturing the data of innocent bystanders in the process.

Little is known about stingrays, because they are deliberately shrouded in secrecy. Developed by Harris Corp. and sold exclusively to police and law enforcement, stingrays are covered under strict nondisclosure agreements that prevent police from discussing how the technology works. But what we do know is that stingrays exploit flaws in the way that cell phones connect to 2G cell networks.

Most of those flaws are fixed in the newer, faster and more secure 4G networks, though not all. Newer cell site simulators, called “Hailstorm” devices, take advantage of similar flaws in 4G that let police snoop on newer phones and devices.

Some phone apps claim they can detect stingrays and other cell site simulators, but most produce wrong results.

But now researchers at the Electronic Frontier Foundation have discovered a new technique that can detect Hailstorm devices.

Enter the EFF’s latest project, dubbed “Crocodile Hunter” — named after the Australian conservationist Steve Irwin, who was killed by a stingray’s barb in 2006 — which helps detect cell site simulators by decoding nearby 4G signals to determine whether a cell tower is legitimate.

Every time your phone connects to the 4G network, it runs through a checklist — known as a handshake — to make sure that the phone is allowed to connect to the network. It does this by exchanging a series of unencrypted messages with the cell tower, including unique details about the user’s phone — such as its IMSI number and its approximate location. These messages, known as the master information block (MIB) and the system information block (SIB), are broadcast by the cell tower to help the phone connect to the network.
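Because the MIB and SIB are broadcast unencrypted, anyone with the right radio can read them. Schematically, a decoded broadcast might be modeled like this — a simplified sketch whose field names are stand-ins, not the exact structures from the LTE spec:

```python
from dataclasses import dataclass

@dataclass
class TowerBroadcast:
    # Simplified, hypothetical fields; the real MIB/SIB carry many more.
    mcc: str      # mobile country code, e.g. "310" for the U.S.
    mnc: str      # mobile network code identifying the carrier
    cell_id: int  # identifier of this particular cell
    tac: int      # tracking area code

    def network_id(self) -> str:
        # The network identity a phone checks when deciding whether to attach.
        return f"{self.mcc}-{self.mnc}"
```

A phone (or a monitoring tool) reads these fields off the air before any authentication happens, which is exactly what makes them useful for tower fingerprinting.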

“This is where the heart of all of the vulnerabilities lie in 4G,” said Cooper Quintin, a senior staff technologist at the EFF, who headed the research.

Quintin and fellow researcher Yomna Nasser, who authored the EFF’s technical paper on how cell site simulators work, found that collecting and decoding the MIB and SIB messages over the air can identify potentially illegitimate cell towers.

This became the foundation of the Crocodile Hunter project.

A rare public photo of a stingray, manufactured by Harris Corp. Image Credits: U.S. Patent and Trademark Office

Crocodile Hunter is open-source, allowing anyone to run it, but it requires a stack of both hardware and software to work. Once up and running, Crocodile Hunter scans for 4G cellular signals, begins decoding the tower data, and uses trilateration to visualize the towers on a map.
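The trilateration step is ordinary geometry: given three observation points and a range estimate to the tower from each, the tower’s position falls out of a small linear system. A minimal sketch, not Crocodile Hunter’s actual code (real range estimates derived from signal strength are far noisier):

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Estimate the (x, y) position of a transmitter from three known
    observation points p_i = (x_i, y_i) and range estimates r_i."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise leaves two linear
    # equations in x and y: A*x + B*y = C and D*x + E*y = F.
    A, B = 2 * (x2 - x1), 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D, E = 2 * (x3 - x2), 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = A * E - B * D
    if abs(det) < 1e-12:
        raise ValueError("observation points must not be collinear")
    return (C * E - B * F) / det, (A * F - C * D) / det
```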

But the system does require some thought and human input to find anomalies that could indicate a real cell site simulator. Those anomalies can look like towers that appear out of nowhere, towers that seem to move or don’t match known mappings of existing towers, or towers broadcasting MIB and SIB messages that don’t make sense.
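Checks like those can be sketched against a baseline survey of known towers. This is a simplified illustration with hypothetical baseline data and thresholds, not the project’s real logic:

```python
# Hypothetical baseline of known-good towers from an earlier survey:
# cell_id -> (lat, lon). Real baselines come from surveys or open databases.
KNOWN_TOWERS = {
    "310-410-0x1a2b": (37.7749, -122.4194),
}

HOME_MCC = "310"  # mobile country code for the U.S.

def flag_anomalies(observed):
    """Return reasons an observed tower deserves manual verification.

    `observed` needs keys: cell_id, lat, lon, mcc. A non-empty result is
    a lead to go check in person, not proof of a cell site simulator.
    """
    flags = []
    known = KNOWN_TOWERS.get(observed["cell_id"])
    if known is None:
        flags.append("tower absent from baseline survey (appeared out of nowhere)")
    else:
        lat, lon = known
        # Crude check: a supposedly fixed tower should not move.
        if abs(observed["lat"] - lat) > 0.01 or abs(observed["lon"] - lon) > 0.01:
            flags.append("tower position does not match known mapping")
    if observed["mcc"] != HOME_MCC:
        flags.append("broadcast carries an unexpected mobile country code")
    return flags
```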

That’s why verification is important, Quintin said, and it’s something stingray-detecting apps don’t do.

“Just because we find an anomaly, doesn’t mean we found the cell site simulator. We actually need to go verify,” he said.

In one test, Quintin traced a suspicious-looking cell tower to a truck outside a conference center in San Francisco. It turned out to be a legitimate mobile cell tower, contracted to expand the cell capacity for a tech conference inside. “Cells on wheels are pretty common,” said Quintin. “But they have some interesting similarities to cell site simulators, namely in that they are a portable cell that isn’t usually there and suddenly it is, and then leaves.”

In another test carried out earlier this year at the ShmooCon security conference in Washington, D.C., where cell site simulators have been found before, Quintin found two suspicious cell towers using Crocodile Hunter: one tower broadcasting a mobile network identifier associated with a Bermuda cell network, and another that didn’t appear to be associated with a cell network at all. Neither made much sense, given Washington, D.C. is nowhere near Bermuda.

Quintin said the project was aimed at helping to detect cell site simulators, but conceded that police will continue to use them for as long as cell networks remain vulnerable to their use, and fixing those flaws could take years.

Instead, Quintin said phone makers could do more at the device level to prevent attacks, such as letting users switch off access to legacy 2G networks, effectively allowing them to opt out of legacy stingray attacks. Meanwhile, cell networks and industry groups should work to fix the vulnerabilities that Hailstorm devices exploit.

“None of these solutions are going to be foolproof,” said Quintin. “But we’re not even doing the bare minimum yet.”


Send tips securely over Signal and WhatsApp to +1 646-755-8849 or send an encrypted email to: [email protected]

Decrypted: As tech giants rally against Hong Kong security law, Apple holds out

It’s not often Silicon Valley gets behind a single cause. Supporting net neutrality was one, reforming government surveillance another. Last week, Big Tech took up its latest: halting any cooperation with Hong Kong police.

Facebook, Google, Microsoft, Twitter, and even China-headquartered TikTok said last week they would no longer respond to demands for user data from Hong Kong law enforcement — read: Chinese authorities — citing the new unilaterally imposed Beijing national security law. Critics say the law, ratified on June 30, effectively kills China’s “one country, two systems” policy allowing Hong Kong to maintain its freedoms and some autonomy after the British handed over control of the city-state back to Beijing in 1997.

Noticeably absent from the list of tech giants pulling cooperation was Apple, which said it was still “assessing the new law.” What’s left to assess remains unclear, given the new powers explicitly allow authorities to conduct warrantless searches of data, intercept and restrict internet data, and censor information online — things that Apple has historically opposed, if not in so many words.

Facebook, Google and Twitter can live without China. They already do — both Facebook and Twitter are banned on the mainland, and Google pulled out after it accused Beijing of cyberattacks. But Apple cannot. China is at the heart of its iPhone and Mac manufacturing pipeline, and accounts for over 16% of its revenue — some $9 billion last quarter alone. Pulling out of China would be catastrophic for Apple’s finances and market position.

The move by Silicon Valley to cut off Hong Kong authorities from their vast pools of data may be largely symbolic, given any overseas data demands are first screened by the Justice Department in a laborious and frequently lengthy legal process. But by holding out, Apple is also sending its own message: its ardent commitment to human rights — privacy and free speech — stops at the border of Hong Kong.

Here’s what else is in this week’s Decrypted.


THE BIG PICTURE

Police used Twitter-backed Dataminr to snoop on protests

Decrypted: The tech police use against the public

There is a darker side to cybersecurity that’s frequently overlooked.

Just as you have an entire industry of people working to keep systems and networks safe from threats, commercial adversaries are working to exploit them. We’re not talking about red-teamers, who work to ethically hack companies from within. We’re referring to exploit markets that sell details of security vulnerabilities and the commercial spyware companies that use those exploits to help governments and hackers spy on their targets.

These for-profit surveillance companies flew under the radar for years but have recently gained notoriety. Now they’re getting unwanted attention from U.S. lawmakers.

In this week’s Decrypted, we look at the technologies police use against the public.


THE BIG PICTURE

Secrecy over protest surveillance prompts call for transparency

Last week we looked at how the Justice Department granted the Drug Enforcement Administration new powers to covertly spy on protesters. But that leaves a big question: What kind of surveillance do federal agencies have, and what happens to people’s data once it is collected?

While some surveillance is noticeable — overhead drones and police helicopters, for example — others are worried that law enforcement is using less obvious technologies, like facial recognition and access to phone records, CNBC reports. Many police departments around the U.S. also use “stingray” devices that spoof cell towers to trick cell phones into turning over their call, message and location data.

U.S. intelligence bill takes aim at commercial spyware makers

A newly released draft intelligence bill, passed by the Senate Intelligence Committee last week, would require the government to detail the threats posed by commercial spyware and surveillance technology.

The annual intelligence authorization bill, published Thursday, would take aim at private sector spyware makers, like NSO Group and Hacking Team, which build spyware and hacking tools designed to surreptitiously break into a victim’s devices for conducting surveillance. Both NSO Group and Hacking Team say they only sell their hacking tools to governments, but critics say their customers have included despotic and authoritarian regimes like Saudi Arabia and Bahrain.

If passed, the bill would instruct the Director of National Intelligence to submit a report to both House and Senate intelligence committees within six months on the “threats posed by the use by foreign governments and entities of commercially available cyber intrusion and other surveillance technology” against U.S. citizens, residents, and federal employees.

The report would also have to note if any spyware or surveillance technology is built by U.S. companies, and what export controls should apply to prevent that technology from getting into the hands of unfriendly foreign governments.

Sen. Ron Wyden (D-OR) was the only member of the Senate Intelligence Committee to vote against the bill, citing a “broken, costly declassification system,” but praised the inclusion of the commercial spyware provision.

Commercial spyware and surveillance technology became a mainstream talking point two years ago after the murder of Washington Post columnist Jamal Khashoggi, which U.S. intelligence concluded was personally ordered by Saudi crown prince Mohammed bin Salman, the country’s de facto leader. A lawsuit filed by a Saudi dissident and friend of Khashoggi accuses NSO Group of selling its mobile hacking tool, dubbed Pegasus, to the Saudi regime, which allegedly used the technology to spy on him shortly before Khashoggi’s murder. NSO denies the claims.

NSO is currently embroiled in a legal battle with Facebook for allegedly exploiting a now-fixed vulnerability in WhatsApp to deliver its spyware to the cell phones of 1,400 users, including government officials, journalists and human rights activists, using Amazon cloud servers based in the U.S. and Frankfurt.

In a separate incident, human rights experts at the United Nations have called for an investigation into allegations that the Saudi government used its spyware to hack into the phone of Amazon chief executive Jeff Bezos. NSO has denied the claims.


John Scott-Railton, a senior researcher at the Citizen Lab, part of the Munk School at the University of Toronto, told TechCrunch that the bill’s draft provisions “couldn’t come at a more important time.”

“Reporting throughout the security industry, as well as actions taken by Apple, Google, Facebook and others have made it clear that [spyware] is a problem at scale and is dangerous to U.S. national security and these companies,” said Scott-Railton. “Commercial spyware, when used by governments, is the ‘next Huawei’ in terms of the security of Americans and needs to be treated as a serious security threat,” he said.

“They brought this on themselves by claiming for years that everything was fine while evidence mounted in every sector of U.S. and global society that there was a problem,” he said.

U.S. government reportedly in talks with tech companies on how to use location data in COVID-19 fight

U.S. government officials are currently in discussion with a number of tech companies, including Facebook and Google, around how data from cell phones might provide methods for combatting the ongoing coronavirus pandemic, according to a new Washington Post report. The talks also include health experts tracking the pandemic and its transmission, and one possible way in which said data could be useful is through aggregated, anonymized location data, per the report’s sources.

Location data taken from the smartphones of Americans could help public health experts track and map the general spread of the infection, the group has theorized, though of course the prospect of any kind of location tracking is bound to leave people uncomfortable, especially when it’s done at scale and involves not only private companies with which they have a business relationship, but also the government.

These efforts, however, would be strictly aimed at helping organizations like the Centers for Disease Control and Prevention (CDC) get an overview of patterns, decoupled from any individual user identity. The Post’s sources stress that this would not involve the generation of any kind of government database, and would instead focus on anonymized, aggregated data to inform modeling of COVID-19 transmission and spread.
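As a toy illustration of aggregate-only reporting of this kind (the grid size and reporting threshold here are invented, not anything proposed in the talks): individual pings are bucketed into coarse grid cells, and only cells above a minimum count are reported, so no single person’s movements are exposed.

```python
from collections import Counter

def aggregate_pings(pings, cell_deg=0.1, min_count=5):
    """pings: iterable of (lat, lon) from many users. Buckets each ping
    into a coarse grid cell and reports only cells with at least
    `min_count` pings, so no individual trace is exposed."""
    counts = Counter(
        (int(lat // cell_deg), int(lon // cell_deg)) for lat, lon in pings
    )
    return {cell: n for cell, n in counts.items() if n >= min_count}
```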

Already, we’ve seen unprecedented collaboration among some of the largest tech companies in the world on matters related to the coronavirus pandemic. Virtually every large tech company that operates a product involved in information dissemination came together on Monday to issue a statement about working closely together in order to fight the spread of fraud and disinformation about the virus.

The White House has also been consulting with tech companies around the virus and the U.S. response, including via a meeting last week that included Amazon, Apple, Facebook, Google, Microsoft and Twitter. Amazon CEO Jeff Bezos has also been in regular contact with the current administration, as his company plays an increasingly central role in how people are dealing with essentially global guidelines of isolation, social distancing, quarantine and even shelter-in-place orders.

Earlier this week, an open letter co-signed by a lengthy list of epidemiologists, executives, physicians and academics also sought to outline what tech companies could contribute to the ongoing effort to stem the COVID-19 pandemic, and one of the measures suggested (directed at mobile OS providers Apple and Google specifically) is an “opt-in, privacy preserving OS feature to support contact tracing” for individuals who might have been exposed to someone with the virus.

Of course, regardless of assurances to the contrary, it’s natural to be suspicious of any widespread effort to collect personal data, especially since it’s historically been the case that in times of extreme duress, people have made trade-offs about personal freedoms and protections that have subsequently backfired. The New York Times also reported this week on an initiative to track the location data of people who have contracted the virus using an existing, previously undisclosed database of cellphone data from Israeli cellphone service providers and their customers.

Still, there’s good reason not to instantly dismiss the idea of trying to find some kind of privacy-protecting way of harnessing the information available to tech companies, since it does seem like a way to potentially provide a lot of benefit — particularly when it comes to measuring the impact of social distancing measures currently in place.

Show off your startup at TC Sessions: Mobility 2020

Remember when “mobility” meant laptops and cell phones? Those were quaint times. Now the category encompasses the future of transportation — everything from flying cars and autonomous vehicles to delivery bots and beyond. There’s no better place to explore this rapidly moving industry than TC Sessions: Mobility 2020, our day-long conference in San Jose on May 14.

And there’s no better place to showcase your early-stage mobility startup. Consider this: more than 1,000 of mobility’s brightest technologists, engineers, founders and investors will be on hand to explore the future of this rapidly evolving technology. So why not buy an Early-Stage Startup Exhibitor Package and plant your business squarely in the path of this group of enthusiastic influencers?

Your exhibitor package includes a 30-inch high-boy table, power, linen, signage — and four tickets to the event. You and your team can strut your startup stuff, take advantage of hyper-focused networking and still enjoy the event’s presentations and workshops.

We’re building our agenda, and we’ve just started announcing speakers on a rolling basis. Know someone who should be onstage at this event? Hit us up and nominate a speaker here.

We already told you that Waymo’s Boris Sofman and Ike Robotics’ Nancy Sun will join us. And we’re thrilled that Reilly Brennan, founding general partner of Trucks VC, a seed-stage venture capital fund for entrepreneurs, will also grace our stage. Brennan’s many investments include May Mobility, Nauto, nuTonomy, Joby Aviation, Skip and Roadster.

Will your startup be his next investment? Stranger things have happened.

TC Sessions: Mobility 2020 takes place on May 14 in San Jose, Calif. Spend a full day exploring the art and science of mobility, and don’t miss your chance to introduce your startup to influential movers and shakers. These are heady times in the mobility industry, and it’s moving faster than the race to market a viable flying car. Buy an Early-Stage Startup Exhibitor Package, and you might just transport your business to a whole new level.

Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility 2020? Contact our sponsorship sales team by filling out this form.

Arm focuses on AI with its new Cortex-M CPU and Ethos-U NPU

Arm today announced two new processors — or one and a half, depending on how you look at it. The company, which designs the chips that power the majority of the world’s cell phones and smart devices, launched both the newest Cortex-M processor (the M55) and the Arm Ethos-U55 micro neural processing unit (NPU).

Like its predecessors, the new Cortex-M55 is Arm’s processor for embedded devices. By now, its partners have manufactured over 50 billion chips based on the Cortex-M design. This latest version is obviously faster and more power-efficient, but Arm is mostly putting the emphasis on the chip’s machine learning performance. It says the M55, which is the first CPU based on Arm’s Helium technology for speeding up vector calculations, can run ML models up to 15 times faster than the previous version.

For many use cases, the M55 is indeed fast enough, but if you need more ML power, the Ethos-U55 will often be able to give device manufacturers that without having to step up into the Cortex-A ecosystem. Like Arm’s stand-alone Ethos NPUs, these chips are meant to speed up machine learning workloads. The U55, however, is a simpler design that only works in concert with recent Cortex-M processors like the M55, M33, M7 and M4. The combination of both designs can speed up machine learning performance by up to 480 times.


“When you look back at the most recent years, artificial intelligence has revolutionized how data analytics runs in the cloud and especially with the smartphones, it’s simply augmenting the user experience today,” Thomas Lorenser, Arm’s Director of Product Management, told me. “But the next thing or the next step is even more exciting to me: to get AI everywhere. And here we are talking about bringing the benefits of AI down to the IoT endpoints, including microcontrollers and therefore to a much larger scale of users and applications — literally billions more.”

That’s very much what this combination of Cortex-M and Ethos-U is all about. The idea here is to bring more power to the edge. For a lot of use cases, sending data to the cloud isn’t feasible, after all, and as Lorenser stressed, oftentimes turning on the radio and sending the data to the cloud uses more energy than running an AI model locally.

“Although I think a lot of the early discussions in AI were dominated by very loud voices in the cloud space, what we’ve been seeing is the innovation, the actual implementation and deployment, in the IoT space is massive and some of the use cases are fantastic,” Dennis Laudick, VP of Commercial and Marketing for Machine Learning at Arm, added.

We found a massive spam operation — and sunk its server

For ten days in March, millions were caught in the same massive spam campaign.

Each email looked like it came from someone the recipient knew: the spammer took stolen email addresses and passwords, quietly logged into each account, scraped the recently sent emails, and pushed out personalized emails to the recipients of those sent emails, each with a link to a fake site pushing a weight loss pill or a bitcoin scam.

The emails were so convincing more than 100,000 people clicked through.

We know this because a security researcher found the server leaking the entire operation. The spammer had forgotten to set a password.

Security researcher Bob Diachenko found the leaking data and, with help from TechCrunch, analyzed the server. At the time of the discovery, the spammer’s rig was no longer running. It had done its job, and the spammer had likely moved on to another server in an effort to avoid getting blacklisted by anti-spam providers. But the server was primed to start spamming again.

Given there were more than three million unique exposed credentials sitting on this spammer’s server, we wanted to secure the data as soon as possible. With no contact information for the spammer — surprise, surprise — we asked the hosting provider, Awknet, to pull the server offline. Within a few hours of making contact, the provider nullrouted the server, forcing all its network traffic into a sinkhole.
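For reference, a null route of the kind the provider applied can be as simple as one routing-table entry. This is a hedged illustration using a placeholder address from the documentation range, not Awknet’s actual configuration, and it requires root on the routing host:

```shell
# Drop every packet destined for the server by replacing its route
# with a blackhole entry (hypothetical address, requires root).
ip route add blackhole 203.0.113.45/32

# Confirm the kernel now discards traffic for that address.
ip route show 203.0.113.45/32
```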

TechCrunch provided a copy of the database to Troy Hunt. Anyone can now check breach notification site Have I Been Pwned to see if their email was misused.

But the dormant server — while it was still active — offered a rare opportunity to understand how a spam operation works.

The one thing we didn’t have was the spam email itself. We reached out to dozens of people to ask about the email they received. Two replied — but only one still had a copy of the email.

The email sent by the spammer. (Image: supplied)

“The same mail appeared on three occasions,” said one of the recipients in an email to TechCrunch. “The subject was related to an email I had sent previously to that person so the attacker had clearly got access to his mailbox or the mail server,” the victim said.

The email, when clicked, would direct the recipient through several websites in quick succession to determine where they were located, based on their IP address. If the recipient was in the U.S., they’d be pushed to a fake CNN site promoting a bogus health remedy. In this case, the spammer was targeting U.K. residents — and most were directed to a fake BBC page promoting a bitcoin scam.
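A sketch of how such a geo-targeting step might branch; the lookup table and URLs here are hypothetical stand-ins, not the operation’s real domains:

```python
# Hypothetical mapping from a visitor's country (derived from their IP
# address by an upstream geolocation step) to a scam landing page.
FAKE_LANDING_PAGES = {
    "US": "https://fake-cnn.example/diet-pill",
    "GB": "https://fake-bbc.example/bitcoin",
}

def pick_destination(country_code: str) -> str:
    # Visitors outside the targeted countries get a harmless page,
    # which also helps the operation evade automated scanners.
    return FAKE_LANDING_PAGES.get(country_code, "https://benign.example/")
```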

One of the fake pages. (Screenshot: TechCrunch)

The spammer had other servers that we had no visibility into, but the exposed server revealed many of the cogs and machinery of the operation. The server, running an Elasticsearch database, was well-documented enough that we found one of the three spam emails sent to our recipient.

This entry alone tells us a lot about how the spam operation worked.

A database record of one email sent by the spammer. (Screenshot: TechCrunch)

Here’s how it works. The spammer logs into a victim’s @btinternet.com email account using their stolen email address and password. The spammer pulls a recently sent email from the victim’s email server, which feeds into another server — such as inbox87.host or viewmsgcs.live — tasked with generating the personalized spam email. That email incorporates the subject line of the sent email and the target recipient’s email address to make it look like it’s being sent by the real person.
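Schematically, the personalization step works like this — a defanged sketch in which nothing is sent and the field names are illustrative:

```python
def build_lure(compromised_account, recent_subject, past_recipient, link):
    """Assemble the fields of one personalized lure message."""
    return {
        "from": compromised_account,         # account accessed with stolen credentials
        "to": past_recipient,                # someone the victim genuinely emailed before
        "subject": f"Re: {recent_subject}",  # echoes a real thread, so it reads as a follow-up
        "body": f"Thought you might want to see this: {link}",
    }
```

Because both the sender and the subject line come from a real prior conversation, the message passes the recipient’s quick plausibility check.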

Once the message is ready to send, it’s pushed through a proxy connection, designed to mask where the email has come from. The proxy server is made up of several cell phones, each connecting to the internet over their cellular connection.

Each spam message is routed through one of the phones, which occasionally rotates its IP address to prevent detection or being flagged as a spammer.

Here’s what that proxy server looks like.

The proxy server, comprising several cell phones with rotating IP addresses. (Screenshot: TechCrunch)

Once the spam message leaves the proxy server, it’s pushed through the victim’s own email provider using their email address and password, making it look like a genuine email to both the email provider and the recipient.

Now imagine that hundreds of times a second.

Not only was the spammer’s Elasticsearch database leaking, its Kibana user interface was also exposed. That gave the spammer a detailed at-a-glance look at the operation in action. It was so granular that you could see which spam-sending domains were the most efficient in tricking a recipient into clicking the link in the spam email.

The spammer’s Kibana dashboard, displaying the operation at a glance. (Screenshot: TechCrunch)

Each spam email included a tracker in the link that fed information back to the spammer. In bulk, that allowed the spammer to figure out which email domains’ users — like outlook.com or yahoo.com — are more likely to click on a spam email. That can also indicate how an email provider’s spam filter acts: the greater the number of clicks, the more of the spam is getting through — allowing the spammer to target specific email domains in the future.
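The kind of per-domain analytics the dashboard surfaced can be sketched in a few lines; the event format here is invented for illustration:

```python
from collections import defaultdict

def click_rate_by_domain(events):
    """events: iterable of (email_address, event) pairs, where event is
    'sent' or 'clicked'. Returns {domain: clicks/sends}, the per-domain
    click-through rate a dashboard like this would chart."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for address, event in events:
        domain = address.rsplit("@", 1)[-1].lower()
        if event == "sent":
            sent[domain] += 1
        elif event == "clicked":
            clicked[domain] += 1
    return {d: clicked[d] / sent[d] for d in sent if sent[d]}
```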

The dashboard also contained other information related to the spam campaign, such as how many emails were successfully sent and how many bounced. That helps the spammer home in on the most valuable logins in the future, allowing them to send more spam for lower bandwidth and server costs.

In all, some 5.1 million emails were sent during the 10-day campaign — between March 8 and March 18, with some 162,980 people clicking on the spam email, according to the data on the dashboard.

It’s not the first time we’ve seen a spam operation in action, but it’s rare to see how successful it is.

“This case reminds me on several other occasions I reported at some points in the past — when malicious actors create a sophisticated system of proxying and logging, leaving so much tracks to identify their patterns for authorities in the investigations to come,” Diachenko told TechCrunch. “This shows us — again! — how important a proper cyber hygiene should be.”

What’s clear is that the spammer knows how to cover their tracks.

The language settings in the Kibana instance suggested the spammer may be based in Belgium. We found several other associated spamming domains using data collected by RiskIQ, a cyberthreat intelligence firm, which scours the web for information. Of the domains we found, all were registered with fake names and addresses.

As for the server itself, the provider said it was possibly hacked.

“This was a resold box and the customer already responded to the abuse forward saying it was supposed to have been terminated long ago,” said Awknet’s Justin Robertson in an email to TechCrunch.

Since the hosting provider pulled the spammer’s server offline, several of their fake sites and domains associated with the spam campaign no longer load.

But given the spread of domains and servers propping up the campaign, we suspect the sunken server is only a single casualty in an otherwise ongoing spam campaign.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.