Millions of Venmo transactions scraped in warning over privacy settings

A computer science student has scraped seven million Venmo transactions to prove that users’ public activity can still be easily obtained, a year after a privacy researcher downloaded hundreds of millions of Venmo transactions in a similar feat.

Dan Salmon said he scraped the transactions over a cumulative six months to raise awareness and to warn users to set their Venmo payments to private.

The peer-to-peer mobile payments service faced criticism last year after Hang Do Thi Duc, a former Mozilla fellow, downloaded 207 million transactions. The scraping effort was possible because Venmo payments between users are public by default. The scrapable data inspired several new projects — including a bot that tweeted out every time someone bought drugs.

A year on, Salmon showed that little has changed and that it’s still easy to download millions of transactions through the company’s developer API without obtaining user permission or needing the app.

Using that data, anyone can look at a user’s entire public transaction history: who they shared money with, when, and in some cases for what reason — including illicit goods and substances.

“There’s truly no reason to have this API open to unauthenticated requests,” he told TechCrunch. “The API only exists to provide like a scrolling feed of public transactions for the home page of the app, but if that’s your goal then you should require a token with each request to verify that the user is logged in.”
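
The fix Salmon describes is easy to picture: verify a session token on every request before returning any feed data. Below is a minimal sketch in Python using Flask; the route, token store and helper names are hypothetical illustrations, not Venmo’s actual API.

```python
# Minimal sketch of a token-gated public feed, per Salmon's suggestion.
# Flask is used for illustration; nothing here is Venmo's real API.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In a real service this would be a session-store lookup, not a set.
VALID_TOKENS = {"example-session-token"}

@app.route("/feed/public")
def public_feed():
    # Require proof of a logged-in user with every request.
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)  # unauthenticated scrapers get nothing
    return jsonify({"transactions": []})  # placeholder payload
```

An unauthenticated bulk scraper would then receive 401 errors instead of pages of public transactions, which is all Salmon is asking for.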

He published the scraped data on his GitHub page.

Venmo has done little to curb the privacy issue for its 40 million users since the scraping effort blew up a year ago. Venmo reacted by changing its privacy guide, and later updated its app to remove a warning shown when users changed their default privacy setting from public to private.

Instead, Venmo has focused on making the data more difficult to scrape rather than addressing the underlying privacy issues.

When Dan Gorelick first sounded the alarm on Venmo’s public data in 2016, few limits on the API meant anyone could scrape data in bulk and at speed. Other researchers, like Johnny Xmas, have since said that Venmo restricted its API to limit what historical data can be collected. But Venmo’s most recent limits still allowed Salmon to pull 40 transactions per minute. That amounts to about 57,600 scraped transactions each day, he said.
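
That daily figure follows directly from the per-minute limit; a quick back-of-the-envelope check using the numbers Salmon gave:

```python
# Back-of-the-envelope check on the scraping throughput Salmon described.
per_minute = 40
per_day = per_minute * 60 * 24
print(per_day)  # 57600 transactions per day

# Scraping around the clock for a cumulative six months (~182 days)
# bounds the haul at roughly 10.5 million, so a 7 million haul fits.
print(per_day * 182)  # 10483200
```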

Last year, PayPal — which owns Venmo — settled with the Federal Trade Commission over privacy and security violations. The company was criticized for misleading users over its privacy settings. The FTC said users weren’t properly informed that some transactions would be shared publicly, and that Venmo misrepresented the app’s security by saying it was “bank-grade,” which the FTC disputed.

Juliet Niczewicz, a spokesperson for PayPal, did not return a request for comment.

After Equifax breach, US watchdog says agencies aren’t properly verifying identities

A federal watchdog says the government should stop relying on the credit agencies to verify the identities of those using government services.

In a report out this week, the Government Accountability Office said several government departments still rely on the credit agencies — Equifax, Experian and TransUnion — to check that a person is who they say they are before they can access services online.

Agencies like the U.S. Postal Service, the Social Security Administration, Veterans Affairs, and the Centers for Medicare and Medicaid Services ask a new user several questions and match the answers to information held in the individual’s credit file. The logic is that these credit files contain information only the person signing up for services would know.
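
In effect, knowledge-based verification amounts to matching a user’s answers against fields in a credit file, which is exactly why a leaked file defeats it. A toy sketch in Python (the record fields and pass threshold are invented for illustration):

```python
# Toy model of knowledge-based verification (KBV): the agency asks
# questions whose answers are assumed to live only in the applicant's
# credit file. Fields and threshold are invented for illustration.
CREDIT_FILE = {
    "previous_street": "ELM ST",
    "auto_lender": "EXAMPLE BANK",
    "mortgage_payment_range": "$1,000-$1,500",
}

def kbv_check(answers, record, threshold=3):
    matches = sum(
        answers.get(field, "").strip().upper() == value
        for field, value in record.items()
    )
    return matches >= threshold

# Anyone holding a stolen copy of the credit file passes trivially,
# which is the failure mode the GAO is warning about.
print(kbv_check(dict(CREDIT_FILE), CREDIT_FILE))  # True
```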

But following the Equifax breach in 2017, those answers are no longer safe, the watchdog said.

The Equifax breach resulted in the theft of the personal data of 148 million consumers. Much of that consumer financial data had been collected without the explicit permission of the people it described. An investigation later found the breach was “entirely preventable” had the credit agency employed basic security measures.

“The risk that an attacker could obtain and use an individual’s personal information to answer knowledge-based verification questions and impersonate that individual led the National Institute of Standards and Technology (NIST) to issue guidance in 2017 that effectively prohibits agencies from using knowledge-based verification for sensitive applications,” wrote the watchdog.

In response, the named agencies said the cost of new verification systems is too high and that alternatives may exclude certain demographics.

Only Veterans Affairs has implemented a new system, though it still relies on knowledge-based verification in some cases.

The other downside is that if you have no credit, you simply don’t show up in these systems. You need a credit card or some kind of loan in order to “appear” in the eyes of credit agencies. That’s a major problem for the millions who have no credit file, like foreign nationals working in the U.S. on a visa. In 2015, some 26 million people were estimated to be “credit invisible.”

“Nevertheless, until these agencies take steps to eliminate their use of knowledge-based verification, the individuals they serve will remain at increased risk of identity fraud,” wrote the watchdog.

Black Hat scraps Rep. Will Hurd as keynote speaker amid voting record controversy

Rep. Will Hurd will no longer give the keynote address at the Black Hat security conference amid questions about his voting record on women’s rights.

Hurd, a Texas Republican congressman, was scheduled to headline the conference later this year, but the organizers walked back the decision a day later.

“Black Hat has chosen to remove U.S. Representative Will Hurd as our 2019 Black Hat USA Keynote. We misjudged the separation of technology and politics,” said a statement. “We will continue to focus on technology and research, however we recognize that Black Hat USA is not the appropriate platform for the polarizing political debate resulting from our choice of speaker.”

“We are still fully dedicated to providing an inclusive environment and apologize that this decision did not reflect that sentiment,” the statement added.

A new keynote speaker has not yet been announced.

We reported yesterday that some in the security community described their unease with the decision to appoint Hurd as keynote speaker. Hurd has consistently voted against legislation supporting women’s rights, including a bill that would have financially supported women in STEM fields, and has voted in favor of allowing states to restrict access to and coverage of abortion, and of defunding women’s health organizations like Planned Parenthood.

Critics said the move alienated women at a time when diversity in security remains a challenge. Others criticized the choice of speaker over his views, calling access to women’s healthcare a human right.

Several long-time Black Hat attendees said on Twitter that they would not attend the conference following news of Hurd’s keynote.

Hurd’s communications director Katie Thompson did not respond to a request for comment.

Thousands of medical injury claim records exposed by ad agency

An internet advertising company specializing in helping law firms sign up potential clients has exposed close to 150,000 records from a database that was left unsecured.

The database contained submissions collected as part of a lead-generation effort by X Social Media, a Florida-based ad firm that largely uses Facebook to advertise campaigns for its law firm customers. Law firms pay the ad company to set up individual websites that aim to sign up victims of specific categories of harm and injury — medical implants, malpractice, sexual abuse and more — who submit their information in the hope of receiving legal relief.

But the database was left unprotected and without a password, allowing anyone to look inside.

Security researchers Noam Rotem and Ran Locar found the database and reported it to the company, which pulled the database offline. The researchers also shared their discovery exclusively with TechCrunch and posted their findings on vpnMentor.

The database contained names, addresses, phone numbers, the date and time of a person’s submission and the circumstances and explanation of their accident, injury or illness. Often this included personal health information, sensitive medical information, details of procedures or the consumption of certain medications or specifics of traumatic events.

Several records seen by TechCrunch came from campaigns targeting combat veterans who were injured on duty. Other campaigns sought to sign up those who suffered illnesses from pesticides or medications.

Other campaigns solicited claims for sexual abuse. We found several names, postal and email addresses and phone numbers of victims, many of whom also described their sexual abuse as part of filling out the website form.

One of the records in the database. (Image: supplied)

The researchers said the exposed data could be “easily traced” back to the individuals who filled out the website forms.

The exposed database also contained a list of more than 300 law firms that paid X Social Media to set up the lead-generation operation, along with records of how much each firm paid the ad company — in some cases amounting to tens of thousands of dollars. It also held the ad company’s bank routing and account numbers, which law firms used to pay the company for its services.

In reporting this story, we found a second, smaller database. In an effort to get the database secured, we provided the IP address to Jacob Malherbe, founder of X Social Media, in an email. Within an hour, the database had been pulled offline.

Despite this, Malherbe denied that the company stored medical data, described the findings as “inaccurate” and asked that we “direct all other emails to our company lawyers.”

When presented with several files containing the data, Malherbe responded:

After being notified by TechCrunch about a security problems in MongoDB the X Social Media developer team immediately shut down the vulnerability create [sic] by a MongoDB database and did a night long log file review and we only found the two IP addresses, associated with TechCrunch accessing our database. Our log files show that nobody else accesses the database while in transit. We will continue to investigating this incident and work closely with state and Federal agencies as more information becomes available.

When asked, Malherbe declined to provide the logs to verify his claims. The company also wouldn’t say how long the database was exposed.

This is the latest exposed database found by the researchers in recent months.

The researchers have previously found data leaking on Fortune 500 firm Tech Data, exposed user records and private messages of Jewish dating app JCrush and leaking data from Canadian cell network Freedom Mobile and online retailer Gearbest.

Rep. Will Hurd’s Black Hat keynote draws ire over his women’s rights voting record

A decision to confirm Rep. Will Hurd as the keynote speaker at the Black Hat security conference this year has prompted anger and concern among some long-time attendees because of his voting record on women’s rights.

Hurd, an outspoken Texas Republican who has drawn fire from his own party for regularly opposing the Trump administration, was confirmed as keynote speaker at the conference Thursday for his background in cybersecurity. Since taking office in Texas’ 23rd district, the congressman has introduced several bills aimed at securing Internet of Things devices and has pushed to reauthorize the role of a federal chief information officer.

But several people we’ve spoken to described their unease that Black Hat organizers asked Hurd, a self-described pro-life lawmaker, to speak, given his consistent opposition to bills supporting women’s rights.

An analysis of Hurd’s voting record shows he has supported bills promoting women’s rights only two percent of the time. He has voted against a bill that would have financially supported women in STEM fields, in favor of allowing states to restrict access to and coverage of abortion, and to defund Planned Parenthood.

Many of those we spoke to asked to remain anonymous, citing worries of retaliation or personal attacks. One person who gave us permission to quote them said Hurd’s voting record was “simply awful” for women’s rights. Others said in tweets that the move doesn’t reflect well on companies sponsoring the event.

Black Hat says it aims to create an “inclusive environment,” but some have questioned how a political figure whose views cause harm to an entire gender can be considered inclusive. And at a time when women’s rights — including the right to access abortion — are being all but outlawed by controversial measures in several states, some have found Hurd’s selection tone-deaf and offensive.

When asked, a spokesperson for Black Hat defended the decision for Hurd to speak:

“Hurd has a strong background in computer science and information security and has served as an advocate for specific cybersecurity initiatives in Congress,” said the spokesperson. “He will offer the Black Hat audience a unique perspective of the infosec landscape and its effect on the government.”

Although previous keynote speakers have included senior government figures, this is the first time Black Hat has confirmed a lawmaker to keynote the conference.

Although abortion rights and cybersecurity are unrelated topics, it’s becoming increasingly difficult to separate social issues from technology and industry gatherings. It’s also valid for attendees to express concern that the keynote speaker at a professional security conference opposes what many consider a human right.

Kat Fitzgerald, chief operating officer of the Diana Initiative, a conference for women in cybersecurity, said Hurd was a “painfully poor choice” for a keynote speaker. “Simply put, in 2019 women and minorities continue to be ignored,” she said. “This keynote selection, regardless of the voting record, is just another indication of ignoring the InfoSec community that exists today.”

The Diana Initiative, which hosts its annual conference in August, is “about inclusion at all levels, especially in today’s charged environment of excluding women and minorities in so many areas,” said Fitzgerald.

Hurd’s office did not return a request for comment.

A widely used infusion pump can be remotely hijacked, say researchers

An infusion pump widely used in hospitals and medical facilities has critical security flaws that allow it to be remotely hijacked and controlled, according to security researchers.

Researchers at healthcare security firm CyberMDX found two vulnerabilities in the Alaris Gateway Workstation, developed by medical device maker Becton Dickinson.

Infusion pumps are one of the most common bits of kit in a hospital. These devices control the dispensing of intravenous fluids and medications, like painkillers or insulin. They’re often hooked up to a central monitoring station so medical staff can check on multiple patients at the same time.

But the researchers found that an attacker could install malicious firmware on a pump’s onboard computer, which powers, monitors and controls the infusion pumps. That onboard computer runs Windows CE, the operating system commonly used in pocket PCs before smartphones.

In the worst case scenario, the researchers said it would be possible to adjust specific commands on the pump — including the infusion rate — on certain versions of the device by installing modified firmware.

The researchers said it was also possible to remotely brick the onboard computer, knocking the pump offline.

The bug received a rare maximum score of 10.0 on the industry-standard Common Vulnerability Scoring System, according to Homeland Security’s advisory. A second vulnerability, scored at a lesser 7.5 out of 10.0, could allow an attacker to gain access to the workstation’s monitoring and configuration interfaces through the web browser.

Creating an attack kit was “quite easy” and “worked consistently,” said Elad Luz, CyberMDX’s head of research, in an email to TechCrunch. But the attack chain is complex, requiring multiple steps, access to the hospital network, knowledge of the workstation’s IP address and the capability to write custom malicious code.

In other words, there are far easier ways to kill a patient than exploiting these bugs.

CyberMDX disclosed the vulnerabilities to Becton Dickinson in November and to federal regulators.

Becton Dickinson said device owners should update to the latest firmware, which contains fixes for the vulnerabilities. Spokesperson Troy Kirkpatrick said the pump is not sold in the U.S., but would not say how many devices were vulnerable “for competitive reasons.”

“There are about 50 countries that have these devices,” said Kirkpatrick. He confirmed that eight countries have more than 1,000 devices and three have more than 2,000, but that no country has more than 3,000.

The flaws are another reminder that security issues can exist in any device — particularly life-saving equipment in the medical space.

Earlier this year, Homeland Security warned about a set of critical-rated vulnerabilities in Medtronic defibrillators. The government-issued alert said the device’s proprietary radio communications protocol did not require authentication, allowing a nearby attacker in certain circumstances to intercept and modify commands over-the-air.

Every secure messaging app needs a self-destruct button

The growing presence of encrypted communications apps makes a lot of communities safer and stronger. But the possibility of physical device seizure and government coercion is growing as well, which is why every such app should have some kind of self-destruct mode to protect its user and their contacts.

End-to-end encryption like that in Signal and WhatsApp (and in Telegram’s opt-in secret chats) is great at preventing governments and other malicious actors from accessing your messages while they are in transit. But as with nearly all cybersecurity matters, physical access to the device, the user or both changes things considerably.

For example, take this Hong Kong citizen who was forced to unlock their phone and reveal their followers and other messaging data to police. It’s one thing to do this with a court order to see if, say, a person was secretly cyberstalking someone in violation of a restraining order. It’s quite another to use as a dragnet for political dissidents.

This particular protestor ran a Telegram channel that had a number of followers. But it could just as easily be a Slack room for organizing a protest, or a Facebook group, or anything else. For groups under threat from oppressive government regimes it could be a disaster if the contents or contacts from any of these were revealed to the police.

Just as you should be able to choose exactly what you say to police, you should be able to choose how much your phone can say as well. Secure messaging apps should be the vanguard of this capability.

There are already some dedicated “panic button” type apps, and Apple has thoughtfully developed an “emergency mode” (activated by hitting the power button five times quickly) that disables biometric unlock until the passcode is entered; the phone can also be set to wipe itself after too many failed unlock attempts. That’s effective against “Apple pickers” trying to steal a phone, or during border or police stops where you don’t want to show ownership by unlocking the phone with your face.

Those are useful and we need more like them — but secure messaging apps are a special case. So what should they do?

The best-case scenario, where you have all the time in the world and internet access, isn’t really an important one. You can always delete your account and data voluntarily. What needs work is deleting your account under pressure.

The next best-case scenario is that you have perhaps a few seconds or at most a minute to delete or otherwise protect your account. Signal is very good about this: The deletion option is front and center in the options screen, and you don’t have to input any data. WhatsApp and Telegram require you to put in your phone number, which is not ideal — fail to do this correctly and your data is retained.

Signal, left, lets you get on with it. You’ll need to enter your number in WhatsApp (right) and Telegram.

Obviously it’s also important that these apps don’t let users accidentally and irreversibly delete their account. But perhaps there’s a middle road whereby you can temporarily lock it for a preset period, after which it deletes itself if not unlocked manually. Telegram does have self-destructing accounts, but the shortest self-destruct period you can set is a month.

What really needs improvement is emergency deletion when your phone is no longer in your control. This could be a case of device seizure by police, or perhaps being forced to unlock the phone after you have been arrested. Whatever the case, there need to be options for a user to delete their account outside the ordinary means.

Here are a few options that could work:

  • Trusted remote deletion: Selected contacts are given the ability via a one-time code or other method to wipe each other’s accounts or chats remotely, no questions asked and no notification created. This would let, for instance, a friend who knows you’ve been arrested remotely remove any sensitive data from your device.
  • Self-destruct timer: Like Telegram’s feature, but better. If you’re going to a protest, or have been “randomly” selected for additional screening or questioning, you can just tell the app to delete itself after a certain duration (as little as a minute perhaps) or at a certain time of the day. Deactivate any time you like, or stall for the five required minutes for it to trigger.
  • Poison PIN: In addition to a normal unlock PIN, users can set a poison PIN that, when entered, has a variety of user-selectable effects. Delete certain apps, clear contacts, send prewritten messages, unlock or temporarily hard-lock the device, etc. (A rough sketch of this idea follows the list.)
  • Customizable panic button: Apple’s emergency mode is great, but it would be nice to be able to attach conditions like the poison PIN’s. Sometimes all someone can do is smash that button.
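
To make the poison PIN idea concrete, here is roughly how it might sit in an app’s unlock path. This is a hypothetical sketch in Python, not any real app’s implementation; the action callbacks stand in for whatever the user configures.

```python
# Rough sketch of a poison-PIN unlock path. Hypothetical design, not
# drawn from Signal, WhatsApp or Telegram; actions are illustrative.
import hashlib, hmac, os

def _hash_pin(pin, salt):
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

class PinHandler:
    def __init__(self, real_pin, poison_pin, poison_actions):
        self.salt = os.urandom(16)
        self.real = _hash_pin(real_pin, self.salt)
        self.poison = _hash_pin(poison_pin, self.salt)
        self.poison_actions = poison_actions  # user-selected callbacks

    def unlock(self, entered):
        digest = _hash_pin(entered, self.salt)
        if hmac.compare_digest(digest, self.poison):
            for action in self.poison_actions:
                action()  # e.g. wipe chats, notify a trusted contact
            return True   # unlock normally so nothing looks amiss
        return hmac.compare_digest(digest, self.real)

handler = PinHandler("1234", "4321", [lambda: print("wiping sensitive chats")])
handler.unlock("4321")  # triggers the poison actions, then unlocks
```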

Obviously these open new avenues for calamity and abuse as well, which is why they will need to be explained carefully and perhaps initially hidden in “advanced options” and the like. But overall I think we’ll be safer with them available.

Eventually these roles may be filled by dedicated apps or by the developers of the operating systems on which they run, but it makes sense for the most security-forward app class out there to be the first in the field.

Newly public CrowdStrike wants to become the Salesforce of cybersecurity

Like many good ideas, CrowdStrike, a seller of subscription-based software that protects companies from breaches, began as a few notes scribbled on a napkin in a hotel lobby.

The idea was to leverage new technology to create an endpoint protection platform powered by artificial intelligence that would blow incumbent solutions out of the water. McAfee, Palo Alto Networks and Symantec, long-time leaders in the space, had been too slow to embrace new technologies and companies were suffering, the CrowdStrike founding team surmised.

Co-founders George Kurtz and Dmitri Alperovitch, a pair of former McAfee executives, weren’t strangers to legacy cybersecurity tools. McAfee had for years been a dominant player in endpoint protection and antivirus. At least, until the emergence of cloud computing.

Since 2012, CrowdStrike’s Falcon Endpoint Protection platform has been pushing those incumbents into a new era of endpoint protection. By helping enterprises across the globe battle increasingly complex attack scenarios more efficiently, CrowdStrike, as well as other fast-growing cybersecurity upstarts, has redefined company security standards much like Salesforce redefined how companies communicate with customers.

“I think we had the foresight that [CrowdStrike] was going to be a foundational element for security,” CrowdStrike chief executive officer George Kurtz told TechCrunch this morning.

CrowdStrike co-founder and CEO George Kurtz.

Facebook collected device data on 187,000 users using banned snooping app

Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.

The social media giant said in a letter to Sen. Richard Blumenthal’s office — which TechCrunch obtained — that it collected data on 31,000 users in the U.S., including 4,300 teenagers. The rest of the collected data came from users in India.

Earlier this year, a TechCrunch investigation found both Facebook and Google were abusing their Apple-issued enterprise developer certificates, which are designed to allow employees to run iPhone and iPad apps used only inside the company. The investigation found the companies were building and providing apps for consumers outside Apple’s App Store, in violation of Apple’s rules. The apps paid users in return for collecting data on how participants used their devices, gaining access to all of the network data in and out of each device to understand app habits.

Apple banned the apps by revoking Facebook’s enterprise developer certificate — and later Google’s. In doing so, the revocation knocked offline both companies’ fleets of internal iPhone and iPad apps that relied on the same certificates.

But in response to lawmakers’ questions, Apple said it didn’t know how many devices installed Facebook’s rule-violating app.

“We know that the provisioning profile for the Facebook Research app was created on April 19, 2017, but this does not necessarily correlate to the date that Facebook distributed the provisioning profile to end users,” said Timothy Powderly, Apple’s director of federal affairs, in his letter.

Facebook said the app dated back to 2016.

TechCrunch also obtained the letters Apple and Google sent to lawmakers in early March, which were never made public.

These “research” apps relied on willing participants to download the app from outside the App Store and use the Apple-issued developer certificates to install it. Each app would then install a root network certificate, allowing it to collect all of the data leaving the device — like web browsing histories, encrypted messages and mobile app activity — potentially including data from participants’ friends — for competitive analysis.

A response by Facebook about the number of users involved in Project Atlas (Image: TechCrunch)

In Facebook’s case, the research app — dubbed Project Atlas — was a repackaged version of its Onavo VPN app, which Facebook was forced to remove from Apple’s App Store last year for gathering too much device data.

Just this week, Facebook relaunched its research app as Study, only available on Google Play and for users who have been approved through Facebook’s research partner, Applause. Facebook said it would be more transparent about how it collects user data.

Facebook’s vice president of public policy Kevin Martin defended the company’s use of enterprise certificates, saying it “was a relatively well-known industry practice.” When asked, a Facebook spokesperson didn’t quantify this further. Later, TechCrunch found dozens of apps that used enterprise certificates to evade the app store.

Facebook previously said it “specifically ignores information shared via financial or health apps.” In its letter to lawmakers, Facebook stuck to its guns, saying its data collection was focused on “analytics,” but confirmed “in some isolated circumstances the app received some limited non-targeted content.”

“We did not review all of the data to determine whether it contained health or financial data,” said a Facebook spokesperson. “We have deleted all user-level market insights data that was collected from the Facebook Research app, which would include any health or financial data that may have existed.”

But Facebook didn’t say what kind of data that included, only that the app didn’t decrypt “the vast majority” of data sent by a device.

Facebook describing the type of data it collected — including “limited, non-targeted content” (Image: TechCrunch)

Google’s letter, penned by public policy vice president Karan Bhatia, did not provide a number of devices or users, saying only that its app was a “small scale” program. When reached, a Google spokesperson did not comment by our deadline.

Google also said it found “no other apps that were distributed to consumer end users,” but confirmed there were several other apps used by the company’s partners and contractors, which no longer rely on enterprise certificates.

Google explaining which of its apps were improperly using Apple-issued enterprise certificates (Image: TechCrunch)

Apple told TechCrunch that both Facebook and Google “are in compliance” with its rules as of the time of publication. At its annual developer conference last week, the company said it now “reserves the right to review and approve or reject any internal use application.”

Facebook’s willingness to collect this data from teenagers — despite constant scrutiny from press and regulators — demonstrates how valuable the company considers market research on its competitors. With its paid research program restarted, this time with greater transparency, the company continues to leverage its data collection to keep ahead of its rivals.

Facebook and Google came off worse in the enterprise app abuse scandal, but critics said that, in revoking enterprise certificates, Apple showed it retains too much control over what content customers can have on their devices.

The Justice Department and the Federal Trade Commission are said to be examining the big four tech giants — Apple, Amazon, Facebook and Google-owner Alphabet — for potentially falling afoul of U.S. antitrust laws.

UK carriers warn over ongoing Huawei 5G uncertainty: Report

UK mobile network operators have drafted a letter pressing the government for greater clarity on Chinese tech giant Huawei’s involvement in domestic 5G infrastructure, according to a report by the BBC.

Huawei remains under a cloud of security suspicion attached to its relationship with the Chinese state, which in 2017 passed legislation giving authorities more direct control over the operations of internet-based companies — leading to fears the state could repurpose network kit supplied by Huawei as a conduit for foreign spying.

Back in April, press reports emerged suggesting the UK government intended to give Huawei a limited role in 5G infrastructure — for ‘non-core’ parts of the network — despite multiple cabinet ministers apparently raising concerns about any role for the Chinese tech giant. The UK government has not officially confirmed the leaks.

In the draft letter UK operators warn the government that the country risks losing its position as a world leader in mobile connectivity as a result of ongoing uncertainty attached to Huawei and 5G, per the BBC’s report.

The broadcaster says it has reviewed the letter, which is intended to be sent to the cabinet secretary, Mark Sedwill, as soon as this week.

It also reports that operators have asked for an urgent meeting between industry leaders and the government to discuss their concerns — saying they can’t invest in 5G infrastructure while uncertainty over the use of Chinese tech persists.

The BBC’s report does not name which operators have put their names to the draft letter.

We reached out to the major UK mobile network operators for comment.

A spokesperson for BT, which owns the mobile brand EE — and was the first to go live with a consumer 5G service in the UK last month — told us: “We are in regular contact with UK government around this topic, and continue to discuss the impact of possible regulation on UK telecoms networks.”

A Vodafone spokesperson added: “We do not comment on draft documents. We would ask for any decision regarding the future use of Huawei equipment in the UK not to be rushed but based on all the facts.”

At the time of writing, Orange, O2 and 3 had not yet responded to requests for comment.

A report in March by a UK oversight body set up to evaluate Huawei’s security was damning — describing “serious and systematic defects” in its software engineering and cyber security competence, although it resisted calls for an outright ban.

Reached for comment on the draft letter, a spokesperson for the Department for Digital, Culture, Media and Sport told us it has not yet received it — but sent the following statement:

The security and resilience of the UK’s telecoms networks is of paramount importance. We have robust procedures in place to manage risks to national security and are committed to the highest possible security standards.

The Telecoms Supply Chain Review will be announced in due course. We have been clear throughout the process that all network operators will need to comply with the Government’s decision.

The spokesperson added that the government has undertaken extensive consultation with industry as part of its review of the 5G supply chain, in addition to regular engagement, and emphasized that it is for network operators to confirm the details of any steps they have taken in upgrading their networks.

Carriers are aware they must comply with the government’s final decision, the spokesperson added.

At the pan-Europe level, the European Commission has urged member states to step up individual and collective attention on network security to mitigate potential risks as they roll out 5G networks.

The Commission remains very unlikely to try to impose 5G supplier bans itself. Its interventions so far call for EU member states to pay close attention to network security, and help each other by sharing more information, with the Commission also warning of the risk of fragmentation to its flagship “digital single market” project if national governments impose individual bans on Chinese kit vendors.