Using AI responsibly to fight the coronavirus pandemic

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with the number of cases exceeding a million and the number of deaths surpassing 50,000 worldwide. It is a situation that will affect us all in one way or another.

With the imposition of lockdowns, limitations on movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and the security services tasked with protecting the public from harm has suddenly become far more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.

For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones patrolling and broadcasting audio messages encouraging the public to stay at home. People arriving at Hong Kong's airport receive monitoring bracelets that alert the authorities if they breach quarantine by leaving their home.

In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.

Isolated cases or the new norm?

With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.

While the application of AI can play an important role in getting a grip on this crisis, and can even help safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns and undermine the trust communities place in government. Human rights, civil liberties and the fundamental principles of law may be eroded if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.

On March 19, the monitors for freedom of expression and freedom of the media for the United Nations and the Inter-American Commission on Human Rights, together with the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe, issued a joint statement on promoting and protecting access to and the free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that active efforts to confront the pandemic are needed, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”

This is not an easy task, but a necessary one. So what can we do?

Ways to responsibly use AI to fight the coronavirus pandemic

  1. Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner (see the sketch after this list). This option still gives governments the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
  2. Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data collected are used exclusively for public health purposes. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for it to become a global principle for AI.
  3. Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This applies on many levels, including the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared and transparency promoted.
  4. Time limitation: Although the end of this pandemic seems rather far away at this point, it will come. When it does, national authorities will need to scale back the monitoring capabilities they have acquired. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.
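To make the anonymization point above concrete, here is a minimal sketch, in Python, of the kind of aggregation a health authority might apply before sharing mobility data: individual pings are snapped to coarse grid cells and hourly windows, and any group smaller than a minimum size is suppressed. The field names, grid size and threshold are illustrative assumptions, not a description of any particular country's pipeline.

```python
from collections import defaultdict

# Hypothetical raw records: (device_id, latitude, longitude, hour_of_day).
# The field names, grid size and k=10 threshold are illustrative assumptions.
K_ANONYMITY_THRESHOLD = 10
GRID_DEGREES = 0.05  # roughly 5 km cells at mid-latitudes

def aggregate_mobility(pings):
    """Count distinct devices per coarse grid cell and hour,
    suppressing any group smaller than the k-anonymity threshold."""
    groups = defaultdict(set)
    for device_id, lat, lon, hour in pings:
        cell = (round(lat / GRID_DEGREES), round(lon / GRID_DEGREES), hour)
        groups[cell].add(device_id)  # keep only distinct devices per cell
    # Publish counts for large groups only; drop small groups entirely.
    return {cell: len(devices)
            for cell, devices in groups.items()
            if len(devices) >= K_ANONYMITY_THRESHOLD}

if __name__ == "__main__":
    sample = [("a", 48.20, 16.37, 9), ("b", 48.21, 16.36, 9), ("c", 48.20, 16.37, 9)]
    print(aggregate_mobility(sample))  # {} — a group of 3 devices is suppressed
```

The design choice matters: authorities receive movement trends for large groups, while any cell that would reveal information about only a handful of people never leaves the source.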

Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate the potential pitfalls associated with these technologies.

Working closely with the International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.

While the COVID-19 pandemic has produced several innovative use cases, and underscored the urgency for governments to do their utmost to stop the spread of the virus, it is important not to let fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.

It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

A former chaos engineer offers 5 tips for handling online disasters remotely

I recently had a scheduled video conference call with a Fortune 100 company.

Everything on my end was ready to go; my presentation was prepared and well-practiced. I was set to talk to 30 business leaders who were ready to learn more about how they could become more resilient to major outages.

Unfortunately, their side hadn’t set up the proper permissions in Zoom to add new people to a trusted domain, so I wasn’t able to share my slides. We scrambled to find a workaround at the last minute while the assembled VPs and CTOs sat around waiting. I ended up emailing my presentation to their coordinator, calling in from my mobile and verbally indicating to the coordinator when the next slide needed to be brought up. Needless to say, it wasted a lot of time and wasn’t the most effective way to present.

At the end of the meeting, I said pointedly that if there was one thing they should walk away with, it was that they had a vital need to run an online fire drill with their engineering team as soon as possible. Because if a team is used to working together in an office — with access to tools and proper permissions in place — it can be quite a shock to find out in the middle of a major outage that they can’t respond quickly and adequately. Issues like these can turn a brief outage into one that lasts for hours.

Quick context about me: I carried a pager for a decade at Amazon and Netflix, and what I can tell you is that when either of these services went down, a lot of people were unhappy. There were many nights where I had to spring out of bed at 2 a.m., rub the sleep from my eyes and work with my team to quickly identify the problem. I can also tell you that working remotely makes the entire process more complicated if teams are not accustomed to it.

There are many articles about best practices aimed at a general audience, but engineering teams have specific challenges as the ones responsible for keeping online services up and running. And while leading tech companies already have sophisticated IT teams and operations in place, what about financial institutions and hospitals and other industries where IT is a tool, but not a primary focus? It’s often the small things that can make all the difference when working remotely; things that seem obvious in the moment, but may have been overlooked.

So here are some tips for managing incidents remotely:


Daily Crunch: Zoom faces security scrutiny

Researchers reveal a number of security issues with videoconferencing app Zoom, investors warn Indian startups of tough times ahead and Uber Eats expands its grocery options internationally. Here’s your Daily Crunch for April 1, 2020.

1. Maybe we shouldn’t use Zoom after all

Zoom’s recent popularity has shone a spotlight on the company’s security protections and privacy promises. Yesterday, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

In addition, two security researchers found a Zoom bug that can be abused to steal Windows passwords, while another researcher found two new bugs that can be used to take over a Zoom user’s Mac, including tapping into the webcam and microphone.

2. Investors tell Indian startups to ‘prepare for the worst’ as COVID-19 uncertainty continues

In an open letter to startup founders in India, 10 global and local private equity and venture capital firms — including Accel, Lightspeed, Sequoia Capital and Matrix Partners — cautioned that the current changes to the macro environment could make it difficult for a startup to close its next fundraising deal.

3. Uber Eats beefs up its grocery delivery offer as COVID-19 lockdowns continue

Uber’s food delivery division has inked a partnership with supermarket giant Carrefour in France to provide Parisians with 30-minute home delivery on a range of grocery products. In Spain, it’s partnered with the Galp service station brand to offer a grocery delivery service that consists of basic foods, over the counter medicines, beverages and cleaning products. And in Brazil, the company said it’s partnering with a range of pharmacies, convenience stores and pet shops in São Paulo to offer home delivery on basic supplies.

4. Grab hires Peter Oey as its chief financial officer

Prior to joining Grab, Oey was the chief financial officer at LegalZoom, an online legal services company based near Los Angeles. Before that, he served the same role at Mylife.com.

5. How to value a startup in a downturn

What’s been happening in public markets is going to trickle down into the private markets — in other words, startups are going to take a hit. To understand that dynamic, we spoke with Mary D’Onofrio, an investor with Bessemer Venture Partners. (Extra Crunch membership required.)

6. No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

Houseparty was swift to deny the reports of a breach and even go so far as to claim — without evidence — it was investigating indications that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.

7. YouTube sellers found touting bogus coronavirus vaccines and masks

Researchers working for the Digital Citizens Alliance and the Coalition for a Safer Web — two online safety advocacy groups in the U.S. — undertook an 18-day investigation of YouTube in March, finding what they say were “dozens” of examples of dubious videos, including videos touting bogus vaccines the sellers claimed would protect buyers from COVID-19.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

Ex-NSA hacker drops new zero-day doom for Zoom

Zoom’s troubled year just got worse.

Now that a large portion of the world is working from home to ride out the coronavirus pandemic, Zoom’s popularity has rocketed, but also has led to an increased focus on the company’s security practices and privacy promises. Hot on the heels of two security researchers finding a Zoom bug that can be abused to steal Windows passwords, another security researcher found two new bugs that can be used to take over a Zoom user’s Mac, including tapping into the webcam and microphone.

Patrick Wardle, a former NSA hacker and now principal security researcher at Jamf, dropped the two previously undisclosed flaws on his blog Wednesday, which he shared with TechCrunch.

The two bugs, Wardle said, can be launched by a local attacker — that is, someone with physical control of a vulnerable computer. Once exploited, the attacker can gain and maintain persistent access to the innards of a victim’s computer, allowing them to install malware or spyware.

Wardle’s first bug piggybacks off a previous finding. Zoom uses a “shady” technique — one that’s also used by Mac malware — to install the Mac app without user interaction. Wardle found that a local attacker with low-level user privileges can inject the Zoom installer with malicious code to obtain the highest level of user privileges, known as “root.”

Those root-level user privileges mean the attacker can access the underlying macOS operating system, which is typically off-limits to most users, making it easier to run malware or spyware without the user noticing.

The second bug exploits a flaw in how Zoom handles the webcam and microphone on Macs. Zoom, like any app that needs the webcam and microphone, first requires consent from the user. But Wardle said an attacker can inject malicious code into Zoom to trick it into giving the attacker the same access to the webcam and microphone that Zoom already has. Once Wardle tricked Zoom into loading his malicious code, the code would “automatically inherit” any or all of Zoom’s access rights, he said — and that includes Zoom’s access to the webcam and microphone.

“No additional prompts will be displayed, and the injected code was able to arbitrarily record audio and video,” wrote Wardle.

Because Wardle dropped details of the vulnerabilities on his blog, Zoom has not yet provided a fix. Zoom also did not respond to TechCrunch’s request for comment.

In the meantime, Wardle said, “if you care about your security and privacy, perhaps stop using Zoom.”

Marriott says 5.2 million guest records stolen in another data breach

Marriott has confirmed a second data breach in three years — this time involving the personal information of 5.2 million guests.

The hotel giant said Tuesday it discovered the breach of an unspecified property system at a franchise hotel in late February. The hackers, who broke in weeks earlier in mid-January, obtained the login details of two employees, a hotel statement said.

Marriott said it has “no reason” to believe payment data was stolen, but warned that names, addresses, phone numbers, loyalty member data, dates of birth and other travel information — such as linked airline loyalty numbers and room preferences — were taken in the breach.

Starwood, a subsidiary of Marriott, said in 2018 that its central reservation system was hacked, exposing the personal data and guest records of 383 million guests. The data included five million unencrypted passport numbers and eight million credit card records.

It prompted a swift response from European authorities, which issued Marriott with a fine of $123 million in the wake of the breach.

Maybe we shouldn’t use Zoom after all

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of the most popular options going, if not the most popular.

But should it be?

Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.

These are the latest examples of the company having to spend the past year mopping up after a barrage of headlines examining its practices and misleading marketing. To wit:

  • Apple was forced to step in to secure millions of Macs after a security researcher found Zoom failed to disclose that it installed a secret web server on users’ Macs, which Zoom failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam on a Mac with Zoom installed without the user’s permission. Leitschuh declined a bug bounty payout because Zoom wanted him to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
  • Zoom was quietly sending data to Facebook about users’ Zoom habits — even when the user did not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when the user opened the app, which device model they were using, which phone carrier they were on, and more. Zoom removed the code in response, but not fast enough to prevent a class action lawsuit or New York’s attorney general from launching an investigation.
  • Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
  • A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction. “The same tricks that are being used by macOS malware,” the researcher said.
  • On the bright side, and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
  • Zoom has faced questions over its lack of transparency about the law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many other tech giants do on a semi-annual basis.
  • Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
  • And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.

There are several more privacy-focused alternatives to Zoom, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser-known video calling platform, Jitsi, is not end-to-end encrypted, but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.

In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.

But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.

Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

Houseparty has been a smashing success with people staying home during the coronavirus pandemic who still want to connect with friends.

The group video chat app, interspersed with games and other bells and whistles, rises above the more mundane Zooms and Hangouts (fun only in their names; otherwise pretty serious tools used by companies, schools and others who just need to work) when it comes to creating engaged leisure time, amid a climate where all of them are seeing a huge surge in growth.

All that looked like it could possibly fall apart for Houseparty and its new owner Epic Games when a series of reports appeared Monday claiming Houseparty was breached, and that malicious hackers were using users’ data to access their accounts on other apps such as Spotify and Netflix.

Houseparty was swift to deny the reports and even go so far as to claim — without evidence — it was investigating indications that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.

For now, there is no proof that there was a breach, nor proof that there was a paid smear campaign, and when we reached out to ask Houseparty and Epic about this investigation, a spokesperson said: “We don’t have anything to add here at the moment.”

But that doesn’t mean that Houseparty doesn’t have privacy issues.

As the old saying goes, “if the product is free, you are the product.” In the case of the free app Houseparty, its publisher details a 12,000+ word privacy policy that covers any and all uses of the data it might collect by way of you logging on to or using its service, laying out the many ways it might use that data for promotional or commercial purposes.

There are some clear lines in the policy about what it won’t use. For example, while phone numbers might get shared for tech support, with partnerships that you opt into, to link up contacts to talk with and to authenticate you, “we will never share your phone number or the phone numbers of third parties in your contacts with anyone else.”

But beyond that, there are provisions in there that could see Houseparty selling anonymized and other data, leading Ray Walsh of research firm ProPrivacy to describe it as a “privacy nightmare.”

“Anybody who decides to use the Houseparty application to stay in contact during quarantine needs to be aware that the app collects a worrying amount of personal information,” he said. “This includes geolocation data, which could, in theory, be used to map the location of each user. A closer look at Houseparty’s privacy policy reveals that the firm promises to anonymize and aggregate data before it is shared with the third-party affiliates and partners it works with. However, time and time again, researchers have proven that previously anonymized data can be re-identified.”
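To illustrate the re-identification risk Walsh describes, here is a hypothetical Python sketch of a linkage attack: an “anonymized” release is joined against a public record on quasi-identifiers such as birthdate and city, and any row that is unique on those attributes is tied back to a name. The datasets and field names are invented for illustration and have nothing to do with Houseparty's actual data.

```python
# Hypothetical illustration of a linkage attack: the datasets, field
# names and attributes are invented, not taken from any real app.
anonymized_release = [
    {"user": "u1", "birthdate": "1998-04-02", "city": "Austin", "interest": "yoga"},
    {"user": "u2", "birthdate": "2001-11-19", "city": "Boston", "interest": "gaming"},
]
public_records = [
    {"name": "Alice Example", "birthdate": "1998-04-02", "city": "Austin"},
    {"name": "Bob Example", "birthdate": "2001-11-19", "city": "Boston"},
]

def reidentify(release, records):
    """Match 'anonymized' rows to named records on shared quasi-identifiers."""
    matches = {}
    for row in release:
        candidates = [r["name"] for r in records
                      if r["birthdate"] == row["birthdate"] and r["city"] == row["city"]]
        if len(candidates) == 1:  # unique on the quasi-identifiers => re-identified
            matches[row["user"]] = candidates[0]
    return matches

print(reidentify(anonymized_release, public_records))
# {'u1': 'Alice Example', 'u2': 'Bob Example'}
```

The point of the sketch is that stripping names is not enough: a handful of seemingly harmless attributes, combined with an outside dataset, can be all it takes to put a name back on a profile.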

There are ways around this for the proactive. Walsh notes that users can go into the settings to select “private mode” to “lock” rooms they use to stop people from joining unannounced or uninvited; switch locations off; use fake names and birthdates; disconnect all other social apps; and launch the app on iOS with a long press to “sneak into the house” without notifying all your contacts.

But with a consumer app, it’s a longshot to assume that most people, and the younger users who are especially interested in Houseparty, will go through all of these extra steps to secure their information.

Security lapse exposed Republican voter firm’s internal app code

A voter contact and canvassing company, used exclusively by Republican political campaigns, mistakenly left an unprotected copy of its app’s code on its website for anyone to find.

The company, Campaign Sidekick, helps Republican campaigns canvass their districts using its iOS and Android apps, which pull in names and addresses from voter registration rolls. Campaign Sidekick says it has helped campaigns in Arizona, Montana and Ohio — and contributed to Brian Kemp’s campaign, which saw him narrowly win against Democratic rival Stacey Abrams in Georgia’s 2018 gubernatorial race.

For the past two decades, political campaigns have ramped up their use of data to identify swing voters. This growing political data business has opened up a whole economy of startups and tech companies using data to help campaigns better understand their electorate. But that has led to voter records spilling out of unprotected servers and other privacy-related controversies — like the case of Cambridge Analytica obtaining private data from social media sites.

Chris Vickery, director of cyber risk research at security firm UpGuard, said he found the cache of Campaign Sidekick’s code by chance.

In his review of the code, Vickery found several instances of credentials and other app-related secrets, he said in a blog post on Monday, which he shared exclusively with TechCrunch. These secrets, such as keys and tokens, can typically be used to gain access to systems or data without a username or password. But Vickery did not test the credentials, as doing so would be unlawful. Vickery also found a sampling of personally identifiable information, he said, amounting to dozens of spreadsheets packed with voter names and addresses.
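To show why a dump of source code containing secrets is dangerous, here is a rough Python sketch of the kind of pattern matching a researcher — or an attacker — could run over exposed code to surface hard-coded credentials. The patterns and directory name are generic, illustrative assumptions, not the specific checks Vickery or UpGuard used.

```python
import re
from pathlib import Path

# Generic, illustrative patterns for hard-coded secrets; not the
# specific checks used against the Campaign Sidekick code.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
}

def scan_for_secrets(root):
    """Walk a directory of source files and report lines that look like credentials."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

if __name__ == "__main__":
    # "./exposed_code" is a placeholder path for a downloaded code dump.
    for path, lineno, label in scan_for_secrets("./exposed_code"):
        print(f"{path}:{lineno}: possible {label}")
```

A sweep like this takes seconds over even a large codebase, which is why an unprotected archive sitting on a public website is effectively an open door.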

Fearing the exposed credentials could be abused if accessed by a malicious actor, Vickery informed the company of the issue in mid-February. Campaign Sidekick quickly pulled the exposed cache of code offline.

One of the Campaign Sidekick mockups, using dummy data, collates a voter’s data in one place. (Image: supplied)

One of the screenshots provided by Vickery showed a mockup of a voter profile compiled by the app, containing basic information about the voter and their past voting and donor history, which can be obtained from public and voter records. The mockup also lists the voter’s “friends.”

Vickery told TechCrunch he found “clear evidence” that the app’s code was designed to pull in data from its now-defunct Facebook app, which allowed users to sign in and pull their list of friends — a feature Facebook supported at the time, until limits were put on third-party developers’ access to friends’ data.

“There is clear evidence that Campaign Sidekick and related entities had and have used access to Facebook user data and APIs to query that data,” Vickery said.

Drew Ryun, founder of Campaign Sidekick, told TechCrunch that its Facebook project was from eight years prior, that Facebook had since deprecated access to developers, and that the screenshot was a “digital artifact of a mockup.” (TechCrunch confirmed that the data in the mockup did not match public records.)

Ryun said after he learned of the exposed data the company “immediately changed sensitive credentials for our current systems,” but that the credentials in the exposed code could have been used to access its databases storing user and voter data.

Saudi spies tracked phones using flaws the FCC failed to fix for years

Lawmakers and security experts have long warned of security flaws in the underbelly of the world’s cell networks. Now a whistleblower says the Saudi government is exploiting those flaws to track its citizens across the U.S. as part of a “systematic” surveillance campaign.

It’s the latest tactic by the Saudi kingdom to spy on its citizens overseas. The kingdom has faced accusations of using powerful mobile spyware to hack into the phones of dissidents and activists to monitor their activities, including those close to Jamal Khashoggi, the Washington Post columnist who was murdered by agents of the Saudi regime. The kingdom also allegedly planted spies at Twitter to surveil critics of the regime.

The Guardian obtained a cache of data amounting to millions of location-tracking requests targeting Saudi citizens over a four-month period beginning in November. The report says the requests were made by Saudi Arabia’s three largest cell carriers — believed to be at the behest of the Saudi government — by exploiting weaknesses in SS7.

SS7, or Signaling System 7, is a set of protocols — akin to a private network used by carriers around the world — to route and direct calls and messages between networks. It’s the reason why a T-Mobile customer can call an AT&T phone, or text a friend on Verizon — even when they’re in another country. But experts say that weaknesses in the system have allowed attackers with access to the carriers — almost always governments or the carriers themselves — to listen in to calls and read text messages. SS7 also allows carriers to track the location of devices to within a few hundred feet in densely populated cities by making a “provide subscriber information” (PSI) request. These PSI requests are typically used to ensure that the cell user is billed correctly, such as when they are roaming on a carrier in another country. Requests made in bulk and in excess can indicate location-tracking surveillance.
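That “bulk and excess” signal lends itself to a simple illustration. The Python sketch below, working over an invented log of PSI requests, flags requesting networks whose query volumes look more like tracking than billing checks. The log format and thresholds are assumptions made for illustration; real SS7 monitoring is considerably more involved.

```python
from collections import Counter

# Each record is (requesting_network, target_subscriber); the log format
# and the thresholds below are invented for illustration, not a real SS7 feed.
PER_TARGET_DAILY_LIMIT = 20       # repeated queries against one subscriber
PER_NETWORK_DAILY_LIMIT = 10_000  # total PSI volume from one network

def flag_suspicious_psi(requests):
    """Flag requesters whose PSI volume looks like tracking rather than billing."""
    per_pair = Counter(requests)                                  # queries per (network, target)
    per_network = Counter(network for network, _ in requests)     # total queries per network
    suspicious = set()
    for (network, target), count in per_pair.items():
        if count > PER_TARGET_DAILY_LIMIT:
            suspicious.add((network, f"{count} queries against {target}"))
    for network, total in per_network.items():
        if total > PER_NETWORK_DAILY_LIMIT:
            suspicious.add((network, f"{total} queries in one day"))
    return suspicious

if __name__ == "__main__":
    log = [("carrier-x", "subscriber-1")] * 25 + [("carrier-y", "subscriber-2")]
    for network, reason in flag_suspicious_psi(log):
        print(network, "-", reason)   # carrier-x - 25 queries against subscriber-1
```

The trade-off discussed later in this article is visible even here: set the thresholds too low and legitimate roaming and billing traffic gets blocked; set them too high and a patient tracker slips through.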

But despite years of warnings and numerous reports of attacks exploiting the system, the largest U.S. carriers have done little to ensure that foreign spies cannot abuse their networks for surveillance.

One Democratic lawmaker puts the blame squarely in the Federal Communications Commission’s court for failing to compel cell carriers to act.

“I’ve been raising the alarm about security flaws in U.S. phone networks for years, but FCC chairman Ajit Pai has made it clear he doesn’t want to regulate the carriers or force them to secure their networks from foreign government hackers,” said Sen. Ron Wyden, a member of the Senate Intelligence Committee, in a statement on Sunday. “Because of his inaction, if this report is true, an authoritarian government may be reaching into American wireless networks to track people inside our country,” he said.

A spokesperson for the FCC, the agency responsible for regulating the cell networks, did not respond to a request for comment.

A long history of foot-dragging

Wyden is not the only lawmaker to express concern. In 2016, Rep. Ted Lieu, then a freshman congressman, gave a security researcher permission to hack his phone by exploiting weaknesses in SS7 for an episode of CBS’ 60 Minutes.

Lieu accused the FCC of being “guilty of remaining silent on wireless network security issues.”

The same vulnerabilities were used a year later, in 2017, to drain the bank accounts of unsuspecting victims by intercepting and stealing the two-factor authentication codes, sent by text message, that are needed to log in. The breach was one of the reasons the U.S. government’s standards and technology agency, NIST, recommended moving away from using text messages to send two-factor codes.
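NIST's guidance points toward app-generated one-time codes rather than codes sent over SMS. As a rough sketch of the alternative, the Python below computes a standard time-based one-time password (RFC 6238) locally from a shared secret, so there is no text message for an SS7 attacker to intercept. It uses only the standard library and is illustrative, not a production authentication system; the secret shown is a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, interval=30, digits=6):
    """Generate an RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret for demonstration; real secrets are provisioned per user
# by the service and stored in an authenticator app.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived on the device from a secret shared during enrollment, nothing usable ever crosses the carrier's network, which is exactly what makes it resilient to SS7-style interception.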

Months later, the FCC issued a public notice, prompted by a raft of media attention, “encouraging” but not mandating that carriers make efforts to bolster their individual SS7 systems. The notice asked carriers to monitor their networks and install firewalls to block malicious requests.

It wasn’t enough. Wyden’s office reported in 2018 that one of the major cell carriers — which was not named — reported an SS7 breach involving customer data. Verizon and T-Mobile said in letters to Wyden’s office that they were implementing firewalls that would filter malicious SS7 requests. AT&T said in its letter that it was in the process of updating its firewalls, but also warned that “unstable and unfriendly nations” with access to a cell carrier’s SS7 systems could abuse the system. Only Sprint said at the time that it was not the source of the SS7 breach, according to a spokesperson’s email to TechCrunch.

T-Mobile did not respond to a request for comment. Verizon (which owns TechCrunch) also did not comment. AT&T said at the time it “continually works with industry associations and government agencies” to address SS7 issues.

Fixing SS7

Fixing the problems with SS7 is not an overnight job. But without a regulator pushing for change, the carriers aren’t inclined to budge.

Experts say those same firewalls put in place by the cell carriers can filter potentially malicious traffic and prevent some abuse. But an FCC working group tasked with understanding the risks posed by SS7 flaws in 2016 acknowledged that the vast majority of SS7 traffic is legitimate. “Carriers need to be measured as they implement solutions in order to avoid collateral network impacts,” the report says.

In other words, it’s not a feasible solution if it blocks real carrier requests.

Cell carriers have been less than forthcoming with their plans to fix their SS7 implementations. Only AT&T provided comment, telling The Guardian that it had “security controls to block location-tracking messages from roaming partners.” To what extent remains unclear, as does whether those measures will even help. Few experts have expressed faith in newer systems like Diameter, a similar routing protocol for 4G and 5G, given that a raft of vulnerabilities have already been found in the newer system.

End-to-end encrypted apps, like Signal and WhatsApp, have made it harder for spies to snoop on calls and messages. But it’s not a panacea. As long as SS7 remains a fixture underpinning the very core of every cell network, tracking location data will remain fair game.

Divesting from one facial recognition startup, Microsoft ends outside investments in the tech

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.

The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.

The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”

Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures and Eldridge Industries.

Microsoft first staked out its position on how it would approach facial recognition technologies in 2018, when company President Brad Smith issued a statement calling on the government to come up with clear regulations around facial recognition in the U.S.

Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.

Smith wrote:

We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.

The principles Microsoft laid out were fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.

Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.

Now, after determining that controlling how facial recognition technologies are deployed by its minority investments is too difficult, the company is suspending its outside investments in the technology.

“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”