Google, Mozilla team up to block Kazakhstan’s browser spying tactics

Google and Mozilla have taken the rare step of blocking an untrusted certificate issued by the Kazakhstan government, which critics say the government forced citizens to install as part of an effort to monitor their internet traffic.

The two browser makers said in a joint statement Wednesday that they had deployed “technical solutions” to block the government-issued certificate.

Citizens had been told to install the government-issued certificate on their computers and devices as part of a domestic surveillance program. Doing so gave the government ‘root’ access to the network traffic on those devices, allowing it to intercept and snoop on citizens’ internet browsing activities.

Researchers found that only a few sites were being monitored, like Facebook, Twitter, and Google.

Although the Kazakh government is said to have stopped what it called “system testing” and allowed citizens to delete the certificate, both Google and Mozilla said their measures would stop the data-intercepting certificate from working — even if it’s still installed.

“We don’t take actions like this lightly,” said Marshall Erwin, Mozilla’s senior director of trust and security. But Google browser chief Parisa Tabriz said the company would “never tolerate any attempt, by any organization — government or otherwise — to compromise Chrome users’ data.”

The block went into effect invisibly and requires no action from users.
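Mechanically, a block like this can boil down to refusing to trust any certificate whose public key appears on a browser-shipped denylist, even if a user has manually installed it as a root. The snippet below is a minimal sketch of that idea in Python, assuming the third-party cryptography package and a hypothetical cert.pem file; it is not how Chrome or Firefox actually implement their blocklists, which ship through mechanisms such as CRLSets and OneCRL.

```python
# Minimal sketch: flag a certificate whose SubjectPublicKeyInfo hash is on a
# denylist, regardless of whether a user has "trusted" it locally.
# Assumes: `pip install cryptography`; cert.pem is a hypothetical PEM file.
import hashlib
import sys

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Hypothetical blocklist of SHA-256 SPKI fingerprints (hex); real browsers ship
# and update lists like this out of band.
BLOCKED_SPKI_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def spki_sha256(cert: x509.Certificate) -> str:
    """SHA-256 over the certificate's DER-encoded SubjectPublicKeyInfo."""
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

def is_blocked(pem_path: str) -> bool:
    with open(pem_path, "rb") as fh:
        cert = x509.load_pem_x509_certificate(fh.read())
    return spki_sha256(cert) in BLOCKED_SPKI_SHA256

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "cert.pem"
    print("blocked" if is_blocked(path) else "not on the blocklist")
```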

Kazakhstan has a population of 18 million. Researchers said that the Kazakh government’s efforts to intercept the country’s internet traffic only hit a “fraction” of the connections passing through the country’s largest internet provider.

The Central Asian country currently ranks as one of the least free countries on internet freedom scores, based on data collected by watchdog Freedom House, trailing just behind Russia and Iran.

A spokesperson for the Kazakhstan consulate in New York did not respond to a request for comment.

A newly funded startup, Internal, says it wants to help companies better manage their internal consoles

Uber, Facebook and countless other companies that know an awful lot about their customers have found themselves in hot water for providing broad internal access to sensitive customer information.

Now, a startup says its “out-of-the-box tools” can help protect customers’ privacy while also saving companies from themselves. How? With a software-as-a-service product that promises to help employees access the app data they need — and only the app data they need. Among the features the company, Internal, is offering are search and filtering, auto-generated tasks and team queues, granular permissioning on every field, audit logs on every record and redacted fields for sensitive information.
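To make that feature list concrete, here is a minimal sketch of what field-level permissioning, redaction and an audit log can look like in principle. It is an illustration of the concept only, written in Python; the roles, field names and log format are hypothetical and not Internal's actual product or API.

```python
# Illustrative sketch of per-field permissions, redaction and an audit trail.
# Roles, fields and records here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Which fields each role may see in clear text; everything else is redacted.
FIELD_PERMISSIONS = {
    "support": {"id", "name", "order_status"},
    "ops": {"id", "name", "order_status", "email"},
    "admin": {"id", "name", "order_status", "email", "home_address"},
}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, record_id: Any) -> None:
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "record_id": record_id,
        })

def view_record(record: dict, user: str, role: str, audit: AuditLog) -> dict:
    """Return the record with unpermitted fields redacted, and log the access."""
    allowed = FIELD_PERMISSIONS.get(role, set())
    audit.record(user, role, record.get("id"))
    return {k: (v if k in allowed else "[REDACTED]") for k, v in record.items()}

if __name__ == "__main__":
    audit = AuditLog()
    customer = {"id": 7, "name": "A. Customer", "order_status": "shipped",
                "email": "a@example.com", "home_address": "1 Example St"}
    print(view_record(customer, user="jordan", role="support", audit=audit))
    print(audit.entries)
```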

Whether the startup can win the trust of enterprises is the biggest question for the company, which was created by Arisa Amano and Bob Remeika, founders who last year launched the blockchain technology company Harbor. The two also worked together previously at two other companies: Zenefits and Yammer.

All of these endeavors have another person in common, and that’s David Sacks, whose venture firm, Craft Ventures, has just led a $5 million round in Internal. Sacks also invested last year in Harbor; he was an early investor in Zenefits and took over during troubled times as its CEO for less than a year; he also founded Yammer, which sold to Microsoft for $1.2 billion in cash in 2012.

All of the aforementioned have been focused, too, on making it easier for companies to get their work done, and Amano and Remeika have built the internal console at all three companies. It’s how they arrived at their “aha” moment last year, says Amano. “So many companies build their consoles [which allow users advanced use of the computer system they’re attached to] in a half-hearted way; we realized there was an opportunity to build this as a service.”

“Companies never dedicate enough engineers to [their internal consoles], so they’re often half broken and hard to use and they do a terrible job of limiting access to sensitive customer data,” adds Remeika. “We eliminate the need to build these tools altogether, and it takes just minutes to get set up.”


Starting today, companies can decide for themselves whether they think Internal can help their employees interact with their customer app data in a more secure and compliant way. The eight-person company has just made the product available for a free trial.

Naturally, Amano and Remeika are full of assurances about why companies can trust them. “We don’t store data,” says Amano. “That resides on the [customer’s] servers. It stays in their database.” Internal’s technology instead “understands the structure of the data and will read that structure,” offers Remeika, who says not to mistake Internal for an analytics tool. “Analytics tools commonly provide a high-level overview; Internal is giving users granular access to customer data and letting you debug problems.”

As for competitors, the two say their most formidable opponent right now is developers who throw up a data model viewer that has complete access to everything in a database, which may be sloppy but happens routinely.

Internal isn’t disclosing its pricing publicly just yet, but it says its initial target is non-technical users, on operations and customer support teams, for example.

As for Harbor (we couldn’t help but wonder why they’re already starting a new company), they say it’s in good hands with CEO Josh Stein, who was previously general counsel and chief compliance officer at Zenefits (he was its first lawyer) and who joined Harbor in February of last year as its president. Stein was later named CEO.

In addition to Craft Ventures, Internal’s new seed round comes from Pathfinder, which is Founders Fund’s early-stage investment vehicle, and other, unnamed angel investors.

Developers accuse Apple of anti-competitive behavior with its privacy changes in iOS 13

A group of app developers has penned a letter to Apple CEO Tim Cook, arguing that certain privacy-focused changes to Apple’s iOS 13 operating system will hurt their business. In a report by The Information, the developers were said to have accused Apple of anti-competitive behavior when it comes to how apps can access user location data.

With iOS 13, Apple aims to curtail apps’ abuse of its location-tracking features as part of its larger privacy focus as a company.

Today, many apps ask users upon first launch to give their app the “Always Allow” location-tracking permission. Users can confirm this with a tap, unwittingly giving apps far more access to their location data than is actually necessary, in many cases.

In iOS 13, however, Apple has tweaked the way apps can request location data.

There will now be a new option upon launch presented to users, “Allow Once,” which allows users to first explore the app to see if it fits their needs before granting the app developer the ability to continually access location data. This option will be presented alongside existing options, “Allow While Using App” and “Don’t Allow.”

The “Always” option is still available, but users will have to head to iOS Settings to manually enable it. (A periodic pop-up will also present the “Always” option, but not right away.)

The app developers argue that this change may confuse less technical users, who will assume the app isn’t functioning properly unless they figure out how to change their iOS Settings to ensure the app has the proper permissions.

The developers’ argument is a valid assessment of user behavior and how such a change could impact their apps. The added friction of having to go to Settings to toggle a switch so an app can function may cause users to abandon apps. It’s also, in part, why apps like Safari ad blockers and iOS replacement keyboards never really went mainstream, as they require extra steps involving the iOS Settings.

That said, the changes Apple is rolling out with iOS 13 don’t actually break these apps entirely. They just require the apps to refine their onboarding instructions to users. Instead of asking for the “Always Allow” permission, they will need to point users to the iOS Settings screen, or limit the app’s functionality until the “Always Allow” permission is granted.

In addition, the developers’ letter pointed out that Apple’s own built-in apps (like Find My) aren’t treated like this, which raises anti-competitive concerns.

The letter also noted that Apple in iOS 13 would not allow developers to use PushKit for any other purpose beyond internet voice calls — again, due to the fact that some developers abused this toolkit to collect private user data.

“We understand that there were certain developers, specifically messaging apps, that were using this as a backdoor to collect user data,” the email said, according to the report. “While we agree loopholes like this should be closed, the current Apple plan to remove [access to the internet voice feature] will have unintended consequences: it will effectively shut down apps that have a valid need for real-time location.”

The letter was signed by Tile CEO CJ Prober; Arity (Allstate) president Gary Hallgren; Life360 CEO Chris Hulls; Didier Rappaport, CEO of dating app Happn; Antoine Martin, CEO of Zenly (Snap); Jonathan Matus, CEO of Zendrive; and Jared Allgood, chief strategy officer of social networking app Twenty.

Apple responded to The Information by saying that any changes it makes to the operating system are “in service to the user” and to their privacy. It also noted that any apps it distributes from the App Store have to abide by the same procedures.

It’s another example of how erring on the side of increased user privacy can lead to complications and friction for end users. One possible solution could be allowing apps to present their own in-app Settings screen where users could toggle the app’s full set of permissions directly — including everything from location data to push notifications to the app’s use of cellular data or Bluetooth sharing.

The news comes at a time when the U.S. Dept. of Justice is considering investigating Apple for anti-competitive behavior. Apple told The Information it was working with some of the impacted developers using PushKit on alternate solutions.

Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity

Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy by aggregating data and adding noise to mask individual identities is more than just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization on high-dimensional data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”

The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent, meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can cross-reference responses in a way that reveals the information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”
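To illustrate the general principle de Montjoye describes, here is a deliberately simplified toy in Python. It is not Diffix's real mechanism and not the paper's attack; the database, the salts and the budget of 32 query variants are all made up. It only shows why noise that is deterministically derived from the set of matching records can itself betray whether a particular person matches a query: when the target is excluded by the predicates anyway, the noise on a pair of queries cancels exactly, and when they are not, it doesn't.

```python
# Toy illustration of a noise-exploitation attack on data-dependent ("sticky")
# noise. This is NOT Diffix or the paper's exact method; it only demonstrates
# the principle that noise derived from the protected data can leak it.
import hashlib
import random

random.seed(1)
# Hypothetical database: user_id -> (city, secret binary attribute).
db = {uid: (random.choice(["london", "paris"]), random.randint(0, 1))
      for uid in range(1000)}
TARGET = 42  # the attacker wants db[TARGET]'s secret bit

def sticky_noise(matching_ids, salt):
    # Noise deterministically seeded by the matching record set: data-dependent.
    seed = salt + "|" + ",".join(map(str, sorted(matching_ids)))
    return random.Random(hashlib.sha256(seed.encode()).hexdigest()).gauss(0, 2)

def noisy_count(predicate, salt):
    matching = [uid for uid, row in db.items() if predicate(uid, row)]
    return len(matching) + sticky_noise(matching, salt)

city = db[TARGET][0]

# Query pairs that are identical except the second explicitly excludes the
# target. If the target's secret bit is 0, the matching sets are identical,
# so the sticky noise cancels and every difference is exactly zero. If the
# bit is 1, the sets differ, the noise no longer cancels, and the differences
# are noisy. Studying the noise (rather than removing it) reveals the bit.
diffs = []
for variant in range(32):  # a small query budget, echoing the paper's ~32 queries
    salt = f"variant-{variant}"
    q1 = noisy_count(lambda uid, row: row[0] == city and row[1] == 1, salt)
    q2 = noisy_count(lambda uid, row: row[0] == city and row[1] == 1
                     and uid != TARGET, salt)
    diffs.append(q1 - q2)

inferred = 0 if all(abs(d) < 1e-9 for d in diffs) else 1
print("inferred bit:", inferred, "| actual bit:", db[TARGET][1])
```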

De Montjoye says that after the researchers disclosed the attack, Aircloak developed a patch — and is describing the vulnerability as very low risk — but he points out the company has yet to publish details of the patch, so its effectiveness can’t be independently assessed.

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community [we need] to really move to something closer to adversarial privacy,” he tells TechCrunch. “We need to start adopting the red team, blue team penetration testing that has become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and how do we really learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”

Amazon customers say they received emails for other people’s orders

Users have said they are receiving emails from Amazon containing invoices and order updates meant for other customers, TechCrunch has learned.

Jake Williams, founder of cybersecurity firm Rendition Infosec, raised the alarm after he received an email from Amazon addressed to another customer, containing their name, postal address and order details.

Williams said he ordered something months ago which recently became available for shipping. He checked the email headers to make sure it was a genuine message.

“I think they legitimately intended to email me a notification that my item was shipping early,” he said. “I just think they screwed something up in the system and sent the updates to the wrong people.”

He said the apparent security lapse was worrying because emails about orders sent to the wrong place are a “serious breach of trust” that can reveal private information about a customer’s life, such as their sexual orientation or other personal proclivities.

Several other Amazon customers also said they received emails seemingly meant for other people.

“I made an order yesterday afternoon and received her email last night,” another customer who tweeted about the mishap told TechCrunch. “Luckily I’m not a malicious person but that’s a huge security issue,” she said.

Another customer tweeted about receiving an email meant for someone else. He said he spoke to Amazon customer service, which said it would investigate additional security issues.

“Hope you didn’t send my sensitive account info to someone else,” he added.

Another customer posted a tweet thread about the issue, writing that she spoke to a supervisor who gave a “nonchalant” response and said the problem happens frequently.

A spokesperson for Amazon did not respond to a request for comment asking how many customers were affected and whether the company plans to inform customers of the breach. If we hear back, we’ll update.

It’s Amazon’s second security lapse in a year. In November the company emailed customers saying a “technical error” had exposed an unknown number of their email addresses. When asked about specifics, the notoriously secretive company declined to comment further.

8 million Android users tricked into downloading 85 adware apps from Google Play

Dozens of Android adware apps disguised as photo editing apps and games have been caught serving ads that would take over users’ screens as part of a fraudulent money-making scheme.

Security firm Trend Micro said it found 85 individual apps downloaded more than eight million times from Google Play — all of which have since been removed from the app store.

More often than not, adware apps will run on a user’s device and silently serve and click ads in the background, without the user’s knowledge, to generate ad revenue. But these apps were particularly brazen and sneaky, one of the researchers said.

“It isn’t your run-of-the-mill adware family,” said Ecular Xu, a mobile threat response engineer at Trend Micro. “Apart from displaying advertisements that are difficult to close, it employs unique techniques to evade detection through user behavior and time-based triggers.”

The researchers discovered that the apps would keep a record of when they were installed and sit dormant for around half an hour. After the delay, the app would hide its icon and create a shortcut on the user’s home screen, the security firm said. That, they say, helped to protect the app from being deleted if the user decided to drag and drop the shortcut to the ‘uninstall’ section of the screen.

“These ads are shown in full screen,” said Xu. “Users are forced to view the whole duration of the ad before being able to close it or go back to the app itself.”

When the device was unlocked, the app displayed ads on the user’s home screen. The code also checks to make sure it doesn’t show the same ad too frequently, the researchers said.

Worse, the ads can be remotely configured by the fraudster, allowing ads to be displayed more frequently than the default five-minute interval.

Trend Micro provided a list of the apps — including Super Selfie Camera, Cos Camera, Pop Camera, and One Stroke Line Puzzle — all of which had a million downloads each.

Users about to install the apps had a dead giveaway: most of the apps had appalling reviews, with as many one-star reviews as five-star ones and users complaining about the deluge of pop-up ads.

Google does not typically comment on app removals beyond acknowledging their removal from Google Play.


Toolkit for digital abuse could help victims protect themselves

Domestic abuse comes in digital forms as well as physical and emotional, but a lack of tools to address this kind of behavior leaves many victims unprotected and desperate for help. This Cornell project aims to define and detect digital abuse in a systematic way.

Digital abuse may take many forms: hacking the victim’s computer, using knowledge of passwords or personal data to impersonate them or interfere with their presence online, accessing photos to track their location, and so on. As with other forms of abuse, there are as many patterns as there are people who suffer from it.

But with something like emotional abuse, there are decades of studies and clinical approaches for categorizing and coping with it. Not so with newer phenomena like being hacked or stalked via social media. That means there’s little in the way of a standard playbook for them, and both the abused and those helping them are left scrambling for answers.

“Prior to this work, people were reporting that the abusers were very sophisticated hackers, and clients were receiving inconsistent advice. Some people were saying, ‘Throw your device out.’ Other people were saying, ‘Delete the app.’ But there wasn’t a clear understanding of how this abuse was happening and why it was happening,” explained Diana Freed, a doctoral student at Cornell Tech and co-author of a new paper about digital abuse.

“They were making their best efforts, but there was no uniform way to address this,” said co-author Sam Havron. “They were using Google to try to help clients with their abuse situations.”

Investigating this problem with the help of a National Science Foundation grant to examine the role of tech in domestic abuse, they and their faculty collaborators at Cornell and NYU came up with a new approach.

There’s a standardized questionnaire to characterize the type of tech-based abuse being experienced. It may not occur to someone who isn’t tech-savvy that their partner may know their passwords, or that there are social media settings they can use to prevent that partner from seeing their posts. This information and other data are added to a sort of digital presence diagram the team calls the “technograph,” which helps the victim visualize their technological assets and exposure.


The team also created a device they call the IPV Spyware Discovery, or ISDi. It’s basically spyware-scanning software loaded on a separate device that can check the victim’s device without having to install anything on it. This is important because an abuser may have installed tracking software that would alert them if the victim tried to remove it. Sound extreme? Not to people fighting a custody battle who can’t seem to escape the all-seeing eye of an abusive ex. And these spying tools are readily available for purchase.
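As a rough illustration of the general idea, and emphatically not the ISDi tool itself, the Python sketch below compares the packages installed on an attached Android device against a hypothetical list of known stalkerware package names. It assumes the standard adb command-line tool is on the PATH, that USB debugging is enabled on the device, and that stalkerware_packages.txt is a plain text file with one package identifier per line; a real forensic workflow is considerably more careful than this.

```python
# Rough sketch (not ISDi): compare packages installed on a connected Android
# device against a local list of known stalkerware package names.
# Assumes the adb CLI is on PATH, the device has USB debugging enabled, and
# stalkerware_packages.txt (hypothetical) holds one package name per line.
import subprocess
import sys

def installed_packages() -> set:
    """List package names on the attached device via `adb shell pm list packages`."""
    result = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    )
    # Output lines look like "package:com.example.app"
    return {line.split(":", 1)[1].strip()
            for line in result.stdout.splitlines() if line.startswith("package:")}

def known_stalkerware(path: str = "stalkerware_packages.txt") -> set:
    with open(path) as fh:
        return {line.strip() for line in fh
                if line.strip() and not line.startswith("#")}

if __name__ == "__main__":
    hits = installed_packages() & known_stalkerware()
    if hits:
        print("Potential stalkerware found:", ", ".join(sorted(hits)))
        sys.exit(1)
    print("No known stalkerware package names found.")
```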

“It’s consistent, it’s data-driven and it takes into account at each phase what the abuser will know if the client makes changes. This is giving people a more accurate way to make decisions and providing them with a comprehensive understanding of how things are happening,” explained Freed.

Even if the abuse can’t be instantly counteracted, it can be helpful simply to understand it and know that there are some steps that can be taken to help.

The authors have been piloting their work at New York’s Family Justice Centers, and following some testing have released the complete set of documents and tools for anyone to use.

This isn’t the team’s first piece of work on the topic — you can read their other papers and learn more about their ongoing research at the Intimate Partner Violence Tech Research program site.

US legislator David Cicilline joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who the witnesses in front of the grand committee will be is yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

WebKit’s new anti-tracking policy puts privacy on a par with security

WebKit, the open source engine that underpins Internet browsers including Apple’s Safari, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies used to creep on Internet users as they go about their business online.

Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resource into ‘innovations’ intended to strip web users of their privacy.

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.
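The “reducing the available bits of entropy” framing above has simple arithmetic behind it: an attribute value shared by a fraction p of users contributes roughly -log2(p) bits towards singling someone out, and roughly independent attributes add up. The short Python sketch below illustrates that arithmetic; the attribute frequencies are made-up examples, not WebKit's numbers.

```python
# Back-of-the-envelope fingerprinting arithmetic: an attribute value seen in a
# fraction p of users contributes about -log2(p) bits of identifying
# information, and (roughly independent) attributes add up.
# The frequencies below are made-up examples, not real measurements.
import math

def surprisal_bits(p: float) -> float:
    """Bits of identifying information carried by a value with frequency p."""
    return -math.log2(p)

observed = {
    "timezone":        0.20,   # 1 in 5 users share this timezone
    "screen_size":     0.05,   # 1 in 20 share this resolution
    "installed_fonts": 0.001,  # 1 in 1,000 share this font list
}

total = sum(surprisal_bits(p) for p in observed.values())
for name, p in observed.items():
    print(f"{name:>15}: {surprisal_bits(p):5.2f} bits")
print(f"{'total':>15}: {total:5.2f} bits "
      f"(enough to single out roughly 1 in {2**total:,.0f} users)")
```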

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication, still leaves much to be desired, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be purely about the risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau whose membership is comprised of every type of tracker deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium, the browser engine that underpins Google’s hugely popular Chrome browser, will jump.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two discussing potential future work to combat tracking techniques designed to override privacy settings in a blog post from nearly five years ago.

There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked in privacy standards, rather than proactively working to prevent Internet users from being creeped on.

Artificial intelligence can contribute to a safer world

We all see the headlines nearly every day. A drone disrupting the airspace at one of the world’s busiest airports, putting aircraft at risk (and inconveniencing hundreds of thousands of passengers), or attacks on critical infrastructure. Or a shooting in a place of worship, a school, a courthouse. Whether primitive (gunpowder) or cutting-edge (unmanned aerial vehicles), in the wrong hands technology can empower bad actors and put our society at risk, creating a sense of helplessness and frustration.

Current approaches to protecting our public venues are not up to the task and, frankly, appear to meet Einstein’s definition of insanity: “doing the same thing over and over and expecting a different result.” It is time to look past traditional defense technologies and see if newer approaches can tilt the pendulum back in the defender’s favor. Artificial Intelligence (AI) can play a critical role here, helping to identify, classify and promulgate counteractions on potential threats faster than any security personnel.

Using technology to prevent violence, specifically by searching for concealed weapons, has a long history. Alexander Graham Bell invented the first metal detector in 1881 in an unsuccessful attempt to locate the fatal slug as President James Garfield lay dying of an assassin’s bullet. The first commercial metal detectors were developed in the 1960s. Most of us are familiar with their use in airports, courthouses and other public venues to screen for guns, knives and bombs.

However, metal detectors are slow and full of false positives – they cannot distinguish between a Smith & Wesson and an iPhone. It is not enough to simply identify a piece of metal; it is critical to determine whether it is a threat. Thus, the physical security industry has developed newer approaches, including full-body scanners – which are now deployed on a limited basis. While effective to a point, the systems in use today all have significant drawbacks. One is speed. Full body scanners, for example, can process only about 250 people per hour, not much faster than a metal detector. While that might be okay for low volume courthouses, it’s a significant problem for larger venues like a sporting arena.


Fortunately, new AI technologies are enabling major advances in physical security capabilities. These new systems not only deploy advanced sensors to screen for guns, knives and bombs, they get smarter with each screening, creating an increasingly large database of known and emerging threats while segmenting off alarms for common, non-threatening objects (keys, change, iPads, etc.).

As part of a new industrial revolution in physical security, engineers have developed a welcome approach to expediting security screenings for threats through machine learning algorithms, facial recognition, and advanced millimeter wave and other RF sensors that non-intrusively screen people as they walk through scanning devices. It’s like walking through the sensors at the door at Nordstrom, the opposite of the prison-like experience of metal detectors with which we are all too familiar. These systems produce an analysis of what someone may be carrying in about a hundredth of a second, far faster than full-body scanners. What’s more, people do not need to empty their pockets during the process, further adding to the speed. Even so, these solutions can screen for firearms, explosives, and suicide vests or belts at a rate of about 900 people per hour through one lane.

Using AI, advanced screening systems enable people to walk through quickly and provide an automated decision without creating a bottleneck. This greatly improves traffic flow while also improving the accuracy of detection, and makes the technology suitable for larger facilities such as stadiums and other public venues, including Lincoln Center in New York City and the Oakland airport.

Apollo Shield’s anti-drone system.

So much for the land; what about the air? Increasingly, drones are being used as weapons. Famously, this was seen in a drone attack last year against Venezuelan president Nicolas Maduro. An airport drone incident drew widespread attention when a drone shut down Gatwick Airport in late 2018, inconveniencing and stranding tens of thousands of people.

People are rightly concerned about how easy it is to get a gun. Drones are also easy to acquire and operate, and quite difficult to monitor and to defend against. AI is now being deployed to prevent drone attacks, whether at airports, stadiums, or critical infrastructure. For example, new AI-powered radar technology is being used to detect, classify, monitor and safely capture drones identified as dangerous.

Additionally, these systems can rapidly develop a map of the airspace and effectively create a security “dome” around specific venues or areas. These systems have an integration component to coordinate with on-the-ground security teams and first responders. Some even deploy a capture drone to intercept a suspicious drone. When a threatening drone is detected and classified by the system as dangerous, the capture drone is dispatched and nets the invading drone. The hunter then tows the targeted drone to a safe zone for the threat to be evaluated and, if needed, destroyed.

While there is much dialogue about the potential risk of AI affecting our society, there is also a positive side to these technologies. Coupled with our best physical security approaches, AI can help prevent violent incidents.