California to close data breach notification loopholes under new law

California, which has some of the strongest data breach notification laws in the U.S., thinks it can do even better.

The Golden State’s attorney general, Xavier Becerra, announced a new bill Thursday that aims to close loopholes in the state’s existing data breach notification laws by expanding the requirement for companies to notify users or customers if their passport and government ID numbers, along with biometric data such as fingerprints or iris and facial recognition scans, have been stolen.

The updated draft legislation lands a few months after the Starwood hack, which Becerra and Democratic state assembly member Marc Levine, who introduced the bill, said prompted the law change.

Marriott-owned hotel chain Starwood said data on fewer than 383 million unique guests was stolen in the data breach, revealed in November, including guest names, postal addresses, phone numbers, dates of birth, genders, email addresses, some encrypted payment card data and other reservation information. Starwood also disclosed that five million passport numbers were stolen.

Although Starwood came clean and revealed the data breach, companies are not currently legally obligated to disclose that passport numbers or biometric data have been stolen. Under California state law, only Social Security numbers, driver’s license numbers, banking information, passwords, medical and health insurance information and data collected through automatic license plate recognition systems must be reported.

That’s set to change under the new California Assembly Bill 1130, the state attorney general said.

“We have an opportunity today to make our data breach law stronger and that’s why we’re moving today to make it more difficult for hackers and cybercriminals to get your private information,” said Becerra at a press conference in San Francisco. “AB 1130 closes a gap in California law and ensures that our state remains the nation’s leader in data privacy and protection,” he said.

Several other states, like Alabama, Florida and Oregon, already require data breach notifications when passport numbers are stolen; others, including Iowa and Nebraska, require notification for breaches of biometric data.

California remains, however, one of only a handful of states that require the provision of credit monitoring or identity theft protection after certain kinds of breaches.

Thursday’s bill comes less than a year after state lawmakers passed the California Consumer Privacy Act into law, greatly expanding privacy rights for consumers — similar to provisions provided to Europeans under the newly instituted General Data Protection Regulation. The state privacy law, passed in June and set to go into effect in 2020, was met with hostility by tech companies headquartered in the state, prompting a lobbying effort to push for a superseding but weaker federal privacy law.

Even the IAB warned adtech risks EU privacy rules

A privacy complaint targeting the behavioral advertising industry has a new piece of evidence that shows the Interactive Advertising Bureau (IAB) casting doubt on whether it’s possible to obtain informed consent from web users for the programmatic ad industry’s real-time bidding (RTB) system to broadcast their personal data.

The adtech industry functions by harvesting web users’ data, packaging individual identifiers and browsing data in bid requests that are systematically shared with third parties in order to solicit and scale advertiser bids for the user’s attention.

However a series of RTB complaints — filed last fall by Jim Killock, director of the Open Rights Group; Dr Johnny Ryan of private browser Brave; and Michael Veale, a data and policy researcher at University College London — allege this causes “wide-scale and systemic breaches” of European Union data protection rules.

So far complaints have been filed with data protection agencies in Ireland, the UK and Poland, though the intent is for the action to expand across the EU given that behavioral advertising isn’t region specific.

Google and the IAB set the RTB specifications used by the online ad industry and are thus the main targets here, with complainants advocating for amendments to the specification to bring the system into compliance with the bloc’s data protection regime.

We’ve covered the complaint before, including an earlier submission showing the highly sensitive inferences that can be included in bid requests. But documents obtained by the complainants via freedom of information request and newly published this week show the IAB itself warned in 2017 that the RTB system risks falling foul of the bloc’s privacy rules, and specifically the rules around consent under the EU’s General Data Protection Regulation (GDPR), which came into force last May.

The complainants have published the latest evidence on a new campaign website.

At the very least the admission looks awkward for the online ad industry body.

“Incompatible with consent under GDPR”

In a June 2017 email to senior personnel at the European Commission — now being used as evidence in the complaints — Townsend Feehan, the CEO of IAB Europe, writes that she wants to expand on concerns voiced at a roundtable session about the Commission’s ePrivacy proposals, which she claims could “mean the end of the online advertising business model”.

Feehan attached an 18-page document to the email in which the IAB can be seen lobbying against the Commission’s ePrivacy proposal — claiming it will have “serious negative impacts on the digital advertising industry, on European media, and ultimately on European citizens’ access to information and other online content and services”.

The IAB goes on to push for specific amendments to the proposed text of the regulation. (As we’ve written before, a major lobbying effort has blown up since GDPR was agreed to try to block updating the ePrivacy rules, which operate alongside it, covering marketing, electronic communications, cookies and other online tracking technologies.)

As it lobbies to water down ePrivacy rules, the IAB suggests it’s “technically impossible” for informed consent to function in a real-time bidding scenario — writing the following, in a segment entitled ‘Prior information requirement will “break” programmatic trading’:

As it is technically impossible for the user to have prior information about every data controller involved in a real-time bidding (RTB) scenario, programmatic trading, the area of fastest growth in digital advertising spend, would seem, at least prima facie, to be incompatible with consent under GDPR – and, as noted above, if a future ePrivacy Regulation makes virtually all interactions with the Internet subject solely to the consent legal basis, and consent is unavailable, then there will be no legal basis for such processing to take place or for media to monetise their content in this way.

The notion that it’s impossible to obtain informed consent from web users for processing their personal data prior to doing so is important because the behavioral ad industry, as it currently functions, includes personal data in bid requests that it systematically broadcasts to what can be thousands of third party companies.

Indeed, the crux of the RTB complaints is that personal data should be stripped out of these requests — and only contextual information broadcast for targeting ads — exactly because the current system systematically breaches the rights of European web users by failing to obtain their consent for personal data to be sucked out and handed over to scores of unknown entities.

In its lobbying efforts to knock the teeth out of the ePrivacy Regulation the IAB can here be seen making a similar point — when it writes that programmatic trading “would seem, at least prima facie, to be incompatible with consent under GDPR”. (Albeit, injecting some of its own qualifiers into the sentence.)

The IAB is certainly seeking to deploy pro-privacy arguments to try to dilute Europeans’ privacy rights.

Despite its own claimed reservations about there being no technical fix to get consent for programmatic trading under GDPR, the IAB nonetheless went on to launch a technical mechanism for managing — and, it claimed, complying with — GDPR consent requirements in April 2018, when it urged the industry to use its GDPR “Transparency & Consent Framework”.

But in another piece of evidence obtained by the group of individuals behind the RTB complaints — an IAB document, dated May 2018, intended for publishers making use of this framework — the IAB also acknowledges that: “Publishers recognize there is no technical way to limit the way data is used after the data is received by a vendor for decisioning/bidding on/after delivery of an ad”.

In a section on liability, the IAB document lays out other publisher concerns that each bid request assumes “indiscriminate rights for vendors” — and that “surfacing thousands of vendors with broad rights to use data without tailoring those rights may be too many vendors/permissions”.

So again, er, awkward.

Another piece of evidence now attached to the RTB complaints shows a set of sample bid requests from the IAB and Google’s documentation for users of their systems — with annotations by the complainants showing exactly how much personal data gets packaged up and systematically shared.

This can include a person’s latitude and longitude GPS coordinates; IP address; device-specific identifiers; various ID codes; inferred interests (which could include highly sensitive personal data); and the current webpage they’re looking at.
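To make that concrete, here is a rough, hypothetical sketch (in Python, with invented values, heavily abridged) of the kind of payload an OpenRTB-style bid request can carry, mapped to the categories above. The field names follow the public OpenRTB spec, but this is an illustration, not a real request from either company’s documentation:

```python
# A hypothetical, abridged OpenRTB-style bid request (all values invented),
# illustrating the categories of personal data described above.
bid_request = {
    "id": "auction-7f3a91",                  # unique ID for this ad auction
    "site": {"page": "https://example.com/health/depression-help"},  # current page
    "device": {
        "ip": "203.0.113.54",                # IP address
        "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # advertising identifier
        "geo": {"lat": 51.5009, "lon": -0.1337},        # GPS coordinates
    },
    "user": {
        "id": "exchange-user-482107",        # the exchange's ID for this person
        "data": [                            # inferred interest segments
            {"segment": [{"name": "interested_in_diabetes_treatment"}]}
        ],
    },
}
# This structure is serialized and broadcast to bidders, potentially
# thousands of companies, every time an ad slot loads.
```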

“The fourteen sample bid requests further prove that very personal data are contained in bid requests,” the complainants argue.

They have also included an estimated breakdown of seven major ad exchanges’ daily bid requests — Index Exchange, OpenX, Rubicon Project, Oath/AOL*, AppNexus, Smaato, Google DoubleClick — showing they collectively broadcast “hundreds of billions of bid requests per day”, to illustrate the scale of data being systematically broadcast by the ad industry.

“This suggests that the New Economics Foundation’s estimate in December that bid requests broadcast data about the average UK internet user 164 times a day was a conservative estimate,” they add.

The IAB has responded to the new evidence by couching the complainants’ claims as “false” and “intentionally damaging to the digital advertising industry and to European digital media”.

Regarding its 2017 document, in which it wrote that it was “technically impossible” for an Internet user to have prior information about every data controller involved in an RTB “scenario”, the IAB responds that “that was true at the time, but has changed since” — pointing to its Transparency & Consent Framework (TCF) as the claimed fix for that, and further claiming it “demonstrates that real-time bidding is certainly not ‘incompatible with consent under GDPR'”.

Here are the relevant paras of the IAB’s rebuttal on that:

The TCF provides a way to provide transparency to users about how, and by whom, their personal data is processed. It also enables users to express choices. Moreover, the TCF enables vendors engaged in programmatic advertising to know ahead of time whether their own and/or their partners’ transparency and consent status allows them to lawfully process personal data for online advertising and related purposes. IAB Europe’s submission to the European Commission in April 2017 showed that the industry needed to adapt to meet higher standards for transparency and consent under the GDPR. The TCF demonstrates how complex challenges can be overcome when industry players come together. But most importantly, the TCF demonstrates that real-time bidding is certainly not “incompatible with consent under GDPR”.

The OpenRTB protocol is a tool that can be used to determine which advertisement should be served on a given web page at a given time. Data can inform that determination. Like all technology, OpenRTB must be used in a way that complies with the law. Doing so is entirely possible and greatly facilitated by the IAB Europe Transparency & Consent Framework, whose whole raison d’être is to help ensure that the collection and processing of user data is done in full compliance with EU privacy and data protection rules.

The IAB goes on to couch the complaints as stemming from a “hypothetical possibility for personal data to be processed unlawfully in the course of programmatic advertising processes”.

“This hypothetical possibility arises because neither OpenRTB nor the TCF are capable of physically preventing companies using the protocol to unlawfully process personal data. But the law does not require them to,” the IAB claims.

However, the crux of the RTB complaint is that programmatic advertising’s processing of personal data is not adequately secure — and the complainants have GDPR Article 5, paragraph 1, point f to point to, which requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss”.

So it will be down to data protection authorities to determine what “appropriate security of personal data” means in this context. And whether behavioral advertising is inherently hostile to data protection law (not forgetting that other forms of non-personal-data-based advertising remain available, e.g. contextual advertising).

Discussing the complaint with TechCrunch late last year, Brave’s Ryan likened the programmatic ad system to dumping truck-loads of briefcases in the middle of a busy railway station in “the full knowledge that… business partners will all scramble around and try and grab them” — arguing that such a dysfunctional and systematic breaching of people’s data is lurking at the core of the online ad industry.

The solution Ryan and the other complainants are advocating for is not pulling the plug on the online ad industry entirely — but rather an update to the RTB spec to strip out personal data so that it respects Internet users’ rights. Ads can still be targeted contextually and successfully without Internet users having to be surveilled 24/7 online, is the claim.

They also argue that this would lead to a much better situation for quality online publishers because it would make it harder for their high value audiences to be arbitraged and commodified by privacy-hostile tracking technologies which — as it stands — trail Internet users everywhere they go. Albeit they freely concede that purveyors of low quality clickbait might fare less well.

*Disclosure: TechCrunch is owned by Verizon Media Group, aka Oath/AOL. We also don’t consider ourselves to be purveyors of low quality clickbait.

Facebook adds new background location privacy controls to its Android app

Facebook is updating its privacy settings on Android to make it easier for users to control what location data is sent to and stored by the company.

In its announcement, Facebook acknowledged that Android users have expressed concern over the app’s ability to continuously log location data in the background. Because Android’s location permissions have historically been all-or-nothing, unlike iOS’s, the Facebook app has long had the green light to collect location data whether or not a user is actively in the app.

While the company stopped short of admitting the practice, Facebook for Android users who previously had location services enabled can probably assume that Facebook was extensively tracking their location even when they weren’t actively using the app. Facebook describes the choice to toggle location history on as “[allowing] Facebook to build a history of precise locations received through Location Services on your devices.”

Android users who previously allowed Facebook access to their location data will retain those settings, though they’ll receive an alert about the new location controls. For users who kept the location settings for Facebook disabled, those permissions will remain toggled off. While these changes apply only to Android users, Facebook also noted that it would send out an alert to iOS users to remind them to reevaluate their location history settings.

If your location history isn’t something you’ve thought much about before, it’s worth spending a minute to consider how comfortable you are with that depth of personal data being transmitted continuously to a company with Facebook’s privacy track record. Remember: Once that information is out of your hands, you have little to no control over what happens with it.

When surveillance meets incompetence

Last week brought an extraordinary demonstration of the dangers of operating a surveillance state — especially a shabby one, as China’s apparently is. An unsecured database exposed millions of records of Chinese Muslims being tracked via facial recognition — an ugly trifecta of prejudice, bureaucracy, and incompetence.

The security lapse was discovered by Victor Gevers at the GDI Foundation, a security organization working in the public’s interest. Using the infamous but useful Shodan search engine, he found a MongoDB instance owned by the Chinese company SenseNets that stored an ever-increasing number of data points from a facial recognition system apparently at least partially operated by the Chinese government.

Many of the targets of this system were Uyghur Muslims, an ethnic and religious minority in China that the country has persecuted in what it considers secrecy, isolating them in remote provinces in what amount to religious gulags.

This database was no limited sting operation: some 2.5 million people had their locations and other data listed in it. Gevers told me that data points included national ID card number with issuance and expiry dates; sex; nationality; home address; DOB; photo; employer; and known previously visited face detection locations.

This data, Gevers said, plainly “had been visited multiple times by visitors all over the globe. And also the database was ransacked somewhere in December by a known actor,” one known as Warn, who has previously ransomed poorly configured MongoDB instances. So it’s all out there now.

A bad idea, poorly executed, with sad parallels

Courtesy: Victor Gevers/GDI.foundation

First off, it is bad enough that the government is using facial recognition systems to target minorities and track their movements, especially considering the treatment many of these people have already received. The ethical failure on full display here is colossal but unfortunately no more than we have come to expect from an increasingly authoritarian China.

Using technology as a tool to track and influence the populace is a proud bullet point on the country’s security agenda, but even allowing for the cultural differences that produce something like the social credit rating system, the wholesale surveillance of a minority group is beyond the pale. (And I say this in full knowledge of our own problematic methods in the U.S.)

But to do this thing so poorly is just embarrassing, and should serve as a warning to anyone who thinks a surveillance state can be well administered — in Congress, for example. We’ve seen security tech theater from China before, in the ineffectual and likely barely functioning AR displays for scanning nearby faces, but this is different — not a stunt but a major effort and correspondingly large failure.

The duty of monitoring these citizens was obviously at least partially outsourced to SenseNets (note this is different from SenseTime, but many of the same arguments will apply to any major people-tracking tech firm), which in a way mirrors the current controversy in the U.S. regarding Amazon’s Rekognition and its use — though on a far, far smaller scale — by police departments. It is not possible for federal or state actors to spin up and support the tech and infrastructure involved in such a system on short notice; like so many other things the actual execution falls to contractors.

And as SenseNets shows, these contractors can easily get it wrong, sometimes disastrously so.

MongoDB, it should be said, is not inherently difficult to secure; it’s just a matter of choosing the right settings in deployment (settings that are now but were not always the defaults). But for some reason people tend to forget to check those boxes when using the popular system; over and over we’ve seen poorly configured instances being accessible to the public, exposing hundreds of thousands of accounts. This latest one must surely be the largest and most damaging, however.
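To illustrate just how low that bar is, here is a minimal sketch (the host address is hypothetical) of how trivially an unsecured instance can be read, with the deployment settings that prevent it noted in comments:

```python
# A minimal sketch (hypothetical host) of reading an unsecured MongoDB
# instance. With authorization disabled and the server bound to a public
# interface, no credentials are needed at all.
from pymongo import MongoClient

client = MongoClient("mongodb://198.51.100.7:27017/")  # exposed instance
for db_name in client.list_database_names():           # no login required
    print(db_name, client[db_name].list_collection_names())

# The fix is a matter of deployment configuration, e.g. in mongod.conf:
#   net:
#     bindIp: 127.0.0.1        # don't listen on public interfaces
#   security:
#     authorization: enabled   # require authenticated users
```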

Gevers pointed out that the server was also highly vulnerable to MySQL exploits among other things, and was of course globally visible on Shodan. “So this was a disaster waiting to happen,” he said.

In fact it was a disaster waiting to happen twice; the company re-exposed the database a few days after securing it — after I wrote this story, but before I published.

Living in a glass house

The truth is, though, that any such centralized database of sensitive information is a disaster waiting to happen, for pretty much everyone involved. A facial recognition database full of carefully organized demographic data and personal movements is a hell of a juicy target, and as the SenseNets instance shows, malicious actors foreign and domestic will waste no time taking advantage of the slightest slip-up (to say nothing of a monumental failure).

We know major actors in the private sector fail at this stuff all the time and, adding insult to injury, are not held responsible — case in point: Equifax. We know our weapons systems are hackable; our electoral systems are trivial to compromise and under active attack; the census is a security disaster; and unsurprisingly the agencies responsible for making all these rickety systems are themselves both unprepared and ignorant, by the government’s own admission… not to mention unconcerned with due process.

The companies and governments of today are simply not equipped to handle the enormousness, or recognize the enormity, of large scale surveillance. Not only that, but the people that compose those companies and governments are far from reliable themselves, as we have seen from repeated abuse and half-legal uses of surveillance technologies for decades.

Naturally we must also consider the known limitations of these systems, such as their poor record with people of color, the lack of transparency with which they are generally implemented, and the inherently indiscriminate nature of their collection methods. The systems themselves are not ready.

A failure at any point in the process of legalizing, creating, securing, using, or administrating these systems can have serious political consequences (such as the exposure of a national agenda, which one can imagine could be held for ransom), commercial consequences (who would trust SenseNets after this? The government must be furious), and most importantly personal consequences — to the people whose data is being exposed.

And this is all due (here, in China, and elsewhere) to the desire of a government to demonstrate tech superiority, and of a company to enable that and enrich itself in the process.

In the case of this particular database Gevers says that although the policy of the GDI is one of responsible disclosure, he immediately regretted his role. “Personally it made angry after I found out that I unknowingly helping the company secure its oppression tool,” he told me. “This was not a happy experience.”

The best we can do — and what Gevers did — is to loudly proclaim how bad the idea is and how poorly it has been done, is being done, and will be done.

India’s state gas company leaks millions of Aadhaar numbers

Another security lapse has exposed millions of Aadhaar numbers.

This time, India’s state-owned gas company Indane left exposed a part of its website for dealers and distributors, even though it’s only supposed to be accessible with a valid username and password. But that part of the site was indexed by Google, allowing anyone to bypass the login page altogether and gain unfettered access to the dealer database.

The data was found by a security researcher who asked to remain anonymous for fear of retribution from the Indian authorities. Aadhaar’s regulator, the Unique Identification Authority of India (UIDAI), is known to quickly dismiss reports of data breaches or exposures, calling critical news articles “fake news,” and threatening legal action and filing police complaints against journalists.

Baptiste Robert, a French security researcher who goes by the online handle Elliot Alderson and has prior experience investigating Aadhaar exposures, investigated the exposure and provided the results to TechCrunch. Using a custom-built script to scrape the database, he found customer data for 11,000 dealers, including names and addresses of customers, as well as the customers’ confidential Aadhaar number hidden in the link of each record.

Robert, who explained more about his findings in a blog post, found 5.8 million Indane customer records before his script was blocked. In all, Robert estimated the total number affected could surpass 6.7 million customers.

We verified a sample of Aadhaar numbers from the site using UIDAI’s own web-based verification tool. Each record came back as a positive match.

A screenshot showing the unauthenticated access to Indane’s dealer portal, which included sensitive information on millions of Indian citizens. This was one dealer who had 4,034 customers. (Image: TechCrunch)

It’s the latest security lapse involving Aadhaar data, and the second lapse to embroil Indane. Last year, the gas and energy company was found leaking data from an endpoint with a direct connection to Aadhaar’s database. This time, however, the leak is believed to be limited to its own data.

Indane is said to have more than 90 million customers across India.

The exposure comes just weeks after an Indian state leaked the personal information of more than 160,000 government workers, including their Aadhaar numbers.

Aadhaar numbers aren’t secret, but are treated as confidential and private information similar to Social Security numbers. More than 90 percent of India’s population, some 1.23 billion citizens, are enrolled in Aadhaar, which the government and some private enterprises use to verify identities. The government uses Aadhaar to enroll citizens in state services, like voting, or applying for welfare or financial assistance. Some companies also pushed customers to link their bank accounts or phone service to their Aadhaar identity, but this requirement was recently struck down by the country’s Supreme Court. Many say linking their Aadhaar identities to their bank accounts has led to fraud.

The exposure is likely to reignite fresh concerns that the Aadhaar system is not as secure as UIDAI has claimed. Although few of the security incidents have involved a direct breach of Aadhaar’s central database, the weakest link remains the companies or government departments that rely on the data.

We contacted both Indane and UIDAI, but did not hear back.

Stop saying, “We take your privacy and security seriously”

In my years covering cybersecurity, there’s one variation of the same lie that floats above the rest. “We take your privacy and security seriously.”

You might have heard the phrase here and there. It’s a common trope used by companies in the wake of a data breach — either in a “mea culpa” email to their customers or a statement on their website to tell you that they care about your data, even though in the next sentence they all too often admit to misusing or losing it.

The truth is, most companies don’t care about the privacy or security of your data. They care about having to explain to their customers that their data was stolen.

I’ve never understood exactly what it means when a company says it values my privacy. If that were the case, data-hungry companies like Google and Facebook, which sell data about you to advertisers, wouldn’t even exist.

I was curious how often this go-to one-liner was used. I scraped every reported notification to the California attorney general, a requirement under state law in the event of a breach or security lapse, stitched them together, and converted the lot into machine-readable text.

About one-third of all 285 data breach notifications had some variation of the line.
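The counting step itself is simple once the letters are plain text. Here is a rough sketch of how such a tally could work; the directory layout and regex are illustrative, not the exact ones I used:

```python
# A rough sketch: count breach notices that use some variation of
# "we take your privacy/security seriously". Assumes the notices have
# already been scraped and saved as plain-text files.
import re
from pathlib import Path

PATTERN = re.compile(
    r"takes?\s+(?:your|the)\s+(?:privacy|security).{0,60}?seriously",
    re.IGNORECASE | re.DOTALL,
)

notices = sorted(Path("notices_txt").glob("*.txt"))
hits = [p.name for p in notices if PATTERN.search(p.read_text(errors="ignore"))]
print(f"{len(hits)} of {len(notices)} notices use a variation of the line")
```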

It doesn’t show that companies care about your data. It shows that they don’t know what to do next.

A perfect example of a company not caring: Last week, we reported several OkCupid users had complained their accounts were hacked. More likely than not, the accounts were hit by credential stuffing, where hackers take lists of usernames and passwords and try to brute-force their way into people’s accounts. Other companies have learned from such attacks and took the time to improve account security, like rolling out two-factor authentication.

Instead, OkCupid’s response was to deflect, defend, and deny, a common way for companies to get ahead of a negative story. It looked like this:

  • Deflect: “All websites constantly experience account takeover attempts,” the company said.
  • Defend: “There’s no story here,” the company later told another publication.
  • Deny: “No further comment,” when asked what the company will do about it.

It would’ve been great to hear OkCupid say it cared about the matter and what it was going to do about it.
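The basic defenses aren’t exotic, either. Beyond two-factor authentication, one cheap, widely used measure against credential stuffing is screening passwords against known breach corpora, for example via the Pwned Passwords k-anonymity API. Here’s a sketch (the example password is obviously illustrative):

```python
# A sketch of one defense against credential stuffing: reject passwords
# that already appear in public breach data. Uses the Pwned Passwords
# k-anonymity API -- only the first five SHA-1 hex characters of the
# password hash ever leave your server.
import hashlib
import requests

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
    resp.raise_for_status()
    for line in resp.text.splitlines():          # each line: SUFFIX:COUNT
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if breach_count("hunter2") > 0:
    print("Password appears in breach data -- prompt the user to change it.")
```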

Every industry has long neglected security. Most of the breaches today are the result of shoddy security over years or sometimes decades, coming back to haunt them. Nowadays, every company has to be a security company, whether it’s a bank, a toymaker, or a single app developer.

Companies can start off small: tell people how to contact them with security flaws, roll out a bug bounty to encourage bug submissions, and grant good-faith researchers safe harbor by promising not to sue. Startup founders can also fill their executive suite with a chief security officer from the very beginning. They’d be better off than 95 percent of the world’s richest companies that haven’t even bothered.

But this isn’t what happens. Instead, companies would rather just pay the fines.

Target paid $18.5 million for a data breach that ensnared 41 million credit cards, compared to full-year revenues of $72 billion. Anthem paid $115 million in fines after a data breach put 79 million insurance holders’ data at risk, on revenues that year of $79 billion. And, remember Equifax? The biggest breach of 2017 led to all talk but no action.

With no incentive to change, companies will continue to parrot their usual hollow remarks. Instead, they should do something about it.

UK parliament calls for antitrust, data abuse probe of Facebook

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage; and what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users to build voter profiles to try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’, when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report.

Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data got misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is aimed at social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this an example of “a profound failure” of internal governance, also branding it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

What business leaders can learn from Jeff Bezos’ leaked texts

The ‘below the belt selfie’ media circus surrounding Jeff Bezos has made encrypted communications top of mind among nervous executive handlers. Their assumption is that a product with serious cryptography like Wickr – where I work – or Signal could have helped Mr. Bezos and Amazon avoid this drama.

It’s a good assumption, but a troubling conclusion.

I worry that moments like these will drag serious cryptography down to the level of the National Enquirer. I’m concerned that this media cycle may lead people to view privacy and cryptography as a safety net for billionaires rather than a transformative solution for data minimization and privacy.

We live in the chapter of computing when data is mostly unprotected because of corporate indifference. The leaders of our new economy – like the vast majority of society – value convenience and short-term gratification over the security and privacy of consumer, employee and corporate data.  

We cannot let this media cycle pass without recognizing that when corporate executives take a laissez-faire approach to digital privacy, their employees and organizations will follow suit.

Two recent examples illustrate the privacy indifference of our leaders…

  • The most powerful executive in the world is either indifferent to, or unaware that, unencrypted online flirtations would be accessed by nation states and competitors.
  • 2016 presidential campaigns were either indifferent to, or unaware that, unencrypted online communications detailing “off-the-record” correspondence with media and payments to adult actor(s) would be accessed by nation states and competitors.

If our leaders do not respect and understand online security and privacy, then their organizations will not make data protection a priority. It’s no surprise that we see a constant stream of large corporations and federal agencies breached by nation states and competitors.  Who then can we look to for leadership?

GDPR is an early attempt by regulators to lead. The European Union enacted GDPR to ensure individuals own their data and to enforce penalties on companies who do not protect personal data. It applies to all data processors, but the EU is clearly focused on sending a message to the large US-based data processors – Amazon, Facebook, Google, Microsoft, etc. In January, France’s National Data Protection Commission sent a message by fining Google $57 million for breaching GDPR rules. It was an unprecedented fine that garnered international attention. However, we must remember that in 2018 Google’s revenues were greater than $300 million … per day! GDPR is, at best, an annoying speed-bump in the monetization strategy of large data processors.

It is through this lens that Senator Ron Wyden’s (Oregon) idealistic call for billions of dollars in corporate fines and jail time for executives who enable privacy breaches can be seen as reasonable. When record financial penalties are inconsequential, it is logical to pursue other avenues to protect our data.

Real change will come when our leaders understand that data privacy and security can increase profitability and reliability.  For example, the Compliance, Governance and Oversight Council reports that an enterprise will spend as much as $50 million to protect 10 petabytes of data, and that $34.5 million of this is spent on protecting data that should be deleted. Serious efficiencies are waiting to be realized and serious cryptography can help.  

So, thank you, Mr. Bezos, for igniting corporate interest in secure communications. Let’s hope this news cycle convinces our corporate leaders and elected officials to embrace data privacy, protection and minimization because it is responsible, profitable and efficient. We need leaders and elected officials to set an example and respect their own data and privacy if we have any hope of their organizations protecting ours.

Even years later, Twitter doesn’t delete your direct messages

When does “delete” really mean delete? Not always, or even at all, if you’re Twitter.

Twitter retains direct messages for years, including messages you and others have deleted, but also data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.

Saini found years-old messages in a file from an archive of his data obtained through the website, including messages from accounts that were no longer on Twitter. He had also filed a similar bug a year earlier, not disclosed until now, after finding that a since-deprecated API allowed him to retrieve direct messages even after a message was deleted by both the sender and the recipient — though that bug wasn’t able to retrieve messages from suspended accounts.

Saini told TechCrunch that he had “concerns” that the data was retained by Twitter for so long.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting them from their own. Twitter changed this years ago, and now only allows a user to delete messages from their own account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account disappears, along with its data.

But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. By downloading your account’s data, it’s possible to download all of the data Twitter stores on you.

A conversation, dated March 2016, with a suspended Twitter account was still retrievable today. (Image: TechCrunch)

Saini says this is a “functional bug” rather than a security flaw, but argued that the bug allows anyone a “clear bypass” of Twitter mechanisms meant to prevent access to suspended or deactivated accounts.

But it’s also a privacy matter, and a reminder that “delete” doesn’t mean delete — especially with your direct messages. That can open up users, particularly high-risk accounts like journalists and activists, to government data demands that call for data from years earlier.

That’s despite Twitter’s claim to law enforcement that once an account has been deactivated, there is only “a very brief period in which we may be able to access account information, including tweets.”

A Twitter spokesperson said the company was “looking into this further to ensure we have considered the entire scope of the issue.”

Retaining direct messages for years may put the company in a legal grey area amid Europe’s new data protection laws, which allow users to demand that a company delete their data.

Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, said there’s “no formality at all” to how a user can ask for their data to be deleted. Any request from a user to delete their data that’s directly communicated to the company “is a valid exercise” of a user’s rights, he said.

Companies can be fined up to four percent of their annual turnover for violating GDPR rules.

“A delete button is perhaps a different matter, as it is not obvious that ‘delete’ means the same as ‘exercise my right of erasure’,” said Brown. Given that there’s no case law yet under the new General Data Protection Regulation regime, it will be up to the courts to decide, he said.

When asked if Twitter thinks that consent to retain direct messages is withdrawn when a message or account is deleted, Twitter’s spokesperson had “nothing further” to add.

Amazon buys Eero: What does it mean for your privacy?

In case you hadn’t seen, Amazon is buying router maker Eero. And in case you hadn’t heard, people are pretty angry.

Amid a deluge of angry tweets and social media posts, many have taken to reading tea leaves to try to understand what the acquisition means for ordinary privacy-minded folks like you and me. Not many had much love for Amazon on the privacy front. A lot of people liked Eero because it wasn’t attached to one of the big tech giants. Now that it’s to be part of Amazon, some are anticipating the worst for their privacy.

The many objections we’ve seen boil down to one key concern: “Amazon shouldn’t have access to all internet traffic.”

Rightfully so! It’s bad enough that Amazon wants to put a listening speaker in every corner of our home. How worried should you be that Amazon flips the switch on Eero and it’s no longer the privacy-minded router it once was?

This calls for a lesson in privacy pragmatism, and one of cautious optimism.

Don’t panic — yet

Nothing will change overnight. The acquisition will take time, and any possible changes will take longer. Eero has an easy-to-read privacy policy, and the company tweeted that it will “continue to protect” customer privacy, noting that Eero “does not track customers’ internet activity and this policy will not change with the acquisition.”

That’s true! Eero doesn’t monitor your internet activity. We scoured the privacy policy, and the most the router collects is some basic information that each device connecting to it already broadcasts, such as the device’s name and its unique networking address. We didn’t see anything beyond boilerplate language for a smart router. And there’s nothing in there that says even vaguely that Eero can or will spy on your internet traffic.

Among the many reasons, it (mostly) couldn’t even if it wanted to.

Nearly every app you open and website you load now does so over HTTPS, largely because Google has taken to security-shaming sites that don’t use it. That’s an encrypted connection between your computer and the app or website: not even your router can see the contents of your internet traffic. Only in rare cases, like Facebook’s creepy “research” app, which forced users to give it “root” access to their device’s network traffic, can companies snoop on everything you do.

If Eero starts asking you to install root certificates on your devices, then we have a problem.

Fear the internet itself

The reality is that your internet service provider knows more about your internet activity than your router does.

Your internet provider not only processes your internet requests, it routes and directs them. Even when the traffic is HTTPS-encrypted, your internet provider for the most part knows which domains you visit, and when, and with that it can sometimes figure out why. With that information, your internet provider can piece together a timeline of your online life. It’s the reason why HTTPS and using privacy-focused DNS services are so important.
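As one example of what a privacy-focused DNS setup looks like, here is a sketch of a DNS-over-HTTPS lookup against Cloudflare’s public JSON resolver API (one of several such services; Google and others run similar endpoints). The query rides inside an encrypted HTTPS connection instead of plaintext UDP, so the resolver’s operator sees it, but passive observers on the path do not:

```python
# A sketch of a DNS-over-HTTPS lookup using Cloudflare's public JSON API,
# one example of a privacy-focused resolver. The lookup travels inside
# HTTPS, so on-path observers see only an encrypted connection to the
# resolver rather than a plaintext DNS query.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```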

It doesn’t stop there. Once your internet traffic goes past your router, you’re into the big wide world of the world wide web. Your router is the least of your troubles: it’s a jungle of data collection out there.

Props to the spirited gentleman who tweeted that he trusts Google “way more with my privacy than Amazon” for the sole reason that, “Amazon wants to use the data to sell me more stuff vs. Google just wants to serve targeted ads.” Think of that: Amazon wants to sell you products from its own store, but somehow that’s worse than Google selling its profiles of who it thinks you are to advertisers to try to sell you things?

Every time you go online, what’s your first hit? Google. Every time you open a new browser window, it’s Google. Every time you want to type something in to the omnibar at the top of your browser, it’s Google. Google knows more about your browsing history than your router does because most people use Google as their one-stop directory for all they need on the internet. Your internet provider may not be able to see past the HTTPS domain that you’re visiting, but Google, for one, tracks which search queries you type, which websites you go to, and even tracks you from site-to-site with its pervasive ad network.

At least when you buy a birthday present or a sex toy (or both?) from Amazon, that knowledge stays in-house.

Knock knock, it’s Amazon already

If Amazon wanted to track you, it already could.

Everyone seems to forget Amazon’s massive cloud business. Most of the internet these days runs on Amazon Web Services, the company’s dedicated cloud unit that made up all of the company’s operating income in 2017. It’s a cash cow and an infrastructure giant, and its retail prowess is just part of the company’s business.

Think you can escape Amazon? Just look at what happened when Gizmodo’s Kashmir Hill tried to cut Amazon out of her life. She found it “impossible.” Why? Everything seems to rely on Amazon these days — from Spotify’s and Netflix’s back-ends to popular consumer and government websites, many major apps and services run on Amazon’s cloud. She ended up blocking 23 million IP addresses controlled by Amazon, and still struggled.

In a single week, Hill found 95,260 total attempts by her devices to communicate with Amazon, compared to less than half that for Google at 40,527 requests, and a paltry 36 attempts for Apple. Amazon already knows which sites you go to — because it runs most of them.

So where does that leave me?

Your router is a lump of plastic. And it should stay that way. We can all agree on that.

It’s a natural fear that when “big tech” wades in, it’s going to ruin everything. Especially with Amazon. The company’s track record on transparency is lackluster at best, and downright evasive at its worst. But just because Amazon is coming in doesn’t mean it’ll necessarily become a surveillance machine. Even Google’s own mesh router system, Eero’s direct competitor, promises to “not track the websites you visit or collect the content of any traffic on your network.”

Amazon can’t turn the Eero into a surveillance hub overnight, but it doesn’t mean it won’t try.

All you can do is keep a close eye on the company’s privacy policy. We’ll do it for you. And in the event of a sudden change, we’ll let you know. Just make sure you have an escape plan.