California to close data breach notification loopholes under new law

California, which has some of the strongest data breach notification laws in the U.S., thinks it can do even better.

The Golden State’s attorney general, Xavier Becerra, announced a new bill Thursday that aims to close loopholes in the state’s existing data breach notification laws by expanding the requirement that companies notify users or customers if their passport and government ID numbers, or biometric data such as fingerprints and iris and facial recognition scans, have been stolen.

The updated draft legislation lands a few months after the Starwood hack, which Becerra and Democratic state assembly member Marc Levine, who introduced the bill, said prompted the law change.

Marriott, which owns the Starwood hotel brand, said data on fewer than 383 million unique guests was stolen in the data breach, revealed in November, including guest names, postal addresses, phone numbers, dates of birth, genders, email addresses, some encrypted payment card data and other reservation information. The company also disclosed that five million passport numbers were stolen.

Although Starwood came clean and revealed the data breach, companies are not currently legally obligated to disclose that passport numbers or biometric data have been stolen. Under California state law, only Social Security numbers, driver’s license numbers, banking information, passwords, medical and health insurance information and data collected through automatic license plate recognition systems must be reported.

That’s set to change, under the new California assembly bill 1130, the state attorney general said.

“We have an opportunity today to make our data breach law stronger and that’s why we’re moving today to make it more difficult for hackers and cybercriminals to get your private information,” said Becerra at a press conference in San Francisco. “AB 1130 closes a gap in California law and ensures that our state remains the nation’s leader in data privacy and protection,” he said.

Several other states, including Alabama, Florida and Oregon, already require data breach notifications when passport numbers are stolen, while others, such as Iowa and Nebraska, require notification for stolen biometric data.

California remains, however, one of only a handful of states that require the provision of credit monitoring or identity theft protection after certain kinds of breaches.

Thursday’s bill comes less than a year after state lawmakers passed the California Consumer Privacy Act into law, greatly expanding privacy rights for consumers — similar to provisions granted to Europeans under the newly instituted General Data Protection Regulation. The state privacy law, passed in June and set to go into effect in 2020, was met with hostility by tech companies headquartered in the state, prompting a lobbying effort for a superseding but weaker federal privacy law.

Uber fixes bug that exposed third-party app secrets

Uber has fixed a bug that allowed access to the secret developer tokens of any app that integrated with the ride-sharing service, according to the security researchers who discovered the flaw.

In a blog post, Anand Prakash and Manisha Sangwan explained that a vulnerable developer endpoint on Uber’s back-end systems — since locked down — was mistakenly spitting back client secrets and server tokens for apps authorized by the Uber account owner.

Client secrets and server tokens are considered highly sensitive bits of information for developers as they allow apps to communicate with Uber’s servers. For its part, Uber warns developers to “never share” the keys with anyone.
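For a sense of why that matters: client secrets are typically used in an OAuth-style exchange where the secret alone is enough to mint tokens and act as the app. Below is a minimal sketch of a generic client-credentials flow — the endpoint URL and parameter names are illustrative placeholders, not Uber’s documented API.

```python
# Minimal sketch of why a leaked client secret matters: in a standard
# OAuth 2.0 client-credentials flow, the secret alone is enough to obtain
# access tokens on behalf of the app. The URL below is a placeholder,
# not Uber's actual endpoint.
import requests

TOKEN_URL = "https://auth.example.com/oauth/v2/token"  # hypothetical

def fetch_app_token(client_id: str, client_secret: str) -> str:
    """Exchange a client ID and secret for an app-scoped access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# Anyone holding a leaked secret could call fetch_app_token() and act as
# the app, which is why providers tell developers to never share it.
```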

Prakash, founder of Bangalore-based AppSecure, told TechCrunch that the bug was “very easy” to exploit and could have allowed an attacker to obtain trip receipts and invoices. But he didn’t test how much further the access could have taken him, as he immediately reported the bug to Uber.

Uber took a month to fix the bug, according to the disclosure timeline, and considered it serious enough to email developers last week warning of the possible exposure.

“At this time, we have no indication that the issue was exploited, but suggest reviewing your application’s practices out of an abundance of caution,” Uber’s email to developers said. “We have mitigated the issue by restricting the information returned to the name and id of the authorized applications.”

Uber did not respond to a request for comment. If that changes, we’ll update.

Prakash was paid $5,000 under Uber’s bug bounty program for reporting the bug, and currently ranks among the program’s top five submitters.

The security researcher is no stranger to Uber’s bug bounty. Two years ago, he found and successfully exploited a bug that allowed him to receive free trips in both the U.S. and his native India.

Xage brings role-based single sign-on to industrial devices

Traditional industries like oil and gas and manufacturing often use equipment that was created in a time when remote access wasn’t a gleam in an engineer’s eye, and hackers had no way of connecting to them. Today, these devices require remote access and some don’t have even rudimentary authentication. Xage, the startup that wants to make industrial infrastructure more secure, announced a new solution today to bring single sign-on and role-based control to even the oldest industrial devices.

Company CEO Duncan Greatwood says that some companies have adopted firewall technology, but if a hacker breaches the firewall, there often isn’t even a password to defend these kinds of devices. He adds that hackers have been increasingly targeting industrial infrastructure.

Xage has come up with a way to help these companies with its latest product called Xage Enforcement Point (XEP). This tool gives IT a way to control these devices with a single password, a kind of industrial password manager. Greatwood says that some companies have hundreds of passwords for various industrial tools. Sometimes, whether because of distance across a factory floor, or remoteness of location, workers would rather adjust these machines remotely when possible.

While operations teams want to simplify this for workers with remote access, IT worries about security, and that tension can hold companies back, force them into big firewall investments or, in some cases, lead them to implement these kinds of solutions without adequate protection.

XEP helps bring a level of protection to these pieces of equipment. “XEP is a relatively small piece of software that can run on a tiny credit-card size computer, and you simply insert it in front of the piece of equipment you want to protect,” Greatwood explained.

The rest of the Xage platform adds additional security. The company introduced fingerprinting last year, which gives unique identifiers to these pieces of equipment. If a hacker tries to spoof a piece of equipment, and the device lacks a known fingerprint, they can’t get on the system.

Xage also makes use of the blockchain and a rules engine to secure industrial systems. The customer can define rules and use the blockchain as an enforcement mechanism: each node in the chain carries the rules, and a customer-defined number of nodes must agree that the person, machine or application trying to gain access is a legitimate actor.
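Xage hasn’t published the internals, but the mechanism Greatwood describes amounts to an M-of-N quorum check: access is granted only if enough nodes independently approve the request. Here is a minimal sketch of that logic, with all names and rules hypothetical rather than drawn from Xage’s product.

```python
# Illustrative M-of-N quorum check, not Xage's actual implementation.
# Each "node" independently evaluates the access request against the
# customer-defined rules; access is granted only if enough nodes agree.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AccessRequest:
    actor: str    # person, machine or application asking for access
    device: str   # the industrial device being accessed
    action: str   # e.g. "read", "adjust"

# A node here is just a rule evaluator; in a real deployment each one
# would be a separate participant in the distributed ledger.
Node = Callable[[AccessRequest], bool]

def quorum_allows(request: AccessRequest, nodes: List[Node], required: int) -> bool:
    """Grant access only if at least `required` nodes approve the request."""
    votes = sum(1 for node in nodes if node(request))
    return votes >= required

# Example: three nodes, at least two must agree.
nodes = [
    lambda r: r.actor in {"operator-7", "scada-app"},   # known actors
    lambda r: r.action in {"read", "adjust"},           # permitted actions
    lambda r: r.device.startswith("pump-"),             # in-scope devices
]
request = AccessRequest(actor="operator-7", device="pump-12", action="adjust")
print(quorum_allows(request, nodes, required=2))  # True
```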

The platform taken as a whole provides several levels of protection in an effort to discourage hackers who are trying to breach these systems. Greatwood says that while companies don’t usually get rid of tools they already have like firewalls, they may scale back their investment after buying the Xage solution.

Xage was founded at the end of 2017. It has raised $16 million to this point and has 30 employees. Greatwood didn’t want to discuss a specific number of customers, but did say they were making headway in oil and gas, renewable energy, utilities and manufacturing.

When surveillance meets incompetence

Last week brought an extraordinary demonstration of the dangers of operating a surveillance state — especially a shabby one, as China’s apparently is. An unsecured database exposed millions of records of Chinese Muslims being tracked via facial recognition — an ugly trifecta of prejudice, bureaucracy, and incompetence.

The security lapse was discovered by Victor Gevers at the GDI Foundation, a security organization working in the public’s interest. Using the infamous but useful Shodan search engine, he found a MongoDB instance owned by the Chinese company SenseNets that stored an ever-increasing number of data points from a facial recognition system apparently at least partially operated by the Chinese government.

Many of the targets of this system were Uyghur Muslims, an ethnic and religious minority in China that the country has persecuted in what it considers secrecy, isolating them in remote provinces in what amount to religious gulags.

This database was no limited sting operation: some 2.5 million people had their locations and other data listed in it. Gevers told me that data points included national ID card number with issuance and expiry dates; sex; nationality; home address; DOB; photo; employer; and known previously visited face detection locations.

This data, Gevers said, plainly “had been visited multiple times by visitors all over the globe. And also the database was ransacked somewhere in December by a known actor,” one known as Warn, who has previously ransomed poorly configured MongoDB instances. So it’s all out there now.

A bad idea, poorly executed, with sad parallels

Courtesy: Victor Gevers/GDI.foundation

First off, it is bad enough that the government is using facial recognition systems to target minorities and track their movements, especially considering the treatment many of these people have already received. The ethical failure on full display here is colossal but unfortunately no more than we have come to expect from an increasingly authoritarian China.

Using technology as a tool to track and influence the populace is a proud bullet point on the country’s security agenda, but even allowing for the cultural differences that produce something like the social credit rating system, the wholesale surveillance of a minority group is beyond the pale. (And I say this in full knowledge of our own problematic methods in the U.S.)

But to do this thing so poorly is just embarrassing, and should serve as a warning to anyone who thinks a surveillance state can be well administered — in Congress, for example. We’ve seen security tech theater from China before, in the ineffectual and likely barely functioning AR displays for scanning nearby faces, but this is different — not a stunt but a major effort and correspondingly large failure.

The duty of monitoring these citizens was obviously at least partially outsourced to SenseNets (note this is different from SenseTime, but many of the same arguments will apply to any major people-tracking tech firm), which in a way mirrors the current controversy in the U.S. regarding Amazon’s Rekognition and its use — though on a far, far smaller scale — by police departments. It is not possible for federal or state actors to spin up and support the tech and infrastructure involved in such a system on short notice; like so many other things the actual execution falls to contractors.

And as SenseNets shows, these contractors can easily get it wrong, sometimes disastrously so.

MongoDB, it should be said, is not inherently difficult to secure; it’s just a matter of choosing the right settings in deployment (settings that are now but were not always the defaults). But for some reason people tend to forget to check those boxes when using the popular system; over and over we’ve seen poorly configured instances being accessible to the public, exposing hundreds of thousands of accounts. This latest one must surely be the largest and most damaging, however.
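Checking whether an instance has been left wide open is itself trivial. Here is a minimal sketch using the pymongo driver — the host is a placeholder, and the check only distinguishes “answers without credentials” from “requires auth or is unreachable.”

```python
# Sketch: probe whether a MongoDB instance answers administrative queries
# without credentials. Host and port are placeholders.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_wide_open(host: str, port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # listDatabases requires authentication when access control is on.
        client.list_database_names()
        return True   # answered with no credentials: exposed
    except OperationFailure:
        return False  # auth required: at least minimally secured
    except ServerSelectionTimeoutError:
        return False  # unreachable (e.g. bound to localhost or firewalled)

print(is_wide_open("db.example.com"))
```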

Gevers pointed out that the server was also highly vulnerable to MySQL exploits among other things, and was of course globally visible on Shodan. “So this was a disaster waiting to happen,” he said.

In fact, it was a disaster waiting to happen twice: the company re-exposed the database a few days after securing it, after this story was written but before it was published.

Living in a glass house

The truth is, though, that any such centralized database of sensitive information is a disaster waiting to happen, for pretty much everyone involved. A facial recognition database full of carefully organized demographic data and personal movements is a hell of a juicy target, and as the SenseNets instance shows, malicious actors foreign and domestic will waste no time taking advantage of the slightest slip-up (to say nothing of a monumental failure).

We know major actors in the private sector fail at this stuff all the time and, adding insult to injury, are not held responsible — case in point: Equifax. We know our weapons systems are hackable; our electoral systems are trivial to compromise and under active attack; the census is a security disaster; and unsurprisingly the agencies responsible for making all these rickety systems are themselves both unprepared and ignorant, by the government’s own admission… not to mention unconcerned with due process.

The companies and governments of today are simply not equipped to handle the enormousness, or recognize the enormity, of large scale surveillance. Not only that, but the people that compose those companies and governments are far from reliable themselves, as we have seen from repeated abuse and half-legal uses of surveillance technologies for decades.

Naturally we must also consider the known limitations of these systems, such as their poor record with people of color, the lack of transparency with which they are generally implemented, and the inherently indiscriminate nature of their collection methods. The systems themselves are not ready.

A failure at any point in the process of legalizing, creating, securing, using, or administrating these systems can have serious political consequences (such as the exposure of a national agenda, which one can imagine could be held for ransom), commercial consequences (who would trust SenseNets after this? The government must be furious), and most importantly personal consequences — to the people whose data is being exposed.

And this is all due (here, in China, and elsewhere) to the desire of a government to demonstrate tech superiority, and of a company to enable that and enrich itself in the process.

In the case of this particular database, Gevers says that although the policy of the GDI is one of responsible disclosure, he immediately regretted his role. “Personally it made [me] angry after I found out that I [was] unknowingly helping the company secure its oppression tool,” he told me. “This was not a happy experience.”

The best we can do, and which Gevers did, is to loudly proclaim how bad the idea is and how poorly it has been done, is being done, and will be done.

India’s state gas company leaks millions of Aadhaar numbers

Another security lapse has exposed millions of Aadhaar numbers.

This time, India’s state-owned gas company Indane left part of its website for dealers and distributors exposed, even though it’s only supposed to be accessible with a valid username and password. But that part of the site was indexed by Google, allowing anyone to bypass the login page altogether and gain unfettered access to the dealer database.

The data was found by a security researcher who asked to remain anonymous for fear of retribution from the Indian authorities. Aadhaar’s regulator, the Unique Identification Authority of India (UIDAI), is known to quickly dismiss reports of data breaches or exposures, calling critical news articles “fake news,” and threatening legal action and filing police complaints against journalists.

Baptiste Robert, a French security researcher who goes by the online handle Elliot Alderson and has prior experience investigating Aadhaar exposures, investigated the exposure and provided the results to TechCrunch. Using a custom-built script to scrape the database, he found customer data for 11,000 dealers, including names and addresses of customers, as well as the customers’ confidential Aadhaar number hidden in the link of each record.

Robert, who explained more about his findings in a blog post, found 5.8 million Indane customer records before his script was blocked. In all, Robert estimated the total number affected could surpass 6.7 million customers.

We verified a sample of Aadhaar numbers from the site using UIDAI’s own web-based verification tool. Each record came back as a positive match.

A screenshot showing the unauthenticated access to Indane’s dealer portal, which included sensitive information on millions of Indian citizens. This was one dealer who had 4,034 customers. (Image: TechCrunch)

It’s the latest security lapse involving Aadhaar data, and the second lapse to embroil Indane. Last year, the gas and energy company was found leaking data from an endpoint with a direct connection to Aadhaar’s database. This time, however, the leak is believed to be limited to its own data.

Indane is said to have more than 90 million customers across India.

The exposure comes just weeks after an Indian state leaked the personal information of more than 160,000 government workers, including their Aadhaar numbers.

Aadhaar numbers aren’t secret, but are treated as confidential and private information, similar to Social Security numbers. More than 90 percent of India’s population, some 1.23 billion citizens, are enrolled in Aadhaar, which the government and some private enterprises use to verify identities. The government uses Aadhaar to enroll citizens in state services, like voting, or applying for welfare or financial assistance. Some companies also pushed customers to link their bank accounts or phone service to their Aadhaar identity, but the country’s Supreme Court recently struck down that practice. Many say linking their Aadhaar identities to their bank accounts has led to fraud.

The exposure is likely to reignite fresh concerns that the Aadhaar system is not as secure as UIDAI has claimed. Although few of the security incidents have involved a direct breach of Aadhaar’s central database, the weakest link remains the companies or government departments that rely on the data.

We contacted both Indane and UIDAI, but did not hear back.

Stop saying, “We take your privacy and security seriously”

In my years covering cybersecurity, there’s one variation of the same lie that floats above the rest. “We take your privacy and security seriously.”

You might have heard the phrase here and there. It’s a common trope used by companies in the wake of a data breach — either in a “mea culpa” email to their customers or a statement on their website to tell you that they care about your data, even though in the next sentence they all too often admit to misusing or losing it.

The truth is, most companies don’t care about the privacy or security of your data. They care about having to explain to their customers that their data was stolen.

I’ve never understood exactly what it means when a company says it values my privacy. If that were the case, data hungry companies like Google and Facebook, which sell data about you to advertisers, wouldn’t even exist.

I was curious how often this go-to one-liner was used. I scraped every reported notification to the California attorney general — a requirement under state law in the event of a breach or security lapse — stitched them together, and converted the text into a machine-readable format.

About one-third of all 285 data breach notifications had some variation of the line.
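The counting step is simple once the notifications are in plain text. Here is a rough sketch of that kind of script — the directory layout and the regular expression are illustrative, not the exact methodology used for this count.

```python
# Sketch: count how many breach notifications contain some variation of
# "we take your privacy and security seriously". Directory and regex are
# illustrative placeholders.
import re
from pathlib import Path

PATTERN = re.compile(
    r"take[s]?\s+(?:your|our customers'?|the)\s+(?:privacy|security)"
    r"(?:\s+and\s+(?:privacy|security))?\s+(?:very\s+)?seriously",
    re.IGNORECASE,
)

def count_matches(directory: str) -> tuple[int, int]:
    files = list(Path(directory).glob("*.txt"))
    hits = sum(1 for f in files if PATTERN.search(f.read_text(errors="ignore")))
    return hits, len(files)

hits, total = count_matches("notifications/")
print(f"{hits} of {total} notifications use a variation of the line")
```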

It doesn’t show that companies care about your data. It shows that they don’t know what to do next.

A perfect example of a company not caring: Last week, we reported several OkCupid users had complained their accounts were hacked. More likely than not, the accounts were hit by credential stuffing, where hackers take lists of usernames and passwords and try to brute-force their way into people’s accounts. Other companies have learned from such attacks and took the time to improve account security, like rolling out two-factor authentication.
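Two-factor authentication blunts credential stuffing because a stolen password alone no longer unlocks the account. The time-based one-time password (TOTP) check behind most authenticator apps fits in a few lines; this is a simplified illustration of the standard RFC 6238 algorithm, not any particular site’s implementation.

```python
# Simplified TOTP (RFC 6238) verification using only the standard library.
# Real deployments also handle clock-drift windows, rate limiting and
# secret storage; this only shows the core check.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted_code: str) -> bool:
    # A stolen password is no longer enough: the attacker also needs the
    # current code generated from the user's device.
    return hmac.compare_digest(totp(secret_b32), submitted_code)

secret = "JBSWY3DPEHPK3PXP"  # example shared secret (base32)
print(verify(secret, totp(secret)))  # True
```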

Instead, OkCupid’s response was to deflect, defend, and deny, a common way for companies to get ahead of a negative story. It looked like this:

  • Deflect: “All websites constantly experience account takeover attempts,” the company said.
  • Defend: “There’s no story here,” the company later told another publication.
  • Deny: “No further comment,” when asked what the company will do about it.

It would’ve been great to hear OkCupid say it cared about the matter and what it was going to do about it.

Every industry has long neglected security. Most of today’s breaches are the result of years, sometimes decades, of shoddy security coming back to haunt companies. Nowadays, every company has to be a security company, whether it’s a bank, a toymaker, or a single app developer.

Companies can start off small: tell people how to contact them with security flaws, roll out a bug bounty to encourage bug submissions, and grant good-faith researchers safe harbor by promising not to sue. Startup founders can also fill their executive suite with a chief security officer from the very beginning. They’d be better off than the 95 percent of the world’s richest companies that haven’t even bothered.

But this isn’t what happens. Instead, companies would rather just pay the fines.

Target paid $18.5 million for a data breach that ensnared 41 million credit cards, compared to full-year revenues of $72 billion. Anthem paid $115 million in fines after a data breach put 79 million insurance holders’ data at risk, against revenues that year of $79 billion. And, remember Equifax? The biggest breach of 2017 led to all talk but no action.

With no incentive to change, companies will continue to parrot their usual hollow remarks. Instead, they should do something about it.

UK parliament calls for antitrust, data abuse probe of Facebook

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage, and what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users to build voter profiles in an attempt to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’, when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report.

Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data was misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy of platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is addressed to social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins, MP and chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee calls this an example of “a profound failure” of internal governance, and also brands it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

OpenAI built a text generator so good, it’s considered too dangerous to release

A storm is brewing over a new language model, built by non-profit artificial intelligence research company OpenAI, which it says is so good at generating convincing, well-written text that it’s worried about potential abuse.

That’s angered some in the community, who have accused the company of reneging on a promise not to close off its research.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was the system generating text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version, producing longer text with greater coherence.
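“Trained to predict the next word” means the model is sampled autoregressively: draw the next word from a distribution conditioned on the text so far, append it, and repeat. Here is a toy sketch of that sampling loop using a bigram model over a tiny corpus — it illustrates the procedure only, and bears no resemblance to GPT-2’s transformer or its 40 gigabytes of training data.

```python
# Toy autoregressive sampler: a bigram "language model" trained on a tiny
# corpus, sampled the same way large models are — one next word at a time.
import random
from collections import defaultdict

corpus = "recycling is good for the world and good for the economy".split()

# "Training": record which words follow which in the corpus.
next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(prompt: str, length: int = 10, seed: int = 0) -> str:
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:                    # dead end: no observed continuation
            break
        words.append(random.choice(candidates))  # sample the next word
    return " ".join(words)

print(generate("recycling is"))
```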

But with every good application of the system, such as bots capable of better dialog and better speech recognition, the non-profit found several more, like generating fake news, impersonating people, or automating abusive or spam comments on social media.

To wit: when GPT-2 was tasked with writing a response to the prompt, “Recycling is good for the world,” which nearly everyone agrees with, the machine spat back:

“Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.”

No wonder OpenAI was worried about releasing it.

For that reason, OpenAI said, it’s only releasing a smaller version of the language model, citing its charter, which noted that the organization expects that “safety and security concerns will reduce our traditional publishing in the future.” Admittedly, the organization said it wasn’t sure of the decision: “we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.”

Not everyone took that well. OpenAI’s tweet announcing GPT-2 was met with anger and frustration, with critics accusing the company of “closing off” its research and doing the “opposite of open,” seizing on the company’s name.

Others were more forgiving, calling the move a “new bar for ethics” for thinking ahead of possible abuses.

Jack Clark, policy director at OpenAI, said the organization’s priority is “not enabling malicious or abusive uses of the technology,” calling it a “very tough balancing act for us.”

Elon Musk, one of the initial funders of OpenAI, was roped into the controversy, confirming in a tweet that he has not been involved with the company “for over a year,” and that he and the company parted “on good terms.”

OpenAI said it has not settled on a final decision about GPT-2’s release, and that it will revisit the question in six months. In the meantime, the company said that governments “should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems.”

Just this week, President Trump signed an executive order on artificial intelligence. It comes months after the U.S. intelligence community warned that artificial intelligence was one of the many “emerging threats” to U.S. national security, along with quantum computing and autonomous unmanned vehicles.

ClassPass, Gfycat, StreetEasy hit in latest round of mass site hacks

In just a week, a single seller put close to 750 million records from 24 hacked sites up for sale. Now, the hacker has struck again.

The hacker, whose identity isn’t known, began listing user data from several major websites — including MyFitnessPal, 500px and Coffee Meets Bagel, and more recently Houzz and Roll20 — earlier this week. This weekend, the hacker added a third round of data breaches — another eight sites, amounting to another 91 million user records — to their dark web marketplace.

To date, the hacker has revealed breaches at 32 companies, totaling about 841 million records.

According to the latest listings, the sites include 20 million accounts from Legendas.tv, OneBip, Storybird, and Jobandtalent, as well as eight million accounts at Gfycat, 1.5 million ClassPass accounts, 60 million Pizap accounts, and another one million StreetEasy property searching accounts.

The hacker is selling the eight additional hacked sites for 2.6 bitcoin, or about $9,350.

From the samples that TechCrunch has seen, the accounts include some variations of usernames and email addresses, names, locations by country and region, account creation dates, passwords hashed in various formats, and other account information.

We haven’t found any financial data in the samples.

Little is known about the hacker, and it remains unclear exactly how these sites were hacked.

Ariel Ainhoren, research team leader at Israeli security firm IntSights, told TechCrunch this week that the hacker was likely using the same exploit to target each of the sites and dump the backend databases.

“As most of these sites were not known breaches, it seems we’re dealing here with a hacker that did the hacks by himself, and not just someone who obtained it from somewhere else and now just resold it,” said Ainhoren. The team behind PostgreSQL, the open-source database software in question, said it was “currently unaware of any patched or unpatched vulnerabilities” that could have caused the breaches.

We contacted several of the companies prior to publication. Only Gfycat responded, saying it was launching an investigation. We’ll update when we hear more.

Marriott now lets you check if you’re a victim of the Starwood hack

Hotel chain giant Marriott will now let you check if you’re a victim of the Starwood hack.

The company confirmed to TechCrunch that it has put in place “a mechanism to enable guests to look up individual passport numbers to see if they were included in the set of unencrypted passport numbers.” That follows a statement last month from the company confirming that five million unencrypted passport numbers were stolen in the data breach last year.

The checker, hosted by security firm OneTrust, will ask for some personal information, like your name and email address, as well as the last six digits of your passport number.

Marriott says data on “fewer than 383 million unique guests” was stolen in the data breach, revealed in November, including guest names, postal addresses, phone numbers, dates of birth, genders, email addresses and reservation information. It later transpired that more than 20 million encrypted passport numbers were also stolen, along with 8.6 million unique payment card numbers. Marriott said only 354,000 cards were active and unexpired as of September 2018.

Opening up the checker to the wider public is a bright spot in what has been a fairly atrocious incident recovery by Marriott since the breach. The company’s initial response was plagued with so many hiccups and missteps that security experts stepped in to fill the gaps at their own expense.

The checker won’t kick back a result straight away — you’ll have to wait for a response — and Marriott doesn’t say how long that’ll take. There is a certain irony in having to turn over your own data — not least to a third party — to be told if you’re a victim of a breach. It’s literally the last thing any breach victim wants to do: hand over even more of their personal information. But that’s the world we’re living in, and everything is terrible.

Use the checker at your own risk.