Facebook suspends analytics firm Crimson Hexagon over data use concerns

As part of its ongoing mission to close the barn doors after the cows have got out, Facebook has suspended the accounts of British data analytics firm Crimson Hexagon over concerns that it may be improperly handling user data.

The ominously named company has for years used official APIs to siphon public posts from Facebook, Instagram, Twitter and other sources online, collating and analyzing them for various purposes, such as gauging public opinion on a political candidate or issue. It has clients around the world, serving Russia and Turkey as well as the U.S. and the United Kingdom.

Facebook, it seems, was not fully aware of the extent of Crimson Hexagon’s use of user data, however, including in several government contracts which it didn’t have the opportunity to evaluate before they took effect. The possibility that the company is not complying with its data use rules, specifically that it may have been helping build surveillance tools, was apparently real enough for Facebook to take action. Perhaps the bar for suspension has been lowered somewhat over the last year, and with good reason.

“We are investigating the claims about Crimson Hexagon to see if they violated any of our policies,” said Facebook VP of Product Partnerships Ime Archibong in a statement.

The Wall Street Journal, which first reported the suspension, noted that Crimson Hexagon currently has a contract with FEMA to monitor online discussion for various disaster-related purposes, but a deal with ICE fell through because Twitter resisted this application of its “firehose” data.

However, beyond the suggestion that the company has undertaken work that skirts the edge of what the social media companies consider appropriate use of public data, Crimson Hexagon doesn’t seem to have done anything as egregious as the wholesale network collection done by others. It restricts itself to publicly available data that it pays to access, and applies its own methods to produce its own brand of insight and intelligence.

The company also isn’t (at least, not obviously) a quasi-independent arm of a big, shady network of companies working actively to obscure their connections and deals, as Cambridge Analytica was. Crimson Hexagon is more above board, with ordinary venture investment and partnerships. Its work is in a way similar to CA’s, in that it gleans insights of a perhaps troublingly specific nature from billions of public posts, but it at least does so in full view.

As before, the onus of responsibility falls as much on Facebook to enforce its rules as it does on partners to handle user data scrupulously. It’s hardly good data custodianship for Facebook to let companies take what they need under a handshake agreement that they’ll do no evil, and then take them to task years later when the damage has already been done. But that seems to be the company’s main priority now: To reiterate the folksy metaphor from above, it is frantically counting the cows that have bolted while apologizing for having left the door open for the last decade or so.

Incidentally, Crimson Hexagon was co-founded by the same person who was put in charge of Facebook’s new social science initiative: Harvard’s Gary King. In a statement, he denied any involvement in Crimson Hexagon’s everyday work, although he remains its chairman. No doubt this connection will receive a bit of scrutiny on Facebook’s side as well.

Healthcare data breach in Singapore affected 1.5M patients, targeted the prime minister

In what’s believed to be the biggest data breach in Singapore’s history, 1.5 million members of the country’s largest healthcare group have had their personal data compromised.

The breach affected SingHealth, Singapore’s biggest network of healthcare facilities. Data obtained in the breach includes names, addresses, gender, race, date of birth and patients’ national identification numbers. Around 160,000 of the 1.5 million patients also had their outpatient medical information accessed by unauthorized individuals. All patients affected by the hack had visited SingHealth clinics between May 1, 2015 and July 4, 2018, Singapore newspaper The Straits Times reports.

“Investigations by the Cyber Security Agency of Singapore (CSA) and the Integrated Health Information System confirmed that this was a deliberate, targeted and well-planned cyberattack,” a press release from Singapore’s Ministry of Health stated. “It was not the work of casual hackers or criminal gangs.”

The hackers appear to have accessed the sensitive data by compromising a single SingHealth workstation with malware and were then able to obtain privileged account credentials with which they accessed the patient database. The breach was first noticed on July 4 and a police report was filed on July 12.

During a press conference, investigating authorities disclosed that Singapore Prime Minister Lee Hsien Loong was “specifically and repeatedly targeted.”

The Prime Minister elaborated on the incident in a post on his Facebook page on Friday, July 20, 2018: “SingHealth’s database has experienced a major cyber-attack. 1.5 million patients have had their personal particulars…”

It’s official: Brexit campaign broke the law — with social media’s help

The UK’s Electoral Commission has published the results of a near nine-month-long investigation into Brexit referendum spending and has found that the official Vote Leave campaign broke the law by breaching election campaign spending limits.

Vote Leave broke the law, the Commission found, including by channeling money to a Canadian data firm, AggregateIQ, via undeclared joint working with another Brexit campaign, BeLeave, to fund targeted political advertising on Facebook’s platform.

AggregateIQ remains the subject of a separate joint investigation by privacy watchdogs in Canada and British Columbia.

The Electoral Commission’s investigation found evidence that BeLeave spent more than £675,000 with AggregateIQ under a common arrangement with Vote Leave. Yet the two campaigns had failed to disclose on their referendum spending returns that they had a common plan.

As the designated lead leave campaign, Vote Leave had a £7M spending limit under UK law. But via its joint spending with BeLeave the Commission determined it actually spent £7,449,079 — exceeding the legal spending limit by almost half a million pounds.

The June 2016 referendum in the UK resulted in a narrow 52:48 majority for the UK to leave the European Union. Two years on from the vote, the government has yet to agree a coherent policy strategy to move forward in negotiations with the EU, leaving businesses to suck up ongoing uncertainty and society and citizens to remain riven and divided.

Meanwhile, Facebook — whose platform played a key role in distributing referendum messaging — booked revenue of around $40.7BN in 2017 alone, reporting a full year profit of almost $16BN.

Back in May, long-time leave supporter and MEP, Nigel Farage, told CEO Mark Zuckerberg to his face in the European Parliament that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”.

The Electoral Commission’s investigation focused on funding and spending, and mainly concerned five payments made to AggregateIQ in June 2016 — payments made for campaign services for the EU Referendum — by the three Brexit campaigns it investigated (the third being Veterans for Britain).

Veterans for Britain’s spending return included a donation of £100,000 that was reported as a cash donation received and accepted on 20 May 2016. But the Commission found this was in fact a payment by Vote Leave to AggregateIQ for services provided to Veterans for Britain in the final days of the EU Referendum campaign. The date was also incorrectly reported: It was actually paid by Vote Leave on 29 June 2016.

Although the official Vote Leave campaign’s donation to a third Brexit campaign paid for services provided by AggregateIQ, which was simultaneously providing services to Vote Leave, the Commission did not deem it to constitute joint working, writing: “[T]he evidence we have seen does not support the concern that the services were provided to Veterans for Britain as joint working with Vote Leave.”

It was, however, found to constitute an inaccurate donation report — another offense under the UK’s Political Parties, Elections and Referendums Act 2000.

The report details multiple issues with spending returns across the three campaigns. And the Commission has issued a series of fines to the three Brexit campaigns.

It has also referred two individuals — Vote Leave’s David Alan Halsall and BeLeave’s Darren Grimes — to the UK’s Metropolitan Police Service, which has the power to instigate a criminal investigation.

Early last year the Commission decided not to fully investigate Vote Leave’s spending, but by October, it says, new information had emerged — suggesting “a pattern of action by Vote Leave” — so it revisited the assessment and reopened an investigation in November.

Its report also makes it clear that Vote Leave failed to co-operate with its investigation — including by failing to produce requested information and documents; by failing to provide representatives for interview; by ignoring deadlines to respond to formal investigation notices; and by objecting to the fact of the investigation, including suggesting it would judicially review the opening of the investigation.

Judging by the Commission’s account, Vote Leave seemingly did everything it could to try to thwart and delay the investigation — which is only reporting now, two years on from the Brexit vote and with mere months of negotiating time left before the end of the formal Article 50 exit notification process.

What’s crystal clear from this report is that following money and data trails takes time and painstaking investigation, which — given that, y’know, democracy is at stake — heavily bolsters the case for far more stringent regulations and transparency mechanisms to prevent powerful social media platforms from quietly absorbing politically motivated money and messaging without recognizing any responsibility to disclose the transactions, let alone carry out due diligence on who or what may be funding the political spending.

The political ad transparency measures that Facebook has announced so far come far too late for Brexit — or indeed, for the 2016 US presidential election, when its platform carried and amplified Kremlin-funded divisive messaging which reached the eyeballs of hundreds of millions of US voters.

Last week the UK’s information commissioner, Elizabeth Denham, criticized Facebook for transparency and control failures relating to political ads on its platform, and also announced her intention to fine Facebook the maximum possible for breaches of UK data protection law relating to the Cambridge Analytica scandal, after it emerged that information on as many as 87 million Facebook users was extracted from its platform and passed to a controversial UK political consultancy without most people’s knowledge or consent.

She also published a series of policy recommendations around digital political campaigning — calling for an ethical pause on the use of personal data for political ad targeting, and warning that a troubling lack of transparency about how people’s data is being used risks undermining public trust in democracy.

“Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned.

The Cambridge Analytica Facebook scandal is linked to the Brexit referendum via AggregateIQ — which was also a contractor for Cambridge Analytica, and handled Facebook user information that Cambridge Analytica had improperly obtained after paying a Cambridge University academic to harvest people’s data with a quiz app and use it to create psychometric profiles for ad targeting.

The Electoral Commission says it was approached by Facebook during the Brexit campaign spending investigation with “some information about how Aggregate IQ used its services during the EU Referendum campaign”.

We’ve reached out to Facebook for comment on the report and will update this story with any response.

The Commission states that evidence from Facebook indicates that AggregateIQ used “identical target lists for Vote Leave and BeLeave ads”, although in at least one instance the BeLeave ads “were not run”.

It writes:

BeLeave’s ability to procure services from Aggregate IQ only resulted from the actions of Vote Leave, in providing those donations and arranging a separate donor for BeLeave. While BeLeave may have contributed its own design style and input, the services provided by Aggregate IQ to BeLeave used Vote Leave messaging, at the behest of BeLeave’s campaign director. It also appears to have had the benefit of Vote Leave data and/or data it obtained via online resources set up and provided to it by Vote Leave to target and distribute its campaign material. This is shown by evidence from Facebook that Aggregate IQ used identical target lists for Vote Leave and BeLeave ads, although the BeLeave ads were not run.

“We also asked for copies of the adverts Aggregate IQ placed for BeLeave, and for details of the reports he received from Aggregate IQ on their use. Mr Grimes replied to our questions,” it further notes in the report.

At the height of the referendum campaign — at a crucial moment when Vote Leave had reached its official spending limit — officials from the official leave campaign persuaded BeLeave’s only other donor, an individual called Anthony Clake, to allow the campaign to funnel a donation from him directly to AggregateIQ, whom Vote Leave campaign director Dominic Cummings dubbed a bunch of “social media ninjas”.

The Commission writes:

On 11 June 2016 Mr Cummings wrote to Mr Clake saying that Vote Leave had all the money it could spend, and suggesting the following: “However, there is another organisation that could spend your money. Would you be willing to spend the 100k to some social media ninjas who could usefully spend it on behalf of this organisation? I am very confident it would be well spent in the final crucial 5 days. Obviously it would be entirely legal. (sic)”

Mr Clake asked about this organisation. Mr Cummings replied as follows: “the social media ninjas are based in canada – they are extremely good. You would send your money directly to them. the organisation that would legally register the donation is a permitted participant called BeLeave, a “young people’s organisation”. happy to talk it through on the phone though in principle nothing is required from you but to wire money to a bank account if you’re happy to take my word for it. (sic)”

Mr Clake then emailed Mr Grimes to offer a donation to BeLeave. He specified that this donation would be made “via the AIQ account.”

And while the Commission says it found evidence that Grimes and others from BeLeave had “significant input into the look and design of the BeLeave adverts produced by Aggregate IQ”, it also determined that Vote Leave messaging was “influential in their strategy and design” — hence its determination of a common plan between the two campaigns. AggregateIQ was the vehicle used by Vote Leave to breach its campaign spending cap.

Providing examples of the collaboration it found between the two campaigns, the Commission quotes internal BeLeave correspondence — including an instruction from Grimes to: “Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”.

It writes:

On 15 June 2016 Mr Grimes told other BeLeave Board members and Aggregate IQ that BeLeave’s ads needed to be: “an effective way of pushing our more liberal and progressive message to an audience which is perhaps not as receptive to Vote Leave’s messaging.”

On 17 June 2016 Mr Grimes told other BeLeave Board members: “So as soon as we can go live. Advertising should be back on tomorrow and normal operating as of Sunday. I’d like to make sure we have loads of scheduled tweets and Facebook status. Post all of those blogs including Shahmirs [aka Shahmir Sanni; who became a BeLeave whistleblower], use favstar to check out and repost our best performing tweets. Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”

Reminder: Other people’s lives are not fodder for your feeds

#PlaneBae

You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.

The short version of the story attached to the cringeworthy hashtag is this: Earlier this month an individual, called Rosey Blair, spent all the hours of a plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors — publicly gossiping about the lives of two strangers.

Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features, even as an entire privacy-invading narrative was being spun around them without their knowledge.

#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft

And yet our youthful surveillance society started with a far loftier idea associated with it: Citizen journalism.

Once we’re all armed with powerful smartphones and ubiquitously fast Internet there will be no limits to the genuinely important reportage that will flow, we were told.

There will be no way for the powerful to withhold the truth from the people.

At least that was the nirvana we were sold.

What did we get? Something that looks much closer to mass manipulation. A tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools by gaming mindless, ethically nil algorithms.

Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seems, for the most part, to be resulting in a kind of mainstream attention deficit disorder.

Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.

That is certainly important.

But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.

Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.

It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.

But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.

For the platforms it’s far better if you just forget to think.

Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.

That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.

Because it’s polite to ask permission first.

But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.

So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.

Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed on to the users these platforms prey on and feed off — and is then getting beamed out, like radiation, to harm the people around them.

The damage is collective when societal norms are undermined.

#PlaneBae

Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.

The irony is that most of this work is being done for free. Only the platforms are being paid. Though there are some people making a very modern living; the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspirational lifestyle marketing.

Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire — thanks to the largess of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world. Plus all the promotional tools they could ever need.

Just step up to the glass and shoot.

And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coifed tresses, nor sand get in the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…

What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy piggledy into the margins — where they actually sweat and work to deliver the lie of a lifestyle dream.

Because these are also fakes — beautiful fakes, but fakes nonetheless.

We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.

But the problem is that social media is now so powerfully omnipresent that its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And those doing the pulling rarely, if ever, ask for consent.

What about the people who don’t want their lives to be appropriated as digital windowdressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all — neither to personally profit from it, nor to have their privacy trampled by it?

The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.

The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.

#PlaneBae

Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.

Or the YouTuber parents who monetize videos of their kids’ distress.

Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.

Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”

For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.

So they press on their users to think less. And they profit at society’s expense.

It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.

#PlaneBae

Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience — creating order out of events that are often disorderly, random, even chaotic.

The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.

But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.

We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.

We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.

Perspective should not have to come at the expense of other people getting hurt.

This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae — and who has suffered harassment and yet more unwelcome attention as a direct result — gave a statement to Business Insider.

“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”

And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.

ACLU calls for a moratorium on government use of facial recognition technologies

Technology executives are pleading with the government to give them guidance on how to use facial recognition technologies, and now the American Civil Liberties Union is weighing in.

On the heels of a Microsoft statement asking for the federal government to weigh in on the technology, the ACLU has called for a moratorium on the use of the technology by government agencies.

“Congress should take immediate action to put the brakes on this technology with a moratorium on its use, given that it has not been fully debated and its use has never been explicitly authorized,” said Neema Singh Guliani, ACLU legislative counsel, in a statement. “And companies like Microsoft, Amazon, and others should be heeding the calls from the public, employees, and shareholders to stop selling face surveillance technology to governments.”

In May the ACLU released a report on Amazon’s sale of facial recognition technology to different law enforcement agencies. And in June the civil liberties group pressed the company to stop selling the technology. One contract, with the Orlando Police Department, was suspended and then renewed after the uproar.

Meanwhile, Google employees revolted over their company’s work with the government on facial recognition tech… and Microsoft had problems of its own after reports surfaced of the work that the company was doing with the U.S. Immigration and Customs Enforcement agency.

Some organizations are already working to regulate how facial recognition technologies are used. At MIT, Joy Buolamwini has created the Algorithmic Justice League, which is pushing a pledge that companies can commit to as they work on the technology.

That pledge includes commitments to value human life and dignity, including refusing to help develop lethal autonomous weapons or to equip law enforcement with facial analysis products.

As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation

Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.

And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created.

That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification — culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays over users’ selfies.

Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement, Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration) and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.

Smith posits this nightmare scenario:

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).

In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some rails. Late capitalism is weird.

But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.

The fact is, something does, indeed, need to be done.

As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”

All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.

In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtains on the industry’s not-so-secret huge problem.

Facial recognition technologies, used in the identification of suspects, negatively affects people of color. To deny this fact would be a lie.

And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.

There’s really no “nice” way to acknowledge these things.

Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast — and in the case of law enforcement, not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach an individual.

Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.

As Smith himself notes, “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”

While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.

That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.

But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.

“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

Facial recognition startup Kairos acquires EmotionReader

Kairos, the facial recognition technology company focused on brand marketing, has announced the acquisition of EmotionReader.

EmotionReader is a Limerick, Ireland-based startup that uses algorithms to analyze facial expressions around video content. The startup allows brands and marketers to measure viewers’ emotional response to video, analyze that response via an analytics dashboard, and make different decisions around media spend as a result.

The acquisition makes sense considering that Kairos’ core business is focused on facial identification for enterprise clients. Knowing who someone is, paired with how they feel about your content, is a powerful tool for brands and marketers.

The idea for Kairos started when founder Brian Brackeen was making HR time-clocking systems for Apple. People were cheating the system, so he decided to implement facial recognition to ensure that employees were actually clocking in and out when they said they were.

That premise spun out into Kairos, and Brackeen soon realized that facial identification as a service was much more powerful than any niche time clocking service.

But Brackeen is very cautious with the technology Kairos has built.

While Kairos aims to make facial recognition technology (and all the powerful insights that come with it) accessible and available to all businesses, Brackeen has been very clear about the fact that Kairos isn’t interested in selling this technology to government agencies.

Brackeen recently contributed a post right here on TechCrunch outlining the various reasons why governments aren’t ready for this type of technology. Alongside the outstanding invasion of personal privacy, there are also serious issues around bias against people of color.

From the post:

There is no place in America for facial recognition that supports false arrests and murder. In a social climate wracked with protests and angst around disproportionate prison populations and police misconduct, engaging software that is clearly not ready for civil use in law enforcement activities does not serve citizens, and will only lead to further unrest.

As part of the deal, EmotionReader CTO Dr. Stephen Moore will run Kairos’ new Singapore-based R&D center, allowing for upcoming APAC expansion.

Kairos has raised approximately $8 million from investors New World Angels, Kapor Capital, 500 Startups, Backstage Capital, Morgan Stanley, Caerus Ventures, and Florida Institute.

California malls are sharing license plate tracking data with ICE

A chain of California shopping centers is sharing its license plate reader data with a well-known U.S. Immigration and Customs Enforcement (ICE) contractor, giving that agency the ability to track license plate numbers it captures in near real-time.

A report from the Electronic Frontier Foundation revealed that real estate group the Irvine Company shares that data with Vigilant Solutions, a private surveillance tech company that sells automated license plate recognition (ALPR) equipment to law enforcement and government agencies. The Irvine Company owns nearly 50 shopping centers across California with locations in Irvine, La Jolla, Newport Beach, Redwood City, San Jose, Santa Clara and Sunnyvale. ICE finalized its contract with Vigilant Solutions in January of this year.

EFF Investigative Researcher Dave Maass discovered the Irvine Company’s data sharing activities on a page detailing its ALPR policy, a disclosure required by California law. Ironically, while the Irvine Company’s ALPR usage and privacy policy does describe its own practice of deleting the license data it collects once transmitted, it admits that it does in fact transmit all of it straight to Vigilant Solutions, which has no such qualms.

As Vigilant describes, the key offering in its “advanced suite” of license reading tech is unfettered access to a massive trove of license plate data:

“A hallmark of Vigilant’s solution, the ability for agencies to share real-time data nationwide amongst over 1,000 agencies and tap into our exclusive commercial LPR database of over 5 billion vehicle detections, sets our platform apart.”

The Irvine Company is only one example of this kind of data sharing, but it illustrates the ubiquity of the kind of privately owned modern surveillance technology at the fingertips of anyone willing to pay for it. While we’re likely to see more state-level legal challenges to license plate tracking technology, for now the powerful pairing of license plate numbers and location data is mostly fair game for anyone who wants to make money off of collecting and aggregating it.

AI spots legal problems with tech T&Cs in GDPR research project

Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.

The still-in-training privacy policy and contract parsing tool — called ‘Claudette’, aka (automated) clause detector — is being developed by researchers at the European University Institute in Florence.

They’ve also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR‘ project — which specifically applies the tool to evaluate compliance with the EU’s General Data Protection Regulation.

Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.

The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, AirBnB, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.

And also because they are among the biggest online players and — I quote — “should be setting a good example for the market to follow”. Ahem, should.

The AI analysis of the policies was carried out in June, after the update to the EU’s data protection rules had come into force. The regulation tightens requirements on obtaining consent for processing citizens’ personal data by, for example, increasing transparency requirements — basically requiring that privacy policies be written in clear and intelligible language, explaining exactly how the data will be used, in order that people can make a genuine, informed choice to consent (or not consent).

In theory, all 15 parsed privacy policies should have been compliant with GDPR by June, as it came into force on May 25. However some tech giants are already facing legal challenges to their interpretation of ‘consent’. And it’s fair to say the law has not vanquished the tech industry’s fuzzy language and logic overnight. Where user privacy is concerned, old, ugly habits die hard, clearly.

But that’s where BEUC is hoping AI technology can help.

It says that out of a combined 3,659 sentences (80,398 words) Claudette marked 401 sentences (11.0%) as containing unclear language, and 1,240 (33.9%) containing “potentially problematic” clauses or clauses providing “insufficient” information.

BEUC says identified problems include:

  • Not providing all the information which is required under the GDPR’s transparency obligations. “For example companies do not always inform users properly regarding the third parties with whom they share or get data from”
  • Processing of personal data not happening according to GDPR requirements. “For instance, a clause stating that the user agrees to the company’s privacy policy by simply using its website”
  • Policies are formulated using vague and unclear language (i.e. using language qualifiers that really bring the fuzz — such as “may”, “might”, “some”, “often”, and “possible”) — “which makes it very hard for consumers to understand the actual content of the policy and how their data is used in practice”
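
To make that last point concrete, here is a minimal, purely illustrative sketch (emphatically not the researchers’ Claudette classifier, which is a trained machine-learning system rather than a keyword matcher) of how one might flag policy sentences hedged with the kind of vague qualifiers BEUC highlights:

```python
import re

# Vague qualifiers of the sort the BEUC report calls out ("may", "might", "some"...).
# Purely illustrative word list; Claudette itself learns these patterns from annotated policies.
VAGUE_QUALIFIERS = {"may", "might", "some", "often", "possible", "from time to time"}

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter; a real system would use a proper NLP pipeline."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def flag_vague_sentences(policy_text: str) -> list[tuple[str, list[str]]]:
    """Return (sentence, matched qualifiers) for every sentence containing a vague qualifier."""
    flagged = []
    for sentence in split_sentences(policy_text):
        lowered = sentence.lower()
        words = set(re.findall(r"[a-z']+", lowered))
        # Single-word qualifiers are matched against the word set; phrases against the raw text.
        hits = sorted(q for q in VAGUE_QUALIFIERS if q in words or (" " in q and q in lowered))
        if hits:
            flagged.append((sentence, hits))
    return flagged

if __name__ == "__main__":
    # Hypothetical policy snippet, not taken from any of the parsed documents.
    sample = (
        "We may share some of your information with partners. "
        "Your data is stored for as long as necessary. "
        "We might also use your data to improve our services from time to time."
    )
    for sentence, hits in flag_vague_sentences(sample):
        print(f"UNCLEAR ({', '.join(hits)}): {sentence}")
```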

The bolstering of the EU’s privacy rules, with GDPR tightening the consent screw and supersizing penalties for violations, was exactly intended to prevent this kind of stuff. So it’s pretty depressing — though hardly surprising — to see the same, ugly T&C tricks continuing to be used to try to sneak consent by keeping users in the dark.

We reached out to two of the largest tech giants whose policies Claudette parsed — Google and Facebook — to ask if they want to comment on the project or its findings.

A Google spokesperson said: “We have updated our Privacy Policy in line with the requirements of the GDPR, providing more detail on our practices and describing the information that we collect and use, and the controls that users have, in clear and plain language. We’ve also added new graphics and video explanations, structured the Policy so that users can explore it more easily, and embedded controls to allow users to access relevant privacy settings directly.”

At the time of writing Facebook had not responded to our request for comment.

Commenting in a statement, Monique Goyens, BEUC’s director general, said: “A little over a month after the GDPR became applicable, many privacy policies may not meet the standard of the law. This is very concerning. It is key that enforcement authorities take a close look at this.”

The group says it will be sharing the research with EU data protection authorities, including the European Data Protection Board. And is not itself ruling out bringing legal actions against law benders.

But it’s also hopeful that automation will — over the longer term — help civil society keep big tech in legal check.

Although, where this project is concerned, it also notes that the training data-set was small — conceding that Claudette’s results were not 100% accurate — and says more privacy policies would need to be manually analyzed before policy analysis can be fully conducted by machines alone.

So file this one under ‘promising research’.

“This innovative research demonstrates that just as Artificial Intelligence and automated decision-making will be the future for companies from all kinds of sectors, AI can also be used to keep companies in check and ensure people’s rights are respected,” adds Goyens. “We are confident AI will be an asset for consumer groups to monitor the market and ensure infringements do not go unnoticed.

“We expect companies to respect consumers’ privacy and the new data protection rights. In the future, Artificial Intelligence will help identify infringements quickly and on a massive scale, making it easier to start legal actions as a result.”

For more on the AI-fueled future of legal tech, check out our recent interview with Mireille Hildebrandt.

The FBI, FTC and SEC are joining the Justice Department’s inquiries into Facebook’s Cambridge Analytica disclosures

An alphabet soup of federal agencies are now poring over Facebook’s disclosures and the company’s statements about its response to the improper use of its user information by the political consultancy Cambridge Analytica.

The Federal Bureau of Investigation, the Federal Trade Commission and the Securities and Exchange Commission have joined the Justice Department in examining how the personal information of 71 million Americans was distributed by Facebook and used by Cambridge Analytica, according to a Washington Post report released Monday.

According to the Post, the emphasis of the investigation has been on what Facebook disclosed about its information sharing with Cambridge Analytica and whether those disclosures correspond to the timeline that’s being established by government investigators. The fear, for Facebook, is that the government may decide that the company didn’t reveal enough to either investors or the public about the extent of the misuse of user data. Another concern is whether the data sharing with Cambridge Analytica violated the terms of an earlier settlement Facebook made with the Federal Trade Commission.

The redoubled efforts of so many agencies could potentially ensnare Facebook chief executive Mark Zuckerberg, who was brought before Congress with other Facebook officials to testify about the breaches. People familiar with the investigation told the Post that the officials’ testimony was being scrutinized.

In a statement, Facebook noted it had received questions from different agencies and that it was cooperating.

The Federal Trade Commission first confirmed that it was investigating Facebook in March.

Tom Pahl, acting director of the FTC’s Bureau of Consumer Protection, said at the time:

The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices.

The multiple investigations by U.S. and U.K. agencies into the ways in which Cambridge Analytica accessed and exploited data on social media users in political campaigns have already pushed the political consulting firm into bankruptcy.

It’s unlikely (read: impossible) that Facebook would suffer anything like the same fate, and the company’s stock price has already recovered from whatever negative impact the scandal wrought on the social network’s market capitalization. Rather, the lingering investigations show the potential for government regulators (and lawmakers) to involve themselves in the company’s operations.

As with everything else in Washington, it’s always the cover-up — never the crime.