WhatsApp raises minimum age to 16 in Europe ahead of GDPR

Tech giants are busy updating their T&Cs ahead of the EU’s incoming data protection framework, GDPR. Which is why, for instance, Facebook-owned Instagram is suddenly offering a data download tool. You can thank European lawmakers for being able to take your data off that platform.

Facebook-owned WhatsApp is also making a pretty big change as a result of GDPR — noting in its FAQs that it’s raising the minimum age for users of the messaging platform to 16 across the “European Region”. This includes both EU and non-EU countries (such as Switzerland), as well as the in-the-process-of-Brexiting UK (which is set to leave the EU next year).

In the US, the minimum age for WhatsApp usage remains 13.

Where teens are concerned, GDPR introduces a new provision on children’s personal data — setting a 16-year-old age limit on kids being able to consent to their data being processed — although it does allow some wiggle room for individual countries to write a lower age limit into their laws, down to a hard floor of 13.

WhatsApp isn’t bothering to try to vary the age gate depending on limits individual EU countries have set, though. Presumably to reduce the complexity of complying with the new rules.

But also likely because it’s confident WhatsApp-loving teens won’t have any trouble circumventing the new minimum age limit — and therefore that there’s no real risk to its business.

Certainly it’s unclear whether WhatsApp and its parent Facebook will do anything at all to enforce the age limit — beyond asking users to state they are at least 16 (and taking them at their word). So in practice, while on paper the 16-year-old minimum seems like a big deal, the change may do very little to protect teens from being data-mined by the ad giant.

We’ve asked WhatsApp whether it will cross-check users’ accounts with Facebook accounts and data holdings to try to verify a teen really is 16, for example, but nothing in its FAQ on the topic suggests it plans to carry out any active enforcement at all — instead it merely notes:

  • Creating an account with false information is a violation of our Terms
  • Registering an account on behalf of someone who is underage is also a violation of our Terms

Ergo, that does sound very much like a buck being passed. And it will likely be up to parents to try to actively enforce the limit — by reporting their own underage WhatsApp-using kids to the company (which would then have to close the account). Clearly few parents would relish the prospect of doing that.

Yet Facebook does already share plenty of data between WhatsApp and its other companies for all sorts of self-serving, business-enhancing purposes — including, as it couches it, “to ensure safety and security”. So it’s hardly short of data to carry out some age checks of its own and proactively enforce the limit.

One curious difference: Facebook’s approach to teen usage of WhatsApp is notably distinct from the one it’s taking with teens on its main social platform, whose T&Cs it is also reworking ahead of GDPR.

Under the new terms there, Facebook users between the ages of 13 and 15 will need to get parental permission to be targeted with ads or to share sensitive info on Facebook.

But again, as my TC colleague Josh Constine pointed out, the parental consent system Facebook has concocted is laughably easy for teens to circumvent — merely requiring they select one of their Facebook friends or just enter an email address (which could literally be an alternative email address they themselves control). That entirely unverified entity is then asked to give ‘consent’ for their ‘child’ to share sensitive info. So, basically, a total joke.

As we’ve said before, Facebook’s approach to GDPR ‘compliance’ is at best described as ‘doing the minimum possible’. And data protection experts say legal challenges are inevitable.

Also in Europe, Facebook has previously been forced, via regulatory intervention, to give up one portion of the data sharing between its platforms — specifically for ad targeting purposes. However its WhatsApp T&Cs also suggest it is confident it will find a way around that in future, as it writes it “will only do so when we reach an understanding with the Irish Data Protection Commissioner on a future mechanism to enable such use” — i.e. when, not if.

Last month it also signed an undertaking with the DPC on this point, related to GDPR compliance — so again it appears to have some kind of regulatory-workaround ‘mechanism’ in the works.

Kogan: ‘I don’t think Facebook has a developer policy that is valid’

A Cambridge University academic at the center of a data misuse scandal involving Facebook user data and political ad targeting faced questions from the UK parliament this morning.

Although the two-hour evidence session in front of the DCMS committee’s fake news enquiry raised rather more questions than it answered — with professor Aleksandr Kogan citing an NDA he said he had signed with Facebook to decline to answer some of the committee’s questions (including why and when exactly the NDA was signed).

TechCrunch understands the NDA relates to standard confidentiality provisions regarding deletion certifications and other commitments made by Kogan to Facebook not to misuse user data — after the company learned he had passed user data to SCL in contravention of its developer terms.

Asked why he had a non-disclosure agreement with Facebook, Kogan told the committee it would have to ask Facebook. He also declined to say whether any of his company co-directors (one of whom now works for Facebook) had been asked to sign an NDA. Nor would he specify whether the NDA had been signed in the US.

Asked whether he had deleted all the Facebook data and derivatives he had been able to acquire Kogan said yes “to the best of his knowledge”, though he also said he’s currently conducting a review to make sure nothing has been overlooked.

A few times during the session Kogan made a point of arguing that data audits are essentially useless for catching bad actors — claiming that anyone who wants to misuse data can simply put a copy on a hard drive and “store it under the mattress”.

(Incidentally, the UK’s data protection watchdog is conducting just such an audit of Cambridge Analytica right now, after obtaining a warrant to enter its London offices last month — as part of an ongoing, year-long investigation into social media data being used for political ad targeting.)

Your company didn’t hide any data in that way, did it, a committee member asked Kogan. “We didn’t,” he rejoined.

“This has been a very painful experience because when I entered into all of this Facebook was a close ally. And I was thinking this would be helpful to my academic career. And my relationship with Facebook. It has, very clearly, done the complete opposite,” Kogan continued.  “I had no interest in becoming an enemy or being antagonized by one of the biggest companies in the world that could — even if it’s frivolous — sue me into oblivion. So we acted entirely as they requested.”

Despite apparently lamenting the breakdown in his relations with Facebook — telling the committee how he had worked with the company, in an academic capacity, prior to setting up a company to work with SCL/CA — Kogan refused to accept that he had broken Facebook’s terms of service — instead asserting: “I don’t think they have a developer policy that is valid… For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy.”

“I just don’t believe that’s their policy,” he repeated when pressed on whether he had broken Facebook’s ToS. “If somebody has a document that isn’t their policy you can’t break something that isn’t really your policy. I would agree my actions were inconsistent with the language of this document — but that’s slightly different from what I think you’re asking.”

“You should be a professor of semantics,” quipped the committee member who had been asking the questions.

A Facebook spokesperson told us it had no public comment to make on Kogan’s testimony. But last month CEO Mark Zuckerberg couched the academic’s actions as a “breach of trust” — describing the behavior of his app as “abusive”.

In evidence to the committee today, Kogan told it he had only become aware of an “inconsistency” between Facebook’s developer terms of service and what his company did in March 2015 — when he said he began to suspect the veracity of the advice he had received from SCL. At that point Kogan said GSR reached out to an IP lawyer “and got some guidance”.

(More specifically he said he became suspicious because former SCL employee Chris Wylie did not honor a contract between GSR and Eunoia, a company Wylie set up after leaving SCL, to exchange data-sets; Kogan said GSR gave Wylie the full raw Facebook data-set but Wylie did not provide any data to GSR.)

“Up to that point I don’t believe I was even aware or looked at the developer policy. Because prior to that point — and I know that seems shocking and surprising… the experience of a developer in Facebook is very much like the experience of a user in Facebook. When you sign up there’s this small print that’s easy to miss,” he claimed.

“When I made my app initially I was just an academic researcher. There was no company involved yet. And then when we commercialized it — so we changed the app — it was just something I completely missed. I didn’t have any legal resources, I relied on SCL [to provide me with guidance on what was appropriate]. That was my mistake.”

“Why I think this is still not Facebook’s policy is that we were advised [by an IP lawyer] that Facebook’s terms for users and developers are inconsistent. And that it’s not actually a defensible position for Facebook that this is their policy,” Kogan continued. “This is the remarkable thing about the experience of an app developer on Facebook. You can change the name, you can change the description, you can change the terms of service — and you just save changes. There’s no obvious review process.

“We had a terms of service linked to the Facebook platform that said we could transfer and sell data for at least a year and a half — nothing was ever mentioned. It was only in the wake of the Guardian article [in December 2015] that they came knocking.”

Kogan also described the work he and his company had done for SCL Elections as essentially worthless — arguing that using psychometrically modeled Facebook data for political ad targeting in the way SCL/CA had apparently sought to do was “incompetent” because they could have used Facebook’s own ad targeting platform to achieve greater reach and with more granular targeting.

“It’s all about the use-case. I was very surprised to learn that what they wanted to do is run Facebook ads,” he said. “This was not mentioned, they just wanted a way to measure personality for many people. But if the use-case you have is Facebook ads it’s just incompetent to do it this way.

“Taking this data-set you’re going to be able to target 15% of the population. And use a very small segment of the Facebook data — page likes — to try to build personality models. Why do this when you could very easily go target 100% and use much more of the data. It just doesn’t make sense.”

Asked what, then, was the value of the project he undertook for SCL, Kogan responded: “Given what we know now, nothing. Literally nothing.”

He repeated his prior claim that he was not aware that work he was providing for SCL Elections would be used for targeting political ads, though he confirmed he knew the project was focused on the US and related to elections.

He also said he knew the work was being done for the Republican party — but claimed not to know which specific candidates were involved.

Pressed by one committee member on why he didn’t care to know which politicians he was indirectly working for, Kogan responded by saying he doesn’t have strong personal views on US politics or politicians generally — beyond believing that most US politicians are at least reasonable in their policy positions.

“My personal position on life is unless I have a lot of evidence I don’t know. Is the answer. It’s a good lesson to learn from science — where typically we just don’t know. In terms of politics in particular I rarely have a strong position on a candidate,” said Kogan, adding that therefore he “didn’t bother” to make the effort to find out who would ultimately be the beneficiary of his psychometric modeling.

Kogan told the committee his initial intention had not been to set up a business at all but to conduct not-for-profit big data research — via an institute he wanted to establish — claiming it was Wylie who had advised him to also set up the for-profit entity, GSR, through which he went on to engage with SCL Elections/CA.

“The initial plan was we collect the data, I fulfill my obligations to SCL, and then I would go and use the data for research,” he said.

And while Kogan maintained he had never drawn a salary from the work he did for SCL — saying his reward was “to keep the data”, and get to use it for academic research — he confirmed SCL did pay GSR £230,000 at one point during the project; a portion of which he also said eventually went to pay lawyers he engaged “in the wake” of Facebook becoming aware that data had been passed to SCL/CA by Kogan — when it contacted him to ask him to delete the data (and presumably also to get him to sign the NDA).

In one curious moment, Kogan claimed not to know his own company had been registered at 29 Harley Street in London — which the committee noted is “used by a lot of shell companies some of which have been used for money laundering by Russian oligarchs”.

Seeming a little flustered he said initially he had registered the company at his apartment in Cambridge, and later “I think we moved it to an innovation center in Cambridge and then later Manchester”.

“I’m actually surprised. I’m totally surprised by this,” he added.

Did you use an agent to set it up, asked one committee member. “We used Formations House,” replied Kogan, referring to a company whose website states it can locate a business’ trading address “in the heart of central London” — in exchange for a small fee.

“I’m legitimately surprised by that,” added Kogan of the Harley Street address. “I’m unfortunately not a Russian oligarch.”

Later in the session another odd moment came when he was being asked about his relationship with Saint Petersburg University in Russia — where he confirmed he had given talks and workshops, after traveling to the country with friends and proactively getting in touch with the university “to say hi” — and specifically about some Russian government-funded research being conducted by researchers there into cyberbullying.

Committee chair Collins implied to Kogan the Russian state could have had a specific malicious interest in such a piece of research, and wondered whether Kogan had thought about that in relation to the interactions he’d had with the university and the researchers.

Kogan described it as a “big leap” to connect the piece of research to Kremlin efforts to use online platforms to interfere in foreign elections — before essentially going on to repeat a Kremlin talking point by saying the US and the UK engage in much the same types of behavior.

“You can make the same argument about the UK government funding anything or the US government funding anything,” he told the committee. “Both countries are very famous for their spies.

“There’s a long history of the US interfering with foreign elections and doing the exact same thing [creating bot networks and using trolls for online intimidation].”

“Are you saying it’s equivalent?” pressed Collins. “That the work of the Russian government is equivalent to the US government and you couldn’t really distinguish between the two?”

“In general I would say the governments that are most high profile I am dubious about the moral scruples of their activities through the long history of UK, US and Russia,” responded Kogan. “Trying to equate them I think is a bit of a silly process. But I think certainly all these countries have engaged in activities that people feel uncomfortable with or are covert. And then to try to link academic work that’s basic science to that — if you’re going to down the Russia line I think we have to go down the UK line and the US line in the same way.

“I understand Russia is a hot-button topic right now but outside of that… Most people in Russia are like most people in the UK. They’re not involved in spycraft, they’re just living lives.”

“I’m not aware of UK government agencies that have been interfering in foreign elections,” added Collins.

“Doesn’t mean it’s not happened,” replied Kogan. “Could be just better at it.”

During Wylie’s evidence to the committee last month the former SCL data scientist had implied there could have been a risk of the Facebook data falling into the hands of the Russian state as a result of Kogan’s back and forth travel to the region. But Kogan rebutted this idea — saying the data had never been in his physical possession when he traveled to Russia, pointing out it was stored in a cloud hosting service in the US.

“If you want to try to hack Amazon Web Services good luck,” he added.

He also claimed not to have read the piece of research in question, even though he said he thought the researcher had emailed the paper to him — claiming he can’t read Russian well.

Kogan seemed most comfortable during the session when he was laying into Facebook’s platform policies — perhaps unsurprisingly, given how the company has sought to paint him as a rogue actor who abused its systems by creating an app that harvested data on up to 87 million Facebook users and then handing information on its users off to third parties.

Asked whether he thought a prior answer given to the committee by Facebook — when it claimed it had not provided any user data to third parties — was correct, Kogan said no given the company provides academics with “macro level” user data (including providing him with this type of data, in 2013).

He was also asked why he thinks Facebook lets its employees collaborate with external researchers — and Kogan suggested this is “tolerated” by management as a strategy to keep employees stimulated.

Committee chair Collins asked whether he thought it was odd that Facebook now employs his former co-director at GSR, Joseph Chancellor — who works in its research division — despite Chancellor having worked for a company Facebook has said it regards as having violated its platform policies.

“Honestly I don’t think it’s odd,” said Kogan. “The reason I don’t think it’s odd is because in my view Facebook’s comments are PR crisis mode. I don’t believe they actually think these things — because I think they realize that their platform has been mined, left and right, by thousands of others.

“And I was just the unlucky person that ended up somehow linked to the Trump campaign. And we are where we are. I think they realize all this but PR is PR and they were trying to manage the crisis and it’s convenient to point the finger at a single entity and try to paint the picture this is a rogue agent.”

At another moment during the evidence session Kogan was also asked to respond to denials previously given to the committee by former CEO of Cambridge Analytica Alexander Nix — who had claimed that none of the data it used came from GSR and — even more specifically — that GSR had never supplied it with “data-sets or information”.

“Fabrication,” responded Kogan. “Total fabrication.”

“We certainly gave them [SCL/CA] data. That’s indisputable,” he added.

In written testimony to the committee he also explained that he in fact created three apps for gathering Facebook user data. The first one — called the CPW Lab app — was developed after he had begun a collaboration with Facebook in early 2013, as part of his academic studies. Kogan says Facebook provided him with user data at this time for his research — although he said these datasets were “macro-level datasets on friendship connections and emoticon usage” rather than information on individual users.

The CPW Lab app was used to gather individual level data to supplement those datasets, according to Kogan’s account. Although he specifies that data collected via this app was housed at the university; used for academic purposes only; and was “not provided to the SCL Group”.

Later, once Kogan had set up GSR and was intending to work on gathering and modeling data for SCL/Cambridge Analytica, the CPW Lab app was renamed to the GSR App and its terms were changed (with the new terms provided by Wylie).

Thousands of people were then recruited to take this survey via a third company — Qualtrics — with Kogan saying SCL directly paid ~$800,000 to it to recruit survey participants, at a cost of around $3-$4 per head (he says between 200,000 and 300,000 people took the survey as a result in the summer of 2014; NB: Facebook doesn’t appear to be able to break out separate downloads for the different apps Kogan ran on its platform — it told us about 305,000 people downloaded “the app”).

In the final part of that year, after data collection had finished for SCL, Kogan said his company revised the GSR App to become an interactive personality quiz — renaming it “thisisyourdigitallife” and leaving the commercial portions of the terms intact.

“The thisisyourdigitallife App was used by only a few hundred individuals and, like the two prior iterations of the application, collected demographic information and data about “likes” for survey participants and their friends whose Facebook privacy settings gave participants access to “likes” and demographic information. Data collected by the thisisyourdigitallife App was not provided to SCL,” he claims in the written testimony.

During the oral hearing, Kogan was pressed on misleading T&Cs in his two commercial apps. Asked by a committee member about the terms of the GSR App not specifying that the data would be used for political targeting, he said he didn’t write the terms himself but added: “If we had to do it again I think I would have insisted to Mr Wylie that we do add politics as a use-case in that doc.”

“It’s misleading,” argued the committee member. “It’s a misrepresentation.”

“I think it’s broad,” Kogan responded. “I think it’s not specific enough. So you’re asking for why didn’t we go outline specific use-cases — because the politics is a specific use-case. I would argue that the politics does fall under there but it’s a specific use-case. I think we should have.”

The committee member also noted how, “in longer, denser paragraphs” within the app’s T&Cs, the legalese does also state that “whatever that primary purpose is you can sell this data for any purposes whatsoever” — making the point that such sweeping terms are unfair.

“Yes,” responded Kogan. “In terms of speaking the truth, the reality is — as you’ve pointed out — very few if any people have read this, just like very few if any people read terms of service. I think that’s a major flaw we have right now. That people just do not read these things. And these things are written this way.”

“Look — fundamentally I made a mistake by not being critical about this. And trusting the advice of another company [SCL]. As you pointed out GSR is my company and I should have gotten better advice, and better guidance on what is and isn’t appropriate,” he added.

“Quite frankly my understanding was this was business as usual and normal practice for companies to write broad terms of service that didn’t provide specific examples,” he said after being pressed on the point again.

“I doubt in Facebook’s user policy it says that users can be advertised for political purposes — it just has broad language to provide for whatever use cases they want. I agree with you this doesn’t seem right, and those changes need to be made.”

At another point, he was asked about the Cambridge University Psychometrics Centre — which he said had initially been involved in discussions between him and SCL to be part of the project but fell out of the arrangement. According to his version of events the Centre had asked for £500,000 for their piece of proposed work, and specifically for modeling the data — which he said SCL didn’t want to pay. So SCL had asked him to take that work on too and remove the Centre from the negotiations.

As a result of that, Kogan said the Centre had complained about him to the university — and SCL had written a letter to it on his behalf defending his actions.

“The mistake the Psychometrics Centre made in the negotiation is that they believed that models are useful, rather than data,” he said. “And actually just not the same. Data’s far more valuable than models because if you have the data it’s very easy to build models — because models use just a few well understood statistical techniques to make them. I was able to go from not doing machine learning to knowing what I need to know in one week. That’s all it took.”

In another exchange during the session, Kogan denied he had been in contact with Facebook in 2014. Wylie previously told the committee he thought Kogan had run into problems with the rate at which the GSR App was able to pull data off Facebook’s platform — and had contacted engineers at the company at the time (though Wylie also caveated his evidence by saying he did not know whether what he’d been told was true).

“This never happened,” said Kogan, adding that there was no dialogue between him and Facebook at that time. “I don’t know any engineers at Facebook.”

Uber to stop storing precise location pick-ups/drop-offs in driver logs

Uber is planning to tweak the historical pick-up and drop-off logs that drivers can see in order to slightly obscure the exact location, rather than planting an exact pin in it (as now). The idea is to provide a modicum more privacy for users while still providing drivers with what look set to remain highly detailed trip logs.

The company told Gizmodo it will initially pilot the change with drivers, but intends the privacy-focused feature to become the default setting “in the coming months”.

Earlier this month Uber also announced a complete redesign of the drivers’ app — making changes it said had been informed by “months” of driver conversations and feedback. It says the pilot of location obfuscation will begin once all drivers have the new app.

The ride-hailing giant appears to be trying to find a compromise between rider safety concerns — there have been reports of Uber drivers stalking riders, for example — and drivers wanting to have precise logs so they can challenge fare disputes.

“Location data is our most sensitive information, and we are doing everything we can do to protect privacy around it,” a spokesperson told us. “The new design provides enough information for drivers to identify past trips for customer support issues or earning disputes without granting them ongoing access to rider addresses.”

In the current version of the pilot — according to screenshots obtained by Gizmodo — the location of the pin has been expanded into a circle, so it’s indicating a shaded area a few meters around a pick-up or drop-off location.
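Uber hasn’t detailed exactly how it blurs the pin, but the general technique — coarsening a precise coordinate so the log only identifies an area rather than a doorstep — is easy to sketch. The grid-snapping approach and ~50-meter radius below are illustrative assumptions on our part, not Uber’s implementation:

```python
import math

def obscure_location(lat: float, lng: float, radius_m: float = 50.0) -> tuple:
    """Snap a precise coordinate to the centre of a coarse grid cell, so any
    point inside the cell maps to the same displayed location.
    Illustrative sketch only -- not Uber's actual method."""
    # Rough metres-per-degree conversion; longitude spacing shrinks with
    # latitude, hence the cosine correction.
    lat_step = radius_m / 111_320.0
    lng_step = radius_m / (111_320.0 * max(math.cos(math.radians(lat)), 1e-6))
    snapped_lat = (math.floor(lat / lat_step) + 0.5) * lat_step
    snapped_lng = (math.floor(lng / lng_step) + 0.5) * lng_step
    return round(snapped_lat, 6), round(snapped_lng, 6)

# Nearby pick-ups that fall in the same cell produce an identical, vaguer point.
print(obscure_location(51.507351, -0.127758))
print(obscure_location(51.507360, -0.127770))
```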

According to Uber the design may still change, as it said it intends to gather driver feedback. We’ve asked if it’s also intending to gather rider feedback on the design.

Asked whether it’s making the change as part of an FTC settlement last year — which followed an investigation into data mishandling, privacy and security complaints dating back to 2014 and 2015 — an Uber spokesman told us: “Not specifically, but user expectations are shifting and we are working to build privacy into the DNA of our products.”

Earlier this month the company agreed to a revised settlement with the FTC, including agreeing that it may be subject to civil penalties if it fails to notify the FTC of future privacy breaches — likely in light of the 2016 data breach affecting 57 million riders and drivers which the company concealed until 2017.

An incoming update to European privacy rules (called GDPR) — which beefs up fines for violations and applies extraterritorially (including, for example, if an EU citizen is using the Uber app on a trip to the U.S.) — also tightens the screw on data protection, giving individuals expanded rights to control their personal information held by a company.

A precise location log would likely be considered personal data that Uber would have to provide to any users requesting their information under GDPR, for example.

Although it’s less clear whether the relatively small amount of obfuscation it’s toying with here would be enough to ensure the location logs are no longer judged as riders’ personal data under the regulation.

Last year the company also ended a controversial feature in which its app had tracked the location of users even after their trip had ended.

Google confirms some of its own services are now getting blocked in Russia over the Telegram ban

A shower of paper airplanes darted through the skies of Moscow and other towns in Russia today, as users answered the call of entrepreneur Pavel Durov to send the blank missives out of their windows at a pre-appointed time in support of Telegram — the messaging app he founded, which uses a paper airplane icon and was blocked last week by Russian regulator Roskomnadzor (RKN). RKN believes the service is violating national laws by failing to provide it with encryption keys to access messages on the service (Telegram has refused to comply).

The paper plane send-off was a small, flashmob turn in a “Digital Resistance” — Durov’s preferred term — that has otherwise largely been played out online: currently, nearly 18 million IP addresses are blocked from being accessed in Russia, all in the name of blocking Telegram.

And in the latest development, Google has now confirmed to us that its own services are also being impacted. From what we understand, Google Search, Gmail and push notifications for Android apps are among the products being affected.

“We are aware of reports that some users in Russia are unable to access some Google products, and are investigating those reports,” said a Google spokesperson in an emailed response. We’d been trying to contact Google all week about the Telegram blockade, and this is the first time that the company has both replied and acknowledged something related to it.

(Amazon has acknowledged our messages but has yet to reply to them.)

Google’s comments come on the heels of RKN itself also announcing today that it had expanded its IP blocks to Google’s services. At its peak, RKN had blocked nearly 19 million IP addresses, with dozens of third-party services that also use Google Cloud and Amazon’s AWS, such as Twitch and Spotify, also getting caught in the crossfire.

Russia is among the countries in the world that have enforced a kind of digital firewall, periodically or permanently blocking certain online content. Some turn to VPNs to access that content anyway, but it turns out that Telegram hasn’t needed to rely on that workaround to get used.

“RKN is embarrassingly bad at blocking Telegram, so most people keep using it without any intermediaries,” said Ilya Andreev, COO and co-founder of Vee Security, which has been providing a proxy service to bypass the ban. Currently, it is supporting up to 2 million users simultaneously, although this is a relatively small proportion considering Telegram has around 14 million users in the country (and, likely, more considering all the free publicity it’s been getting).

As we described earlier this week, the reason so many IP addresses are getting blocked is because Telegram has been using a technique that allows it to “hop” to a new IP address when the one that it’s using is blocked from getting accessed by RKN. It’s a technique that a much smaller app, Zello, had also resorted to using for nearly a year when the RKN announced its own ban.
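Neither Telegram nor Zello has published exactly how the hopping works, but the client-side pattern is familiar: keep a regularly refreshed pool of frontend addresses and move to the next one whenever the current one stops responding. A minimal sketch, with hypothetical endpoints (documentation-range IPs) standing in for a real, push-updated pool:

```python
import socket
from itertools import cycle

# Hypothetical pool of frontend endpoints. In a real "hopping" setup the app
# would receive fresh addresses (via push notifications, DNS, etc.) faster
# than a regulator can blacklist them.
ENDPOINTS = cycle([
    ("203.0.113.10", 443),
    ("203.0.113.11", 443),
    ("198.51.100.20", 443),
])

def connect_with_fallback(attempts: int = 3, timeout: float = 3.0) -> socket.socket:
    """Try endpoints in turn until one accepts a TCP connection."""
    last_error = None
    for _ in range(attempts):
        host, port = next(ENDPOINTS)
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:  # blocked, refused or timed out: hop to the next IP
            last_error = err
    raise ConnectionError(f"all endpoints unreachable: {last_error}")
```

Blocking individual addresses then becomes whack-a-mole for the regulator — hence the escalation to whole subnetworks described below.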

Zello ceased its activities earlier this year when RKN got wise to Zello’s ways and chose to start blocking entire subnetworks of IP addresses to avoid so many hops, and Amazon’s AWS and Google Cloud kindly asked Zello to stop as other services also started to get blocked. So, when Telegram started the same kind of hopping, RKN, in effect, knew just what to do to turn the screws. (And it also took the heat off Zello, which miraculously got restored.)

So far, Telegram’s cloud partners have held strong and have not taken the same route, although getting its own services blocked could see Google’s resolve tested at a new level.

Some believe that one outcome could be the regulator playing out an elaborate game of chicken with Telegram and the rest of the internet companies that are in some way aiding and abetting it, spurred in part by Russia’s larger profile and how such blocks would appear to international audiences.

“Russia can’t keep blocking random things on the Internet,” Andreev said. “Russia is working hard to make its image more alluring to foreigners in preparation for the World Cup,” which is taking place this June and July. “They can’t have tourists coming and realising Google doesn’t work in Russia.”

We’ll update this post and continue to write on further developments as we learn more.

Facebook has auto-enrolled users into a facial recognition test in Europe

Facebook users in Europe are reporting the company has begun testing its controversial facial recognition technology in the region.

Jimmy Nsubuga, a journalist at Metro, is among several European Facebook users who have said they’ve been notified by the company they are in its test bucket.

The company has previously said an opt-in option for facial recognition will be pushed out to all European users next month. It’s hoping to convince Europeans to voluntarily allow it to expand its use of the controversial, privacy-hostile tech — which was turned off in the bloc after regulatory pressure, back in 2012.

Under impending changes to Facebook’s T&Cs — ostensibly to comply with the EU’s incoming GDPR data protection standard — the company has crafted a manipulative consent flow that tries to sell people on giving it their data; including filling in facial recognition blanks in its business by convincing Europeans to agree to it grabbing and using their biometric data. 

Notably Facebook is not offering a voluntary opt-in to Europeans who find themselves in its facial recognition test bucket. Rather people are automatically being made into its lab rats — and have to actively delve into the settings to say no.

In a notification to affected users, the company writes [emphasis ours]: “You control face recognition. This setting is on, but you can turn it off at any time, which applies to features we may add later.”

Not only is the tech turned on, but users who click through to the settings to try and turn it off will also find Facebook attempting to dissuade them from doing that — with manipulative examples of how the tech can “protect” them.

As another Facebook user who found herself enrolled in the test — journalist Jennifer Baker — points out, what it’s doing here is incredibly disingenuous because it’s using fear to try to manipulate people’s choices.

Under the EU’s incoming data protection framework Facebook will not be able to automatically opt users into the tech — it will have to convince people to switch facial recognition features on.

But the experiment it’s running here (without gaining individuals’ upfront consent) looks very much like a form of A/B testing — to see which of its disingenuous examples is best able to convince users to accept what is a highly privacy-hostile technology by voluntarily switching it on.

But given that Facebook controls the entire consent flow, and can rely on big data insights gleaned from its own platform (of 2BN+ users), this is not even remotely a fair fight.

Consent is being manipulated, not freely given. This is big data-powered mass manipulation of human decisions — manipulation that continues until the ‘right’ answer (for Facebook’s business) is ‘selected’ by the user.

Data protection experts we spoke to earlier this week do not believe Facebook’s approach to consent will be legal under GDPR. Legal challenges are certain at this point.

But legal challenges also take time. And in the meanwhile Facebook users will be being manipulated into agreeing with things that align with the company’s data-harvesting interests — and handing over their sensitive personal information without understanding the full implications.

It’s also not clear how many Facebook users are being auto-enrolled into this facial recognition test — we’ve put questions to it and will update this post with any reply.

Last month Facebook said it would be rolling out “a limited test of some of the additional choices we’ll ask people to make as part of GDPR”.

It also said it was “starting by asking only a small percentage of people so that we can be sure everything is working properly”, and further claimed: “[T]he changes we’re testing will let people choose whether to enable facial recognition, which has previously been unavailable in the EU.”

Facebook’s wording in those statements is very interesting — with no number put on how many people will be made into test subjects (though it is very clearly trying to play the experiment down; “limited test”, “small”) — so we simply don’t know how many Europeans are having their facial data processed by Facebook right now, without their upfront consent.

Nor do we know where in Europe all these test subjects are located. But it’s pretty likely the test contravenes even current EU data protection laws. (GDPR applies from May 25.)

Facebook’s description of its testing plan last month was also disingenuous as it implied users would get to choose to enable facial recognition. In fact, it’s just switching it on — saddling test subjects with the effort of opting out.

The company was likely hoping the test would not attract too much attention — given how much GDPR news is flowing through its PR channels, and how much attention the topic is generally sucking up — and we can see why now: it’s essentially reversed its 2012 decision to switch off facial recognition in Europe (made after the feature attracted so much blow-back), to grab as much data as it can while it can.

Millions of Europeans could be having their fundamental rights trampled on here, yet again. We just don’t know what the company actually means by “small”. (The EU has ~500M inhabitants — even 1%, a “small percentage”, of that would involve millions of people… )

Once again Facebook isn’t telling how many people it’s experimenting on.

LinkedIn’s AutoFill plugin could leak user data, secret fix failed

Facebook isn’t the only one in the hot seat over data privacy. A flaw in LinkedIn’s AutoFill plugin that websites use to let you quickly complete forms could have allowed hackers to steal your full name, phone number, email address, ZIP code, company, and job title. Malicious sites have been able to invisibly render the plugin on their entire page so if users who are logged into LinkedIn click anywhere, they’d effectively be hitting a hidden “AutoFill with LinkedIn” button and giving up their data.

Researcher Jack Cable discovered the issue on April 9th, 2018 and immediately disclosed it to LinkedIn. The company issued a fix on April 10th but didn’t inform the public of the issue. Cable quickly informed LinkedIn that its fix — which restricted the use of its AutoFill feature to whitelisted sites that pay LinkedIn to host their ads — still left it open to abuse. If any of those sites have cross-site scripting vulnerabilities, which Cable confirmed some do, hackers can still run AutoFill on their sites by embedding an iframe of the vulnerable whitelisted site. Cable got no response from LinkedIn over the following nine days, so he reached out to TechCrunch.

LinkedIn’s AutoFill tool

LinkedIn tells TechCrunch it doesn’t have evidence that the weakness was exploited to gather user data. But Cable says “it is entirely possible that a company has been abusing this without LinkedIn’s knowledge, as it wouldn’t send any red flags to LinkedIn’s servers.”

I demoed the security fail on a site Cable set up. It was able to show me my LinkedIn sign-up email address with a single click anywhere on the page, without me ever knowing I was interacting with an exploited version of LinkedIn’s plugin. Even if users have configured their LinkedIn privacy settings to hide their email, phone number, or other info, it can still be pulled in from the AutoFill plugin.

“It seems like LinkedIn accepts the risk of whitelisted websites (and it is a part of their business model), yet this is a major security concern” Cable wrote to TechCrunch. [Update: He’s now posted a detailed write-up of the issue.]

A LinkedIn spokesperson issued this statement to TechCrunch, saying it’s planning to roll out a more comprehensive fix shortly:

“We immediately prevented unauthorized use of this feature, once we were made aware of the issue. We are now pushing another fix that will address potential additional abuse cases and it will be in place shortly. While we’ve seen no signs of abuse, we’re constantly working to ensure our members’ data stays protected. We appreciate the researcher responsibly reporting this and our security team will continue to stay in touch with them.

For clarity, LinkedIn AutoFill is not broadly available and only works on whitelisted domains for approved advertisers. It allows visitors to a website to choose to pre-populate a form with information from their LinkedIn profile.”

Facebook has recently endured heavy scrutiny regarding data privacy and security, and just yesterday confirmed it was investigating an issue with unauthorized JavaScript trackers pulling in user info from sites using Login With Facebook.

But Cable’s findings demonstrate that other tech giants deserve increased scrutiny too. In an effort to colonize the web with their buttons and gather more data about their users, sites like LinkedIn have played fast and loose with people’s personally identifiable information.

The research shows how relying on whitelists of third-party sites doesn’t always solve a problem. All it takes is for one of those sites to have its own security flaw, and a bigger vulnerability can be preyed upon. Over 70 of the world’s top websites were on LinkedIn’s whitelist, including Twitter, Stanford, Salesforce, Edelman, and Twilio. OpenBugBounty shows the prevalence of cross-site scripting problems. These “XSS” vulnerabilities accounted for 84% of the security flaws documented by Symantec in 2007, and bug bounty service HackerOne still describes XSS as a massive issue to this day.
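To see why a whitelist on its own is thin protection, here’s a conceptual sketch (ours, not LinkedIn’s code) of a widget served only to approved embedding origins. The check holds right up until one approved origin has an XSS hole:

```python
# Conceptual sketch of origin whitelisting for a data-exposing widget.
# Domains are made up for illustration.
WHITELIST = {"ads.example-publisher.com", "shop.example-partner.com"}

def may_serve_autofill(embedding_origin: str) -> bool:
    """Naive gate: only serve the widget to approved embedding pages."""
    return embedding_origin in WHITELIST

# The gap: if shop.example-partner.com has a cross-site scripting flaw, an
# attacker can inject script into that page (or iframe it from their own site).
# The widget request still arrives from a whitelisted origin, so the gate
# passes -- and the injected script can stretch the widget over the whole page,
# harvest the auto-filled fields and forward them to the attacker.
assert may_serve_autofill("shop.example-partner.com")  # passes even if compromised
assert not may_serve_autofill("evil.example.net")      # the only case it stops
```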

With all eyes on security, tech companies may need to become more responsive to researchers pointing out flaws. While LinkedIn initially moved quickly, its attention to the issue lapsed while only a broken fix was in place. Meanwhile, government officials considering regulation should focus on strengthening disclosure requirements for companies that discover breaches or vulnerabilities. If they know they’ll have to embarrass themselves by informing the public about their security flaws, they might work harder to keep everything locked tight.

Can data science save social media?

The unfettered internet is too often used for malicious purposes and is frequently woefully inaccurate. Social media — especially Facebook — has failed miserably at protecting user privacy and blocking miscreants from sowing discord.

That’s why CEO Mark Zuckerberg was just forced to testify about user privacy before both houses of Congress. And now governmental regulation of Facebook and other social media appears to be a fait accompli.

At this key juncture, the crucial question is whether regulation — in concert with Facebook’s promises to aggressively mitigate its weaknesses — can correct the privacy abuses while continuing to fulfill Facebook’s goal of giving people the power to build transparent communities and bring the world closer together.

The answer is maybe.

What has not been said is that Facebook must embrace data science methodologies initially created in the bowels of the federal government to help protect its two billion users. Simultaneously, Facebook must still enable advertisers — its sole source of revenue — to get the user data required to justify their expenditures.

Specifically, Facebook must promulgate and embrace what is known in high-level security circles as homomorphic encryption (HE), often considered the “Holy Grail” of cryptography, and data provenance (DP). HE would enable Facebook, for example, to generate aggregated reports about its user psychographic profiles so that advertisers could still accurately target groups of prospective customers without knowing their actual identities.

Meanwhile, data provenance – the process of tracing and recording true identities and the origins of data and its movement between databases – could unearth the true identities of Russian perpetrators and other malefactors, or at least identify unknown provenance, adding much-needed transparency in cyberspace.

Both methodologies are extraordinarily complex. IBM and Microsoft, in addition to the National Security Agency, have been working on HE for years but the technology has suffered from significant performance challenges. Progress is being made, however. IBM, for example, has been granted a patent on a particular HE method – a strong hint it’s seeking a practical solution – and last month proudly announced that its rewritten HE encryption library now works up to 75 times faster. Maryland-based ENVEIL, a startup staffed by the former NSA HE team, has broken the performance barriers required to produce a commercially viable version of HE, benchmarking millions of times faster than IBM in tested use cases.

How Homomorphic Encryption Would Help Facebook

HE is a technique used to operate on and draw useful conclusions from encrypted data without decrypting it, simultaneously protecting the source of the information. It is useful to Facebook because its massive inventory of personally identifiable information is the foundation of the economics underlying its business model. The more comprehensive the datasets about individuals, the more precisely advertising can be targeted.

HE could keep Facebook information safe from hackers and inappropriate disclosure, but still extract the essence of what the data tells advertisers. It would convert encrypted data into strings of numbers, do math with these strings, and then decrypt the results to get the same answer it would if the data wasn’t encrypted at all.
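The schemes IBM, Microsoft and ENVEIL are building are vastly more sophisticated, but the core property — arithmetic performed on ciphertexts that survives decryption — can be demonstrated with a toy additively homomorphic scheme along the lines of Paillier. This is an illustrative sketch with tiny, insecure parameters, not production cryptography:

```python
import math, random

# Toy Paillier-style cryptosystem: additively homomorphic, meaning
# decrypt(encrypt(a) * encrypt(b) mod n^2) == a + b.
# Tiny primes, insecure, for illustration only.
p, q = 101, 113
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # inverse of L(g^lambda mod n^2)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:               # randomness must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 37, 105
total = decrypt(encrypt(a) * encrypt(b) % n2)  # the addition happens on ciphertexts
assert total == a + b
print(total)  # 142
```

An ad platform built this way could sum clicks or purchases across encrypted records and reveal only the aggregate — which is essentially what Google’s measurement pilot, described below, does at scale.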

A particularly promising sign for HE emerged last year, when Google revealed a new marketing measurement tool that relies on this technology to allow advertisers to see whether their online ads result in in-store purchases.

Unearthing this information requires analyzing datasets belonging to separate organizations, notwithstanding the fact that these organizations pledge to protect the privacy and personal information of the data subjects. HE skirts this by generating aggregated, non-specific reports about the comparisons between these datasets.

In pilot tests, HE enabled Google to successfully analyze encrypted data about who clicked on an advertisement in combination with another encrypted multi-company dataset that recorded credit card purchase records. With this data in hand, Google was able to provide reports to advertisers summarizing the relationship between the two databases to conclude, for example, that five percent of the people who clicked  on an ad wound up purchasing in a store.

Data Provenance

Data provenance has a markedly different core principle. It’s based on the fact that digital information is atomized into 1s and 0s with no intrinsic truth. The binary digits exist only to disseminate information, whether accurate or wholly fabricated. A well-crafted lie can easily be indistinguishable from the truth and distributed across the internet. What counts is the source of these 1s and 0s. In short, is it legitimate? What is the history of the 1s and 0s?

The art market, as an example, deploys DP to combat fakes and forgeries of the world’s greatest paintings, drawings and sculptures. It uses DP techniques to create a verifiable chain of custody for each piece of artwork, preserving the integrity of the market.

Much the same thing can be done in the online world. For example, a Facebook post referencing a formal statement by a politician, with an accompanying photo, would have provenance records directly linking the post to the politician’s press release and even the specifics of the photographer’s camera. The goal – again – is ensuring that data content is legitimate.
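Implementations differ — some use distributed ledgers, others ordinary databases — but the core of a provenance record is a tamper-evident chain in which each entry commits both to its own content and to the entry before it. A minimal sketch, with field names and sources invented for illustration:

```python
import hashlib, json, time
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """One link in a chain of custody: who asserted what about a piece of
    content, and which earlier record it builds on. Illustrative only."""
    content_hash: str   # hash of the artefact (press release, photo, post)
    source: str         # e.g. "press-office.example.gov" or a camera serial
    prev_hash: str      # digest of the previous record ("" for the first)
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Chain: politician's press release -> photographer's image -> social media post.
release = ProvenanceRecord(hashlib.sha256(b"statement text").hexdigest(),
                           "press-office.example.gov", prev_hash="")
photo = ProvenanceRecord(hashlib.sha256(b"raw image bytes").hexdigest(),
                         "camera-serial-1234", prev_hash=release.digest())
post = ProvenanceRecord(hashlib.sha256(b"post body").hexdigest(),
                        "social-account:example", prev_hash=photo.digest())

# Tampering with any earlier record changes its digest and breaks every later
# link -- which is what lets a verifier flag doctored or unknown provenance.
assert post.prev_hash == photo.digest()
```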

Companies such as Wal-Mart, Kroger, British-based Tesco and Swedish-based H&M, an international clothing retailer, are using or experimenting with new technologies to provide provenance data to the marketplace.

Let’s hope that Facebook and its social media brethren begin studying HE and DP thoroughly and implement it as soon as feasible. Other strong measures — such as the upcoming implementation of the European Union’s General Data Protection Regulation, which will use a big stick to secure personally identifiable information – essentially should be cloned in the U.S. What is best, however, are multiple avenues to enhance user privacy and security, while hopefully preventing breaches in the first place. Nothing less than the long-term viability of social media giants is at stake.

France to move ministers off Telegram, WhatsApp over security fears

The French government has said it intends to move to using its own encrypted messaging service this summer, over concerns that foreign entities could spy on officials using popular encrypted apps such as Telegram and WhatsApp.

Reuters reports that ministers are concerned about the use of foreign-built encrypted apps which do not have servers in France. “We need to find a way to have an encrypted messaging service that is not encrypted by the United States or Russia,” a digital ministry spokeswoman told the news agency. “You start thinking about the potential breaches that could happen, as we saw with Facebook, so we should take the lead.”

Telegram’s founder, Pavel Durov, is Russian, though the entrepreneur lives in exile and his messaging app has just been blocked in his home country after the company refused to hand over encryption keys to the authorities.

WhatsApp, which (unlike Telegram) is end-to-end encrypted across its entire platform — using the respected and open sourced Signal Protocol — is nonetheless owned by U.S. tech giant Facebook, and developed out of the U.S. (as Signal also is).

Its parent company is currently embroiled in a major data misuse scandal after it emerged that tens of millions of Facebook users’ information was passed to a controversial political consultancy without their knowledge or consent.

The ministry spokeswoman said about 20 officials and top civil servants in the French government are testing the new messaging app, with the aim of its use becoming mandatory for the whole government by the summer.

It could also eventually be made available to all citizens, she added.

Reuters reports the spokeswoman also said a state-employed developer has designed the app, using free-to-use code available for download online (which presumably means it’s based on open source software) — although she declined to name the code being used or the messaging service.

Late last week, ZDNet also reported the French government wanted to replace its use of apps like Telegram — which president Emmanuel Macron is apparently a big fan of.

It quoted Mounir Mahjoubi, France’s secretary of state for digital, saying: “We are working on public secure messaging, which will not be dependent on private offers.”

The French government reportedly already uses some secure messaging products built by defense group and IT supplier Thales. On its website Thales lists a Citadel instant messaging smartphone app — which it describes as “trusted messaging for professionals”, saying it offers “the same recognisable functionality and usability as most consumer messaging apps” with “secure messaging services on a smartphone or computer, plus a host of related functions, including end-to-end encrypted voice calls and file sharing”.

Judge says class action suit against Facebook over facial recognition can go forward

Whenever a company may be guilty of something, from petty neglect to grand deception, there’s usually a class action lawsuit filed. But until a judge rules that lawsuit legitimate, the threat remains fairly empty. Unfortunately for Facebook, one major suit from 2015 has just been given that critical go-ahead.

The case concerns an Illinois law that prohibits collection of biometric information, including facial recognition data, in the way that Facebook has done for years as part of its photo-tagging systems.

BIPA, the Illinois law, is a real thorn in Facebook’s side. The company has not only been pushing to have the case dismissed, but it has been working to have the whole law changed by supporting an amendment that would defang it — but more on that another time.

Judge James Donato in California’s Northern District has made no determination as to the merits of the case itself; first, it must be shown that there is a class of affected people with a complaint that is supported by the facts.

For now, he has found (you can read the order here) that “plaintiffs’ claims are sufficiently cohesive to allow for a fair and efficient resolution on a class basis.” The class itself will consist of “Facebook users located in Illinois for whom Facebook created and stored a face template after June 7, 2011.”

An earlier, broader class suggested by the plaintiffs included all Illinois users who appeared in a photograph on Facebook, but the judge, commendably, decided that this would include people who appeared in images but were not in fact recognized or recorded as face templates by the recognition systems. The more limited class will still amount to millions of people.

Facebook’s attempt to discredit the suit, quibbling over definitions and saying the plaintiffs “know almost nothing” about the systems in question, did not go over well with the judge. “The deposition testimony by the named plaintiffs shows a perfectly adequate understanding of the case, and it clearly manifests their concerns about Facebook’s treatment of personal biometric data,” he writes.

Its suggestion that no “actual” harm was caused also fails to hold water: “As the Court has already found, there is no question that plaintiffs here has sufficiently alleged that intangible injury.” Requiring “actual” injury would severely limit the reach of a rule like BIPA in Illinois, because, of course, the harm caused is one to one’s privacy and security, not to one’s body or wallet. Of course, the question of whether users consented to their “intangible injury” is yet to be settled, and may be a major crux in the case.

Facebook also tries the old chestnut of saying its servers aren’t in Illinois, so Illinois law doesn’t apply. “Contrary to Facebook’s suggestion,” writes Donato, “the geographic location of its data servers is not a dispositive factor. Server location may be one factor in the territoriality inquiry, but it is not the exclusive one.”

Lastly and most absurdly, Facebook argued that to establish legitimacy it would be necessary to check which users’ face templates were derived from scans of printed photographs instead of natively digital shots. “This too is unavailing,” says Donato, citing a total lack of evidence presented by Facebook.

When contacted for comment, Facebook provided a simple statement:

We are reviewing the ruling. We continue to believe the case has no merit and will defend ourselves vigorously.

The case will go ahead as ordered, though as before, at a snail’s pace.

Facebook shouldn’t block you from finding friends on competitors

Twitter, Vine, Voxer, MessageMe. Facebook has repeatedly cut off competitors from its feature for finding your Facebook friends on their apps…after jumpstarting its own social graph by convincing people to upload their Gmail contacts. Meanwhile, Facebook’s Download Your Information tool merely exports a text list of friends’ names you can’t use elsewhere.

As Congress considers potential regulation following Mark Zuckerberg’s testimonies, it should prioritize leveling the playing field for aspiring alternatives to Facebook and letting consumers choose where to social network. And as a show of good faith and an argument against it abusing its monopoly, Facebook should make our friend list truly portable.

It’s time to free the social graph: to treat it as a fundamental digital possession, the way the Telecommunications Act of 1996 protects your right to bring your phone number with you to a new network.

The two most powerful ways to do this would be for Facebook to stop blocking friend finding on competitors (or for Congress to stop it from doing so), as it has done in the past to Twitter and others. And Facebook should change its Download Your Information tool to export our friend list in a truly interoperable format. When you friend someone on Facebook, they’re not just a name. They’re someone specific, often among many people with the same name, and Facebook should be open to us reconnecting with them elsewhere.

Facebook Takes Data It Won’t Give

Back in 2010, Facebook began goading users to import their Gmail address books so they could add those contacts as Facebook friends, a practice that continues to this day. But it refused to let users export the email addresses of their friends for use elsewhere. That led Google to change its policy and require data portability reciprocity from any app using its Contacts API.

So did Facebook back off? No. It built a workaround, giving users a deep link to download their Gmail contacts from Google’s honorable export tool. Facebook then painstakingly explained to users how to upload that file so it could suggest they friend all those contacts.

Google didn’t want to stop users from legitimately exporting their contacts, so it just put up a strongly worded warning to Gmail users: “Trap my contacts now: Hold on a second. Are you super sure you want to import your contact information for your friends into a service that won’t let you get it out? . . . Although we strongly disagree with this data protectionism, the choice is yours. Because, after all, you should have control over your data.” And Google offered to let you “Register a complaint over data protectionism.”

Eight years later, Facebook has grown from a scrappy upstart chasing Google into one of the biggest, most powerful players on the Internet. And it’s still teaching users how to snatch their Gmail contacts’ email addresses, while only letting you export the names of your friends unless they opt in through an obscure setting, because it considers the contact info they’ve shared to be their data, not yours. Whether you should be allowed to upload other people’s contact info to a social network at all is a bigger question. But it is blatant data portability hypocrisy for Facebook to encourage users to import that data from other apps while refusing to let you export it.

In some respects, it’s good that you can’t mass-export the email addresses of all your Facebook friends. That could enable spamming, which probably isn’t what someone had in mind when they added you as a friend on Facebook. They could always block, unfriend, or mute you, but they can’t get their email address back. Facebook is already enduring criticism over how it handled data privacy in the wake of the Cambridge Analytica scandal.

Yet the idea that you could find your Facebook Friends on other apps is a legitimate reason for the platform to exist. It’s one of the things that’s made Facebook Login so useful and popular. Facebook’s API lets certain apps check to see if your Facebook Friends have already signed up, so you can easily follow them or send them a connection request. But Facebook has rescinded that option when it senses true competition.

Data Protectionism

Twitter is the biggest example. Facebook didn’t and still doesn’t let you see which of your Facebook friends are on Twitter, even though Facebook has seven times as many users. Twitter co-founder Ev Williams, frustrated in 2010, said that “They see their social graph as their core asset, and they want to make sure there’s a win-win relationship with anybody who accesses it.”

Facebook went on to establish a formal policy requiring apps that wanted to use its Find Friends tool to abide by these rules:

  • If you use any Facebook APIs to build personalized or social experiences, you must also enable people to easily share their experiences back with people on Facebook.

  • You may not use Facebook Platform to promote, or to export user data to, a product or service that replicates a core Facebook product or service without our permission.

Essentially, apps that piggybacked on Facebook’s social graph had to let you share back to Facebook, and couldn’t compete with it. That’s a bit ironic, given that Facebook’s overarching strategy for years has been to replicate core functionality: from cloning Twitter’s asymmetrical follow and Trending Topics to Snapchat’s Stories and augmented reality filters, all the way back to cribbing FriendFeed’s News Feed and Facebook’s own start as a rip-off of the Winklevii’s HarvardConnection.

Restrictions against replicating core functionality aren’t unheard of in tech. Apple’s iOS won’t let you run an App Store from inside an app, for example. But Facebook’s selective enforcement of the policy is troubling. It simply ignores competing apps that never get popular. Yet when they start to grow into potential rivals, Facebook swiftly enforces the policy and removes their Find Friends access, often inhibiting further growth and engagement.

Here are a few examples of times Facebook has cut off competitors from its graph:

  • Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging to Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook deemed his app, with tens of millions of users, a “competitive social network” that wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.
  • MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before it reached 1 million users, Facebook cut off MessageMe’s Find Friends access. The app ended up selling to Yahoo for a paltry double-digit-millions price before disintegrating.
  • Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app, which let you shoot animated GIFs, was growing popular. But soon after it hit 1 million users, it was cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram: “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their entire body.”
  • Vine had a real shot at being the future of short-form video. The day the Twitter-owned app launched, though, Facebook shut off Vine’s Find Friends access. Vine let you share back to Facebook, and its 6-second loops you shot in the app were a far cry from Facebook’s heavyweight video file uploader. Still, Facebook cut it off, and by late 2016, Twitter announced it was shutting down Vine.

As I wrote in 2013, “Enforcement of these policies could create a moat around Facebook. It creates a barrier to engagement, retention, and growth for competing companies.” But in 2018, amid whispers of antitrust action, Facebook restricting access to its social graph to protect the dominance of its News Feed seems egregiously anti-competitive.

That’s why Facebook should pledge to stop banning competitors from using its Find Friends tool. If not, Congress should tell Facebook that this kind of behavior could lead to more stringent regulation.

Friends Aren’t Just Names

When Senator John Neely Kennedy asked Zuckerberg this week, “are you willing to give me the right to take my data on Facebook and move it to another social media platform?”, Zuckerberg claimed that “Senator, you can already do that. We have a Download Your Information tool where you can go get a file of all the content there, and then do whatever you want with it.”

But that’s not exactly true. You can export your photos, which can easily be uploaded elsewhere. But your social graph, all those confirmed friend requests, gets reduced to a useless string of text. Download Your Information spits out merely a list of your friends’ names and the dates on which you got connected. There’s no unique username. No link to their Facebook profile. Nothing you can use to find them on another social network beyond manually typing in their names.

That’s especially problematic if your friends have common names. There are tons of John Smiths on Facebook, so finding him on another social network with just a name will require a lot of sleuthing or guesswork. Depending on where you live, locating a particular Garcia, Smirnov, or Lee could be quite difficult. Facebook even built a short-lived feature called Friendshake to help you friend someone nearby amongst everyone in their overlapping namespace.

When I asked about this, Facebook told me that users can opt in to having their email or phone number included in the Download Your Information export. But this privacy setting is buried and little-known. Just 4 percent of my friends, centered around tech-savvy San Francisco, had enabled it.

As I criticized way back in 2010 when Download Your Information launched, “The data can be used as a diary, or to replace other information from a hard drive crash or stolen computer — but not necessarily to switch to a different social network.”

Given Facebook’s iron grip on the Find Friends API, users deserve decentralized data portability — a way to take their friends with them that Facebook can’t take back. That’s what Download Your Information should offer but doesn’t.

Social Graph Portability

This is why I’m calling on Facebook to improve the data portability of your friend connections. Give us the same consumer protections that make phone numbers portable.

At the very least Facebook should include your friends’ unique Facebook username and URL. But true portability would mean you could upload the list to another social network to find your friends there.
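
To make that concrete, here is a rough sketch of what a single friend entry in a genuinely portable export could contain, next to what the tool gives you today. The field names and values are illustrative assumptions, not Facebook’s actual export schema:

```python
# Illustrative only: roughly what a Download Your Information friend entry
# amounts to today versus what a portable entry could include.
# Field names and values are hypothetical, not Facebook's actual schema.

current_entry = {
    "name": "John Smith",
    "connected_on": "2014-03-02",
}

portable_entry = {
    "name": "John Smith",
    "connected_on": "2014-03-02",
    "username": "john.smith.1234",                              # unique handle
    "profile_url": "https://www.facebook.com/john.smith.1234",  # stable link to the profile
}
```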

One option would be for Facebook’s export to include a privacy-safe, hashed version of the email addresses your friends signed up with and share with you. Facebook could build a hashed email lookup tool so that if you uploaded these nonsensical strings of characters to another app, it could cross-reference them against Facebook’s database of your friends. If there’s a match, the app could surface that person as someone you might want to reconnect with. Effectively, this would let you find friends elsewhere via email address without Facebook ever giving you or other apps a human-readable list of their contact info.
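
As a minimal sketch of how that matching could work, assuming both sides hash the same normalized sign-up email addresses (every function and variable name here is hypothetical, not a real Facebook API):

```python
import hashlib

def hash_email(email: str) -> str:
    # Normalize, then hash, so the string that ends up in the export reveals
    # nothing readable. (A real system would want a salted or keyed hash,
    # since plain hashes of email addresses can be brute-forced.)
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# What a privacy-safe export could contain: opaque hashes, no raw addresses.
exported_friend_hashes = {
    hash_email("friend.one@example.com"),
    hash_email("friend.two@example.com"),
}

# The receiving app (or a Facebook-hosted lookup service) hashes the
# addresses its own users signed up with in the same way.
receiving_app_users = {
    hash_email("friend.two@example.com"): "user_42",
    hash_email("someone.else@example.com"): "user_99",
}

# Any overlap is someone you might want to reconnect with.
matches = [uid for h, uid in receiving_app_users.items() if h in exported_friend_hashes]
print(matches)  # ['user_42']
```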

If you can’t take your social graph with you, there’s little chance for a viable alternative to Facebook to arise. It doesn’t matter if a better social network emerges, or if Facebook disrespects your privacy, because there’s nowhere to go. Opening up the social graph would require Facebook to compete on the merit of its product and policies. Trying to force the company’s hand with a variety of privacy regulations won’t solve the core issue. But the prospect of users actually being able to leave would let the market compel Facebook to treat us better.

For more on Facebook’s challenges with data privacy, check out TechCrunch’s feature stories: