AI chatbot maker Babylon Health attacks clinician in PR stunt after he goes public with safety concerns

UK startup Babylon Health has pulled app data on a critical user to fuel a press release in which it publicly attacks the UK doctor who has spent years raising patient safety concerns about its symptom triage chatbot service.

In the press release, issued late Monday, Babylon refers to Dr David Watkins — via his Twitter handle — as a “troll” and claims he’s “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

It also writes that Watkins has clocked up “hundreds of hours” and 2,400 tests of its service, figures it deploys in a bid to discredit his safety concerns — saying he’s raised “fewer than 100 test results which he considered concerning”.

Babylon’s PR also claims that only in 20 instances did Watkins find “genuine errors in our AI”, whereas other instances are couched as ‘misrepresentations’ or “mistakes”, per an unnamed “panel of senior clinicians” which the startup’s PR says “investigated and re-validated every single one” — suggesting the error rate Watkins identified was just 0.8%.

Screengrab from Babylon’s press release which refers to Dr Watkins’ “Twitter troll tests”

Responding to the attack in a telephone interview with TechCrunch Watkins described Babylon’s claims as “absolute nonsense” — saying, for example, he has not carried out anywhere near 2,400 tests of its service. “There are certainly not 2,400 completed triage assessments,” he told us. “Absolutely not.”

Asked how many tests he thinks he did complete, Watkins suggested it’s likely to be between 800 and 900 full runs through “complete triages” (some of which, he points out, would have been repeat tests to see if the company had fixed issues he’d previously flagged).

He said he identified issues in about one in two or one in three instances of testing the bot — though in 2018, he says, he was finding far more problems, claiming it was “one in one” at that stage for an earlier version of the app.

Watkins suggests that to get to the 2,400 figure Babylon is likely counting instances where he was unable to complete a full triage because the service was lagging or glitchy. “They’ve manipulated data to try and discredit someone raising patient safety concerns,” he said.

“I obviously test in a fashion which is [that] I know what I’m looking for — because I’ve done this for the past three years and I’m looking for the same issues which I’ve flagged before to see have they fixed them. So trying to suggest that my testing is actually any indication of the chatbot is absurd in itself,” he added.

In another pointed attack Babylon writes that Watkins has “posted over 6,000 misleading attacks” — without specifying exactly what kind of attacks it’s referring to (or where they’ve been posted).

Watkins told us he hasn’t even tweeted 6,000 times in total since joining Twitter four years ago — though he has spent three years using the platform to raise concerns about diagnosis issues with Babylon’s chatbot.

Such as this series of tweets where he shows a triage for a female patient failing to pick up a potential heart attack.

Watkins told us he has no idea what the 6,000 figure refers to, and accuses Babylon of having a culture of “trying to silence criticism” rather than engage with genuine clinician concerns.

“Not once have Babylon actually approached me and said ‘hey Dr Murphy — or Dr Watkins — what you’ve tweeted there is misleading’,” he added. “Not once.”

Instead, he said the startup has consistently taken a “dismissive approach” to the safety concerns he’s raised. “My overall concern with the way that they’ve approached this is that yet again they have taken a dismissive approach to criticism and again tried to smear and discredit the person raising concerns,” he said.

Watkins, a consultant oncologist at The Royal Marsden NHS Foundation Trust who has for several years gone by the Twitter moniker @DrMurphy11 (tweeting videos of Babylon’s chatbot triages which he says illustrate the bot failing to correctly identify patient presentations), made his identity public on Monday when he attended a debate at the Royal Society of Medicine.

There he gave a presentation calling for less hype and more independent verification of claims being made by Babylon as such digital systems continue elbowing their way into the healthcare space.

In the case of Babylon, the app has a major cheerleader in the current UK Secretary of State for health, Matt Hancock, who has revealed he’s a personal user of the app.

Simultaneously, Hancock is pushing the National Health Service to overhaul its infrastructure to enable the plugging in of “healthtech” apps and services. So you can spot the political synergies.

Watkins argues the sector needs more focus on robust evidence gathering and independent testing, versus mindless ministerial support and partnership ‘endorsements’ standing in for due diligence.

He points to the example of Theranos — the disgraced blood testing startup whose co-founder is now facing charges of fraud — saying it should be a major red flag underlining the need for independent testing of ‘novel’ health product claims.

“[Over hyping of products] is a tech industry issue which unfortunately seems to have infected healthcare in a couple of situations,” he told us, referring to the startup ‘fake it til you make it’ playbook of hype marketing and scaling without waiting for external verification of heavily marketed claims.

In the case of Babylon, he argues the company has failed to back up puffy marketing with evidence of the sort of extensive clinical testing and validation which he says should be necessary for a health app that’s out in the wild being used by patients. (He also says its references to academic studies have not been stood up by giving outsiders access to data so they can verify its claims.)

“They’ve got backing from all these people — the founders of Google DeepMind, Bupa, Samsung, Tencent, the Saudis have given them hundreds of millions and they’re a billion dollar company. They’ve got the backing of Matt Hancock. Got a deal with Wolverhampton. It all looks trustworthy,” Watkins went on. “But there is no basis for that trustworthiness. You’re basing the trustworthiness on the ability of a company to partner. And you’re making the assumption that those partners have undertaken due diligence.”

For its part Babylon claims the opposite — saying its app meets existing regulatory standards and pointing to high “patient satisfaction ratings” and a lack of reported harm by users as evidence of safety, writing in the same PR in which it lays into Watkins:

Our track record speaks for itself: our AI has been used millions of times, and not one single patient has reported any harm (a far better safety record than any other health consultation in the world). Our technology meets robust regulatory standards across five different countries, and has been validated as a safe service by the NHS on ten different occasions. In fact, when the NHS reviewed our symptom checker, Healthcheck and clinical portal, they said our method for validating them “has been completed using a robust assessment methodology to a high standard.” Patient satisfaction ratings see over 85% of our patients giving us 5 stars (and 94% giving five and four stars), and the Care Quality Commission recently rated us “Outstanding” for our leadership.

But proposing to judge the efficacy of a health-related service by a patient’s ability to complain if something goes wrong seems, at the very least, an unorthodox approach — flipping the Hippocratic oath principle of ‘first do no harm’ on its head. (Plus, speaking theoretically, someone who’s dead would literally be unable to complain — which points to a rather large loophole in any ‘safety bar’ being claimed via such an assessment methodology.)

On the regulatory point, Watkins argues that the current UK regime is not set up to respond intelligently to a development like AI chatbots and lacks strong enforcement in this new category.

Complaints he’s filed with the MHRA (Medicines and Healthcare products Regulatory Agency) have resulted in it asking Babylon to work on issues, with little or no follow-up, he says.

He notes, though, that confidentiality clauses limit what the regulator can disclose.

All of that might look like a plum opportunity for a certain kind of startup ‘disruptor’, of course.

And Babylon’s app is one of several now applying AI-type technologies as a diagnostic aid in chatbot form, across several global markets. Users are typically asked to respond to questions about their symptoms and, at the end of the triage process, get information on what might be a possible cause. Though Babylon’s PR materials are careful to include a footnote caveating that its AI tools “do not provide a medical diagnosis, nor are they a substitute for a doctor”.

Yet, says Watkins, if you read certain headlines and claims made for the company’s product in the media you might be forgiven for coming away with a very different impression — and it’s this level of hype that has him worried.

Other less hype-dispensing chatbots are available, he suggests — name-checking Berlin-based Ada Health as taking a more thoughtful approach on that front.

Asked whether there are specific tests he would like to see Babylon do to stand up its hype, Watkins told us: “The starting point is getting a technology which you feel is safe to actually be in the public domain.”

Notably, the European Commission is working on a risk-based regulatory framework for AI applications — including for use-cases in sectors such as healthcare — which would require such systems to be “transparent, traceable and guarantee human oversight”, as well as to use unbiased data for training their AI models.

“Because of the hyperbolic claims that have been put out there previously about Babylon that’s where there’s a big issue. How do they now roll back and make this safe? You can do that by putting in certain warnings with regards to what this should be used for,” said Watkins, raising concerns about the wording used in the app. “Because it presents itself as giving patients diagnosis and it suggests what they should do for them to come out with this disclaimer saying this isn’t giving you any healthcare information, it’s just information — it doesn’t make sense. I don’t know what a patient’s meant to think of that.”

“Babylon always present themselves as very patient-facing, very patient-focused, we listen to patients, we hear their feedback. If I was a patient and I’ve got a chatbot telling me what to do and giving me a suggested diagnosis — at the same time it’s telling me ‘ignore this, don’t use it’ — what is it?” he added. “What’s its purpose?

“There are other chatbots which I think have defined that far more clearly — where they are very clear in their intent saying we’re not here to provide you with healthcare advice; we will provide you with information which you can take to your healthcare provider to allow you to have a more informed decision discussion with them. And when you put it in that context, as a patient I think that makes perfect sense. This machine is going to give me information so I can have a more informed discussion with my doctor. Fantastic. So there’s simple things which they just haven’t done. And it drives me nuts. I’m an oncologist — it shouldn’t be me doing this.”

Watkins suggested Babylon’s response to his raising “good faith” patient safety concerns is symptomatic of a deeper malaise within the culture of the company. It has also had a negative impact on him — making him into a target for parts of the rightwing media.

“What they have done, although it may not be users’ health data, they have attempted to utilize data to intimidate an identifiable individual,” he said of the company’s attack on him. “As a consequence of them having this threatening approach and attempting to intimidate, other parties have thought ‘let’s bundle in and attack this guy’. So it’s that which is the harm which comes from it. They’ve singled out an individual as someone to attack.”

“I’m concerned that there’s clinicians in that company who, if they see this happening, they’re not going to raise concerns — because you’ll just get discredited in the organization. And that’s really dangerous in healthcare,” Watkins added. “You have to be able to speak up when you see concerns because otherwise patients are at risk of harm and things don’t change. You have to learn from error when you see it. You can’t just carry on doing the same thing again and again and again.”

Others in the medical community have been quick to criticize Babylon for targeting Watkins in such a personal manner and for revealing details about his use of its (medical) service.

As one Twitter user, Sam Gallivan — also a doctor — put it: “Can other high frequency Babylon Health users look forward to having their medical queries broadcast in a press release?”

The act certainly raises questions about Babylon’s approach to sensitive health data, if it’s accessing patient information for the purpose of trying to steamroller informed criticism.

We’ve seen similarly ugly stuff in tech before, of course — such as when Uber kept a ‘god-view’ of its ride-hailing service and used it to keep tabs on critical journalists. In that case the misuse of platform data pointed to a toxic culture problem that Uber has had to spend subsequent years sweating to turn around (including changing its CEO).

Babylon’s selective data dump on Watkins is also an illustrative example of a digital service’s ability to access and shape individual data at will — pointing to the underlying power asymmetries between these data-capturing technology platforms (which are gaining increasing agency over our decisions) and their users, who only get highly mediated, hyper-controlled access to the databases they help to feed.

Watkins, for example, told us he is no longer able to access his query history in the Babylon app — providing a screenshot of an error screen (below) that he says he now sees when he tries to access chat history in the app. He said he does not know why he is no longer able to access his historical usage information but says he was using it as a reference — to help with further testing (and no longer can).

If it’s a bug it’s a convenient one for Babylon PR…

We contacted Babylon to ask it to respond to criticism of its attack on Watkins. The company defended its use of his app data to generate the press release — arguing that the “volume” of queries he had run means the usual data protection rules don’t apply, and further claiming it had only shared “non-personal statistical data”, even though this was attached in the PR to his Twitter identity (and therefore, since Monday, to his real name).

In a statement the Babylon spokesperson told us:

If safety related claims are made about our technology, our medical professionals are required to look into these matters to ensure the accuracy and safety of our products. In the case of the recent use data that was shared publicly, it is clear given the volume of use that this was theoretical data (forming part of an accuracy test and experiment) rather than a genuine health concern from a patient. Given the use volume and the way data was presented publicly, we felt that we needed to address accuracy and use information to reassure our users.  The data shared by us was non-personal statistical data, and Babylon has complied with its data protection obligations throughout. Babylon does not publish genuine individualised user health data.

We also asked the UK’s data protection watchdog about the episode and Babylon making Watkins’ app usage public. The ICO told us: “People have the right to expect that organisations will handle their personal information responsibly and securely. If anyone is concerned about how their data has been handled, they can contact the ICO and we will look into the details.”

Babylon’s clinical innovation director, Dr Keith Grimes, attended the same Royal Society of Medicine debate as Watkins this week — entitled Recent developments in AI and digital health 2020 and billed as a conference that would “cut through the hype around AI”.

So it looks to be no accident that Babylon’s attack press release was timed to follow hard on the heels of a presentation the company would have known (since at least last December) was coming that day — and in which Watkins argued that, where AI chatbots are concerned, “validation is more important than valuation”.

Last summer Babylon announced a $550M Series C raise, at a $2BN+ valuation.

Investors in the company include Saudi Arabia’s Public Investment Fund, an unnamed U.S.-based health insurance company, Munich Re’s ERGO Fund, Kinnevik, Vostok New Ventures and DeepMind co-founder Demis Hassabis, to name a few helping to fund its marketing.

“They came with a narrative,” said Watkins of Babylon’s message to the Royal Society. “The debate wasn’t particularly instructive or constructive. And I say that purely because Babylon came with a narrative and they were going to stick to that. The narrative was to avoid any discussion about any safety concerns or the fact that there were problems and just describe it as safe.”

The clinician’s counter message to the event was to pose a question EU policymakers are just starting to consider — calling for the AI maker to show data-sets that stand up its safety claims.

Facebook’s latest ‘transparency’ tool doesn’t offer much — so we went digging

Just under a month ago Facebook switched on global availability of a tool which affords users a glimpse into the murky world of tracking that its business relies upon to profile users of the wider web for ad targeting purposes.

Facebook is not going boldly into transparent daylight — but rather offering what privacy rights advocacy group Privacy International has dubbed “a tiny sticking plaster on a much wider problem”.

The problem it’s referring to is the lack of active and informed consent for mass surveillance of Internet users via background tracking technologies embedded into apps and websites, including as people browse outside Facebook’s own content garden.

The dominant social platform is also only offering this feature in the wake of the 2018 Cambridge Analytica data misuse scandal, when Mark Zuckerberg faced awkward questions in Congress about the extent of Facebook’s general web tracking. Since then policymakers around the world have dialled up scrutiny of how its business operates — and realized there’s a troubling lack of transparency in and around adtech generally and Facebook specifically.

Facebook’s tracking pixels and social plugins — aka the share/like buttons that pepper the mainstream web — have created a vast tracking infrastructure which silently informs the tech giant of Internet users’ activity, even when a person hasn’t interacted with any Facebook-branded buttons.
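
To make the mechanism concrete, here’s a minimal sketch of how an embedded pixel reports a page visit, modeled on Facebook’s documented image-pixel fallback (the pixel ID is a placeholder and the parameters are illustrative, not Facebook’s production code):

```ts
// Hypothetical pixel ID; in reality each publisher embeds its own.
const PIXEL_ID = "000000000000000";

const params = new URLSearchParams({
  id: PIXEL_ID,
  ev: "PageView",           // the event being reported
  dl: window.location.href, // the page the visitor is browsing
});

// Requesting this 1x1 image is all it takes: the browser attaches any
// Facebook cookies it holds, letting the request be tied back to a
// Facebook identity even though the visitor never clicked anything.
new Image().src = `https://www.facebook.com/tr?${params.toString()}`;
```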

Facebook claims this is just ‘how the web works’. And other tech giants are similarly engaged in tracking Internet users (notably Google). But as a platform with 2.2BN+ users Facebook has stolen a march on rivals when it comes to harvesting people’s data and building out a global database of person profiles.

It’s also positioned as a dominant player in an adtech ecosystem which means it’s the one being fed with intel by data brokers and publishers who deploy tracking tech to try to survive in such a skewed system.

Meanwhile the opacity of online tracking means the average Internet user is none the wiser that Facebook can be following what they’re browsing all over the Internet. Questions of consent loom very large indeed.

Facebook is also able to track people’s usage of third party apps if a person chooses a Facebook login option, which the company encourages developers to implement in their apps — the carrot again being a lower-friction sign-in versus requiring users to create yet another login credential.

The price for this ‘convenience’ is data and user privacy, as the Facebook login gives the tech giant a window into third party app usage.
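
For a sense of how light-touch that integration is on the developer side, here’s a sketch of the login flow using the JS SDK’s FB.login call (the scopes and the callback body are illustrative):

```ts
// Minimal typing for the part of Facebook's JS SDK used below.
declare const FB: {
  login: (
    callback: (response: { authResponse?: { accessToken: string } }) => void,
    options?: { scope: string }
  ) => void;
};

FB.login(
  (response) => {
    if (response.authResponse) {
      // The app gets an access token; Facebook, in turn, learns that this
      // person uses this app each time the login flow is exercised.
    }
  },
  { scope: "public_profile,email" } // illustrative permission scopes
);
```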

The company has also used a VPN app it bought and badged as a security tool to glean data on third party app usage — though it recently stepped back from the Onavo app after a public backlash (although that did not stop it running a similar tracking program targeted at teens).

Background tracking is how Facebook’s creepy ads function (it prefers to call such behaviorally targeted ads ‘relevant’) — and how they have functioned for years.

Yet it’s only in recent months that it’s offered users a glimpse into this network of online informers — by providing limited information about the entities that are passing tracking data to Facebook, as well as some limited controls.

From ‘Clear History’ to ‘Off-Facebook Activity’

Originally briefed in May 2018, at the crux of the Cambridge Analytica scandal, as a ‘Clear History’ option this has since been renamed ‘Off-Facebook Activity’ — a label so bloodless and devoid of ‘call to action’ that the average Facebook user, should they stumble upon it buried deep in unlovely settings menus, would more likely move along than feel moved to carry out a privacy purge.

(For the record you can access the setting here — but you do need to be logged into Facebook to do so.)

The other problem is that Facebook’s tool doesn’t actually let you purge your browsing history; it just delinks it from being associated with your Facebook ID. There is no option to actually clear your browsing history via its button (another reason for the name switch). So, no, Facebook hasn’t built a ‘clear history’ button.

“While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action,” said Privacy International this week, criticizing Facebook for “not telling you everything”.

As the saying goes, a little knowledge can be a dangerous thing. So a little transparency implies — well — anything but clarity. And Privacy International sums up the Off-Facebook Activity tool with an apt oxymoron — describing it as “a new window to the opacity”.

“This tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook,” it writes, warning with emphasis: “Without meaningful information about what data is collected and shared, and what are the ways for the user to opt-out from such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles.”

It points out, for instance, that the information provided here is limited to a “simple name” — thereby preventing the user from “exercising their right to seek more information about how this data was collected”, which EU users at least are entitled to.

“As users we are entitled to know the name/contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads?” it adds.

Another criticism is that Facebook is only providing limited information about each data transfer — with Privacy International noting some events are marked under a cryptic “CUSTOM” label; and that Facebook provides “no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) and on what device, leaving users in the dark regarding the circumstances under which this data collection took place”.

“Does Facebook really display everything they process/store about those events in the log/export?” queries privacy researcher Wolfie Christl, who tracks the adtech industry’s tracking techniques. “They have to, because otherwise they don’t fulfil their SAR [Subject Access Request] obligations [under EU law].”

Christl notes Facebook makes users jump through an additional “download” hoop in order to view data on tracked events — and even then, as Privacy International points out, it gives up only a limited view of what has actually been tracked…

“For example, why doesn’t Facebook list the specific sites/URLs visited? Do they infer data from the domains e.g. categories? If yes, why is this not in the logs?” Christl asks.

We reached out to Facebook with a number of questions, including why it doesn’t provide more detail by default. It responded with this statement, attributed to a spokesperson:

We offer a variety of tools to help people access their Facebook information, and we’ve designed these tools to comply with relevant laws, including GDPR. We disagree with this [Privacy International] article’s claims and would welcome the chance to discuss them with Privacy International.

Facebook also said it’s continuing to develop what information it surfaces through the Off-Facebook Activity tool — and said it welcomes feedback on this.

We also asked it about the legal bases it uses to process people’s information that’s been obtained via its tracking pixels and social plug-ins. It did not provide a response to those questions.

Six names, many questions…

When the company launched the Off-Facebook Activity tool a snap poll of available TechCrunch colleagues showed very diverse results for our respective tallies (which also may not show the most recent activity, per other Facebook caveats) — ranging from one colleague with an eye-watering 1,117 entities (likely down to doing a lot of app testing), to several with a few hundred apiece, to a couple in the mid tens.

In my case I had just six. But from my point of view — as an EU citizen with a suite of rights related to privacy and data protection; and as someone who aims to practice good online privacy hygiene, including having a very locked down approach to using Facebook (never using its mobile app for instance) — it was still six too many. I wanted to find out how these entities had circumvented my attempts not to be tracked.

And, in the case of the first one in the list, who on earth it was…

Turns out the entry was a subdomain of cloudfront.net, Amazon Web Services’ content delivery network domain. But I had to go searching online myself to figure out that the owner of that particular subdomain is (now) a company called Nativo.

Facebook’s list provided only very bare bones information. I also clicked to delink the first entity, since it immediately looked so weird, and found that by doing that Facebook wiped all the entries — which meant I was unable to retain access to what little additional info it had provided about the respective data transfers.

Undeterred I set out to contact each of the six companies directly with questions — asking what data of mine they had transferred to Facebook and what legal basis they thought they had for processing my information.

(On a practical level six names looked like a sample size I could at least try to follow up manually — but remember I was the TechCrunch exception; imagine trying to request data from 1,117 companies, or 450, or even 57, which were the lengths of some of my colleagues’ lists.)

This process took about a month and a lot of back and forth/chasing up. It likely only yielded as much info as it did because I was asking as a journalist; an average Internet user may have had a tougher time getting attention on their questions — though, under EU law, citizens have a right to request a copy of personal data held on them.

Eventually, I was able to obtain confirmation that tracking pixels and Facebook share buttons had been involved in my data being passed to Facebook in certain instances. Even so I remain in the dark on many things. Such as exactly what personal data Facebook received.

In one case I was told by a listed company that it doesn’t know itself what data was shared — only Facebook knows because it’s implemented the company’s “proprietary code”. (Insert your own ‘WTAF’ there.)

The legal side of these transfers also remains highly opaque. From my point of view I would not intentionally consent to any of this tracking — but in some instances the entities involved claim that (my) consent was (somehow) obtained (or implied).

In other cases they said they are relying on a legal basis in EU law that’s referred to as ‘legitimate interests’. However this requires a balancing test to be carried out to ensure a business use does not have a disproportionate impact on individual rights.

I wasn’t able to ascertain whether such tests had ever been carried out.

Meanwhile, since Facebook is also making use of the tracking information from its pixels and social plug-ins (and seemingly in more granular form, since some entities claimed they only get aggregate, not individual, data), Christl suggests such a balancing test would be unlikely to pass easily, for that tiny little ‘platform giant’ reason.

Notably he points out Facebook’s Business Tool terms state that it makes use of so called “event data” to “personalize features and content and to improve and secure the Facebook products” — including for “ads and recommendations”; for R&D purposes; and “to maintain the integrity of and to improve the Facebook Company Products”.

In a section of its legal terms covering the use of its pixels and SDKs Facebook also puts the onus on the entities implementing its tracking technologies to gain consent from users prior to doing so in relevant jurisdictions that “require informed consent” for tracking cookies and similar — giving the example of the EU.

“You must ensure, in a verifiable manner, that an end user provides the necessary consent before you use Facebook Business Tools to enable us to store and access cookies or other information on the end user’s device,” Facebook writes, pointing users of its tools to its Cookie Consent Guide for Sites and Apps for “suggestions on implementing consent mechanisms”.
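
Facebook’s developer materials describe a consent-gating pattern along these lines; the sketch below uses the pixel’s documented fbq('consent', …) calls, with a placeholder pixel ID:

```ts
// fbq is defined by Facebook's fbevents.js script once it loads.
declare const fbq: (...args: unknown[]) => void;

fbq("consent", "revoke"); // queue events without setting cookies yet
fbq("init", "PIXEL_ID");  // placeholder site-specific ID
fbq("track", "PageView");

// Called by the site's consent banner once the visitor actually opts in:
function onUserConsented(): void {
  fbq("consent", "grant"); // flush the queued events and begin tracking
}
```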

Christl flags the contradiction between Facebook saying users of its tracking tech need to gain prior consent and the claims I was given by some of these entities that they don’t need to, because they’re relying on ‘legitimate interests’.

“Using LI as a legal basis is even controversial if you use a data analytics company that reliably processes personal data strictly on behalf of you,” he argues. “I guess, industry lawyers try to argue for a broader applicability of LI, but in the case of FB business tools I don’t believe that the balancing test (a businesses legitimate interests vs. the impact on the rights and freedoms of data subjects) will work in favor of LI.”

Those entities relying on legitimate interests as a legal base for tracking would still need to offer a mechanism where users can object to the processing — and I couldn’t immediately see such a mechanism in the cases in question.

One thing is crystal clear: Facebook itself does not provide a mechanism for users to object to its processing of tracking data nor opt out of targeted ads. That remains a long-standing complaint against its business in the EU which data protection regulators are still investigating.

One more thing: Non-Facebook users continue to have no way of learning what data of theirs is being tracked and transferred to Facebook. Only Facebook users have access to the Off-Facebook Activity tool, for example. Non-users can’t even access a list.

Facebook has defended its practice of tracking non-users around the Internet as necessary for unspecified ‘security purposes’. It’s an inherently disproportionate argument of course. The practice also remains under legal challenge in the EU.

Tracking the trackers

SimpleReach (aka d8rk54i4mohrb.cloudfront.net)

What is it? A California-based analytics platform (now owned by Nativo) used by publishers and content marketers to measure how well their content/native ads perform on social media. The product began life in the early noughties as a simple tool for publishers to recommend similar content at the bottom of articles before the startup pivoted — aiming to become ‘the PageRank of social’ — offering analytics tools for publishers to track engagement around content in real-time across the social web (plugging into platform APIs). It also built statistical models to predict which pieces of content would be the most social and where, generating a proprietary per-article score. SimpleReach was acquired by Nativo last year to complement analytics tools the latter already offered for tracking content on the publisher/brand’s own site.

Why did it appear in your Off-Facebook Activity list? Given it’s a b2b product it does not have a visible consumer brand of its own. And, to my knowledge, I have never visited its own website prior to investigating why it appeared in my Off-Facebook Activity list. Clearly, though, I must have visited a site (or sites) that are using its tracking/analytics tools. Of course an Internet user has no obvious way to know this — unless they’re actively using tools to monitor which trackers are tracking them.

In a further quirk, neither the SimpleReach nor the Nativo brand name appeared in my Off-Facebook Activity list. Rather, a domain name was listed — d8rk54i4mohrb.cloudfront.net — which looked weird/alarming at first glance.

Using a tracker analytics service, I found this domain is owned by SimpleReach.

Once I knew the name I was able to connect the entry to Nativo — via news reports of the acquisition — which led me to an entity I could direct questions to.  

What happened when you asked them about this? There was a bit of back and forth and then they sent a detailed response to my questions in which they claim they do not share any data with Facebook — “or perform ‘off site activity’ as described on Facebook’s activity tool”.

They also suggested that their domain had appeared as a result of their tracking code being implemented on a website I had visited which had also implemented Facebook’s own trackers.

“Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page. It is possible that one of our customers added a Facebook pixel to an article you visited using our technology. This could lead Facebook to attribute this pixel to our domain, though our domain was merely a ‘carrier’ of the code,” they told me.

In terms of the data they collect, they said this: “The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months. These values are used to give our customers a general idea of the number of users that visited the articles tracked.”

So, again, they suggested the reason why their domain appeared in my Off-Facebook Activity list is a combination of Nativo/SimpleReach’s tracking technologies being implemented on a site where Facebook’s retargeting pixel is also embedded — which then resulted in data about my online activity being shared with Facebook (which Facebook then attributes as coming from SimpleReach’s domain).

Commenting on this, Christl agreed it sounds as if publishers “somehow attach Facebook pixel events to SimpleReach’s cloudfront domain”.

“SimpleReach probably doesn’t get data from this. But the question is 1) is SimpleReach perhaps actually responsible (if it happens in the context of their domain); 2) The Off-Facebook activity is a mess (if it contains events related to domains whose owners are not web or app publishers).”

Nativo offered to determine whether they hold any personal information associated with the unique identifier they have assigned to my browser if I could send them this ID. However I was unable to locate such an ID (see below).

In terms of legal base to process my information the company told me: “We have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.”

Nativo also suggested that the Offsite Activity in question might have predated its purchase of the SimpleReach technology — which occurred on March 20, 2019 — saying any activity prior to this would mean my query would need to be addressed directly with SimpleReach, Inc. which Nativo did not acquire. (However in this case the activity registered on the list was dated later than that.)

Here’s what they said on all that in full:

Thank you for submitting your data access request.  We understand that you are a resident of the European Union and are submitting this request pursuant to Article 15(1) of the GDPR.  Article 15(1) requires “data controllers” to respond to individuals’ requests for information about the processing of their personal data.  Although Article 15(1) does not apply to Nativo because we are not a data controller with respect to your data, we have provided information below that will help us in determining the appropriate Data Controllers, which you can contact directly.

First, for details about our role in processing personal data in connection with our SimpleReach product, please see the SimpleReach Privacy Policy.  As the policy explains in more detail, we provide marketing analytics services to other businesses – our customers.  To take advantage of our services, our customers install our technology on their websites, which enables us to collect certain information regarding individuals’ visits to our customers’ websites. We analyze the personal information that we obtain only at the direction of our customer, and only on that customer’s behalf.

SimpleReach is an analytics tracker tool (Similar to Google Analytics) implemented by our customers to inform them of the performance of their content published around the web.  “d8rk54i4mohrb.cloudfront.net” is the domain name of the servers that collect these metrics.  We do not share data with Facebook or perform “off site activity” as described on Facebook’s activity tool.  Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page.  It is possible that one of our customers added a Facebook pixel to an article you visited using our technology.  This could lead Facebook to attribute this pixel to our domain, though our domain was merely a “carrier” of the code.

The SimpleReach tool is implemented on articles posted by our customers and partners of our customers.  It is possible you visited a URL that has contained our tracking code.  It is also possible that the Offsite Activity you are referencing is activity by SimpleReach, Inc. before Nativo purchased the SimpleReach technology. Nativo, Inc. purchased certain technology from SimpleReach, Inc. on March 20, 2019, but we did not purchase the SimpleReach, Inc. entity itself, which remains a separate entity unaffiliated with Nativo, Inc. Accordingly, any activity that occurred before March 20, 2019 pre-dates Nativo’s use of the SimpleReach technology and should be addressed directly with SimpleReach, Inc. If, for example, TechCrunch was a publisher partner of SimpleReach, Inc. and had SimpleReach tracking code implemented on TechCrunch articles or across the TechCrunch website prior to March 20, 2019, any resulting data collection would have been conducted by SimpleReach, Inc., not by Nativo, Inc.

As mentioned above, our tracking script collects and sends information to our servers based on the articles it is implemented on. The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months.  These values are used to give our customers a general idea of the number of users that visited the articles tracked.

We do not, nor have we ever, shared ANY information with Facebook with regards to the information we collect from the SimpleReach Analytics tag, be it Personal Data or otherwise. However, as mentioned above, it is possible that one of our customers added a Facebook retargeting pixel to an article you visited using our technology. If that is the case, we would not have received any information collected from such pixel or have knowledge of whether, and to what extent, the customer shared information with Facebook. Without more information, we are unable to determine the specific customer (if any) on behalf of which we may have processed your personal information. However, if you send us the unique identifier we have assigned to your browser… we can determine whether we have any personal information associated with such browser on behalf of a customer controller, and, if we have, we can forward your request on to the controller to respond directly to your request.

As a Data Processor we have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.  This type of agreement is designed to protect Data Subjects and ensure that Data Processors are held to the same standards that both the GDPR and the Data Controller have put forth.  This is the same type of agreement used by all other analytics tracking tools (as well as many other types of tools) such as Google Analytics, Adobe Analytics, Chartbeat, and many others.

I also asked Nativo to confirm whether Insider.com (see below) is a customer of Nativo/SimpleReach.

The company told me it could not disclose this “due to confidentiality restrictions” and would only reveal the identity of customers if “required by applicable law”.

Again, it said that if I provided the “unique identifier” assigned to my browser it would be “happy to pull a list of personal information the SimpleReach/Nativo systems currently have stored for your unique identifier (if any), including the appropriate Data Controllers”. (“If we have any personal data collected from you on behalf of Insider.com, it would come up in the list of DataControllers,” it suggested.)

I checked multiple browsers that I use on multiple devices but was unable to locate an ID attached to a SimpleReach cookie. So I also asked whether this might appear attached to any other cookie.

Their response:

Because our data is either pseudonymized or anonymized, and we do not record of any other pieces of Personal Data about you, it will not be possible for us to locate this data without the cookie value.  The SimpleReach user cookie is, and has always been, in the “__srui” cookie under the “.simplereach.com” domain or any of its sub-domains. If you are unable to locate a SimpleReach user cookie by this name on your browser, it may be because you are using a different device or because you have cleared your cookies (in which case we would no longer have the ability to map any personal data we have previously collected from you to your browser or device). We do have other cookies (under the domains postrelease.com, admin.nativo.com, and cloud.nativo.com) but those cookies would not be related to the appearance of SimpleReach in the list of Off Site Activity on your Facebook account, per your original inquiry.
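
For what it’s worth, the lookup Nativo describes boils down to something like the sketch below. The caveat, as their statement implies, is that a cookie set under .simplereach.com is a third-party cookie, so it won’t be visible to scripts running on the article page you actually visited; you have to dig through the browser’s own cookie store instead.

```ts
// Run where the cookie is first-party, this would surface the ID Nativo
// asked for; on any other page it will simply come up empty.
const srui = document.cookie
  .split("; ")
  .find((c) => c.startsWith("__srui="))
  ?.split("=")[1];

console.log(srui ?? "no __srui cookie visible from this page");
```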

What did you learn from their inclusion in the Off-Facebook Activity list? There appeared to be a correlation between this domain and a publisher, Insider.com, which also appeared in my Off-Facebook Activity list — as both logged events bear the same date; plus Insider.com is a publisher so would fall into the right customer category for using Nativo’s tool.

Given those correlations I was able to guess Insider.com is a customer of Nativo (I confirmed this when I spoke to Insider.com) — so Facebook’s tool is able to leak relational inferences related to the tracking industry by surfacing/mapping business connections that might not otherwise be evident.

Insider.com

What is it? A New York based business media company which owns brands such as Business Insider and Markets Insider

Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a technology article that appeared in my Facebook News Feed or elsewhere while I was logged into Facebook

What happened when you asked them about this? After about a week of radio silence an employee in Insider.com’s legal department got in touch to say they could discuss the issue on background.

This person told me the information in the Off-Facebook Activity tool came from the Facebook share button which is embedded on all articles it runs on its media websites. They confirmed that the share button can share data with Facebook regardless of whether the site visitor interacts with the button or not.

In my case I certainly would not have interacted with the Facebook share button. Nonetheless data was passed, simply by virtue of loading the article page itself.

Insider.com said the Facebook share button widget is integrated into its sites using a standard set-up that Facebook intends publishers to use. If the share button is clicked information related to that action would be shared with Facebook and would also be received by Insider.com (though, in this scenario, it said it doesn’t get any personalized information — but rather gets aggregate data).

Facebook can also automatically collect other information when a user visits a webpage which incorporates its social plug-ins.
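
For context, a standard share-button embed, per Facebook’s social plugin documentation, amounts to something like this sketch (the article URL is illustrative):

```ts
// The publisher loads Facebook's SDK once per page...
const script = document.createElement("script");
script.src = "https://connect.facebook.net/en_US/sdk.js";
script.async = true;
document.head.appendChild(script);

// ...and the SDK turns placeholder markup into the rendered button:
//   <div class="fb-share-button" data-href="https://example.com/article"></div>
//
// The key point for tracking: fetching sdk.js (and the iframe it injects)
// sends requests to Facebook carrying the visitor's Facebook cookies,
// before, and regardless of whether, the button is ever clicked.
```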

Asked whether Insider.com knows what information Facebook receives via this passive route the company told me it does not — noting the plug-in runs proprietary Facebook code. 

Asked how it’s collecting consent from users for their data to be shared passively with Facebook, Insider.com said its Privacy Policy stipulates users consent to sharing their information with Facebook and other social media sites. It also said it uses the legal ground known as legitimate interests to provide functionality and derive analytics on articles.

In the active case (of a user clicking to share an article) Insider.com said it interprets the user’s action as consent.

Insider.com confirmed it uses SimpleReach/Nativo analytics tools, meaning site visitor data is also being passed to Nativo when a user lands on an article. It said consent for this data-sharing is included within its consent management platform (it uses a CMP made by Forcepoint) which asks site visitors to specify their cookie choices.

Here site visitors can choose for their data not to be shared for analytics purposes (which Insider.com said would prevent data being passed).

I usually apply all cookie consent opt outs, where available, so I’m a little surprised Nativo/SimpleReach was passed my data from an Insider.com webpage. Either I failed to click the opt out one time or failed to respond to the cookie notice and data was passed by default.

It’s also possible I did opt out but data was passed anyway — as there has been research which has found a proportion of cookie notifications ignore choices and pass data anyway (unintentionally or otherwise).

Follow up questions I sent to Insider.com after we talked:

1) Can you confirm whether Insider has performed a legitimate interests assessment?
2) Does Insider have a site mechanism where users can object to the passive data transfer to Facebook from the share buttons?

Insider.com did not respond to my additional questions.

What did you learn from their inclusion in the Off-Facebook Activity list? That Insider.com is a customer of Nativo/SimpleReach.

Rei.com

What is it? A California-based ecommerce website selling outdoor gear

Why did it appear in your Off-Facebook Activity list? I don’t recall ever visiting their site prior to looking into why it appeared in the list so I’m really not sure

What happened when you asked them about this? After saying it would investigate it followed up with a statement, rather than detailed responses to my questions, in which it claims it does not hold any personal data associated with — presumably — my TechCrunch email, since it did not ask me what data to check against.

It also appeared to be claiming that it uses Facebook tracking pixels/tags on its website, without explicitly saying as much, writing that: “Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool.”

It claims it has no access to this information — which it says is “pseudonymous to us” — but suggested that if I have a Facebook account Facebook could link any browsing on Rei’s site to my Facebook identity and therefore track my activity.

The company also pointed me to a Facebook Help Center post where the company names some of the activities that might have resulted in Rei’s website sending activity data on me to Facebook (which it could then link to my Facebook ID) — although Facebook’s list is not exhaustive (included are: “viewing content”, “searching for an item”, “adding an item to a shopping cart” and “making a donation” among other activities the company tracks by having its code embedded on third parties’ sites).

Here’s Rei’s statement in full:

Thank you for your patience as we looked into your questions.  We have checked our systems and determined that REI does not maintain any personal data associated with you based on the information you provided.  Note, however, that Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool. The information that Facebook collects in this manner is pseudonymous to us — meaning we cannot identify you using the information and we do not maintain the information in a manner that is linked to your name or other identifying information. However, if you have a Facebook account, Facebook may be able to match this activity to your Facebook account via a unique identifier unavailable to REI. (Funnily enough, while researching this I found TechCrunch in MY list of Off-Facebook activity!)

For a complete list of activities that could have resulted in REI sharing pseudonymous information about you with Facebook, this Facebook Help Center article may be useful.  For a detailed description of the ways in which we may collect and share customer information, the purposes for which we may process your data, and rights available to EEA residents, please refer to our Privacy Policy.  For information about how REI uses cookies, please refer to our Cookie Policy.

As a follow up question I asked Rei to tell me which Facebook tools it uses, pointing out that: “Given that, just because you aren’t (as I understand it) directly using my data yourself that does not mean you are not responsible for my data being transferred to Facebook.”

The company did not respond to that point.

I also previously asked Rei.com to confirm whether it has any data sharing arrangements with the publisher of Rock & Ice magazine (see below). And, if so, to confirm the processes involved in data being shared. Again, I got no response to that.

What did you learn from their inclusion in the Off-Facebook Activity list? Given that Rei.com appeared alongside Rock & Ice on the list — both displaying the same date and just one activity apiece — I surmised they have some kind of data-sharing arrangement. They are also both outdoors brands so there would be obvious commercial ‘synergies’ to underpin such an arrangement.

That said, neither would confirm a business relationship to me. But Facebook’s list heavily implies there is some background data-sharing going on.

Rock & Ice magazine 

What is it? A climbing magazine produced by a California-based publisher, Big Stone Publishing

Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a link to a climbing-related article in my Facebook feed or else visited Rock & Ice’s website while I was logged into Facebook in the same browser session

What happened when you asked them about this? The publisher ignored my initial email query, but after I followed up I received a brief response — which read:

The Rock and Ice website is opt in, where you have to agree to terms of use to access the website. I don’t know what private data you are saying Rock and Ice shared, so I can’t speak to that. The site terms are here. As stated in the terms you can opt out.

Following up, I asked about the provision in the Rock & Ice website’s cookie notice which states: “By continuing to use our site, you agree to our cookies” — asking whether it’s passing data without waiting for the user to signal their consent.

(Relevant: In October Europe’s top court issued a ruling that active consent is necessary for tracking cookies, so you can’t drop cookies prior to a user giving consent for you to do so.)

The publisher responded:

You have to opt in and agree to the terms to use the website. You may opt out of cookies, which is covered in the terms. If you do not want the benefits of these advertising cookies, you may be able to opt-out by visiting: http://www.networkadvertising.org/optout_nonppii.asp.

If you don’t want any cookies, you can find extensions such as Ghostery or the browser itself to stop and refuse cookies. By doing so though some websites might not work properly.

I followed up again to point out that I’m not asking about the options to opt in or opt out but, rather, the behavior of the website if the visitor does not provide a consent response yet continues browsing — asking for confirmation Rock & Ice’s site interprets this state as consent and therefore sends data.

The publisher stopped responding at that point.

Earlier I had asked it to confirm whether its website shares visitor data with Rei.com. (As noted above, the two appeared with the same date on the list, which suggests data may be being passed between them.) I did not get a response to that question either.

What did you learn from their inclusion in the Off-Facebook Activity list? That the magazine appears to have a data-sharing arrangement with outdoor retailer Rei.com, given how the pair appeared at the same point in my list. However neither would confirm this when I asked.

MatterHackers

What is it? A California-based retailer focused on 3D printing and digital manufacturing

Why did it appear in your Off-Facebook Activity list? I honestly have no idea. I have never to my knowledge visited their site prior to investigating why they should appear on my Off Site Activity list.

I remain pretty interested to know how/why they managed to track me. I can only surmise I clicked on some technology-related content in my Facebook feed, either intentionally or by accident.

What happened when you asked them about this? They first asked me for confirmation that they were on my list. After I had sent a screenshot, they followed up to say they would investigate. I pushed again after hearing nothing for several weeks. At this point they asked for additional information from the Off-Facebook Activity tool — namely more granular metrics, such as a time and date per event and some label information — to help with tracking down this particular data-exchange.

I had previously provided them with the date (as it appears in the screenshot) but it’s possible to download an additional level of information about data transfers which includes per-event time/date-stamps and labels/tags, such as “VIEW_CONTENT”.

However, as noted above, I had previously selected and deleted one item off of my Off-Facebook Activity list, after which Facebook’s platform had immediately erased all entries and associated metrics. There was no obvious way I could recover access to that information.

“Without this information I would speculate that you viewed an article or product on our site — we publish a lot of ‘How To’ content related to 3D printing and other digital manufacturing technologies — this information could have then been captured by Facebook via Adroll for ad retargeting purposes,” a MatterHackers spokesman told me. “Operationally, we have no other data sharing mechanism with Facebook.”

Subsequently, the company confirmed it implements Facebook’s tracking pixel on every page of its website.

Of the pixel, Facebook writes that it enables website owners to track “conversions” (i.e. website actions); create custom audiences which segment site visitors by criteria that Facebook can identify and match across its user-base, allowing the site owner to target ads via Facebook’s platform at non-customers with a similar profile to existing customers browsing its site; and create dynamic ads where a template ad gets populated with product content based on tracking data for that particular visitor.
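
Concretely, a per-page deployment of the kind MatterHackers describes looks roughly like this sketch (the pixel ID and product values are placeholders; fbq is the function Facebook’s pixel script defines):

```ts
declare const fbq: (...args: unknown[]) => void;

fbq("init", "PIXEL_ID");  // placeholder site-specific pixel ID
fbq("track", "PageView"); // fired on every page load

fbq("track", "ViewContent", {       // the sort of event that surfaces as
  content_ids: ["example-printer"], // a "VIEW_CONTENT" label in the
  content_type: "product",          // Off-Facebook Activity export
});
```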

Regarding the legal base for the data sharing, MatterHackers had this to say: “MatterHackers is not an EU entity, nor do we conduct business in the EU and so have not undertaken GDPR compliance measures. CCPA [California’s Consumer Privacy Act] will likely apply to our business as of 2021 and we have begun the process of ensuring that our website will be in compliance with those regulations as of January 1st.”

I pointed out that GDPR is extraterritorial in scope — and can apply to non-EU based entities, such as if they’re monitoring individuals in the EU (as in this case).

Also likely relevant: A ruling last year by Europe’s top court found sites that embed third party plug-ins such as Facebook’s like button are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.
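In practical terms, complying with that ruling on a consent basis would mean not loading the pixel at all until the visitor opts in. Here is a minimal sketch of consent-gated loading; the consent-banner wiring and Facebook’s fbq bootstrap are omitted, and the element ID and pixel ID are placeholders.

```typescript
// Minimal sketch: defer Facebook's pixel until explicit opt-in, so no
// data is sent to Facebook for visitors who never consent.
declare const fbq: (...args: unknown[]) => void;

let pixelLoaded = false;

function loadFacebookPixel(): void {
  if (pixelLoaded) return;
  pixelLoaded = true;
  const script = document.createElement("script");
  script.async = true;
  script.src = "https://connect.facebook.net/en_US/fbevents.js";
  document.head.appendChild(script);
  // The official base code defines a queueing stub for fbq before the
  // script finishes loading, so these calls are safe to make here:
  fbq("init", "000000000000000"); // placeholder pixel ID
  fbq("track", "PageView");
}

// Only wire this to an explicit opt-in control, e.g. a consent banner:
document.getElementById("accept-tracking")
  ?.addEventListener("click", loadFacebookPixel);
```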

Nonetheless it’s still not clear what legal base the company is relying on for implementing the tracking pixel and passing data on EU Facebook users.

When asked about this, MatterHackers COO Kevin Pope told me:

While we appreciate the sentiment of GDPR, in this case the EU lacks the legal standing to pursue an enforcement action. I’m sure you can appreciate the potential negative consequences if any arbitrary country (or jurisdiction) were able to enforce legal penalties against any website simply for having visitors from that country. Techcrunch would have been fined to oblivion many times over by China or even Thailand (for covering the King in a negative light). In this way, the attempted overreach of the GDPR’s language sets a dangerous precedent.
To provide a little more detail – MatterHackers, at the time of your visit, wouldn’t have known that you were from the EU until we cross-referenced your session with Facebook, who does know. At that point you would have been filtered from any advertising by us. MatterHackers makes money when our (U.S.) customers buy 3D printers or materials and then succeed at using them (hence the how-to articles), we don’t make any money selling advertising or data.
Given that Facebook does legally exist in the EU and does have direct revenues from EU advertisers, it’s entirely appropriate that Facebook should comply with EU regulations. As a global solution, I believe more privacy settings options should be available to its users. However, given Facebook’s business model, I wouldn’t expect anything other than continued deflection (note the careful wording on their tool) and avoidance from them on this issue.

What did you learn from their inclusion in the Off-Facebook Activity list? I found out that an ecommerce company I had never heard of had been tracking me.

Wallapop

What is it? A Barcelona-based peer-to-peer marketplace app that lets people list secondhand stuff for sale and/or search for things to buy in their proximity. Users can meet in person to carry out a transaction, paying in cash, or there can be an option to pay via the platform and have an item posted.

Why did it appear in your Off-Facebook Activity list? This was the only digital activity that appeared in the list that was something I could explain — figuring out I must have used a Facebook sign-in option when using the Wallapop app to buy/sell. I wouldn’t normally use Facebook sign-in but for trust-based marketplaces there may be user benefits to leveraging network effects.

What happened when you asked them about this? After my query was booted around a bit, a PR company that works with Wallapop responded, asking to talk through what information I was trying to ascertain.

After we chatted they sent this response — attributed to sources from Wallapop:

Same as it happens with other apps, wallapop can appear on our users’ Facebook Off Site Activity page if they have interacted in any way with the platform while they were logged in their Facebook accounts. Some interaction examples include logging in via Facebook, visiting our website or having both apps opened and logged.

As other apps do, wallapop only shares activity events with Facebook to optimize users’ ad experience. This includes if a user is registered in wallapop, if they have uploaded an item or if they have started a conversation. Under no circumstance wallapop shares with Facebook our users’ personal data (including sex, name, email address or telephone number).

At wallapop, we are thoroughly committed with the security of our community and we do a safe treatment of the data they choose to share with us, in compliance with EU’s General Data Protection Regulation. Under no circumstance these data are shared with third parties without explicit authorization.

I followed up to ask for further details about these “activity events” — asking whether, for instance, Wallapop shares messaging content with Facebook as well as letting the social network know which items a user is chatting about.

“Under no circumstance the content of our users’ messages is shared with Facebook,” the spokesperson told me. “What is shared is limited to the fact that a conversation has been initiated with another user in relation to a specific item, this is, activity events. Under no circumstance we would share our users’ personal information either.”

Of course the point is Facebook is able to link all app activity with the user ID it already has — so every piece of activity data being shared is personal data.
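To make that concrete, here is a purely illustrative sketch (the event labels and the join are hypothetical, not Wallapop’s or Facebook’s actual code) showing why an event stream with no name or email attached still amounts to personal data once it arrives keyed to a Facebook user ID.

```typescript
// Purely illustrative: "anonymous" activity events become personal
// data the moment they can be joined to a known user profile.
interface ActivityEvent {
  fbUserId: string; // present because the user was logged in to Facebook
  kind: "REGISTERED" | "ITEM_UPLOADED" | "CONVERSATION_STARTED"; // hypothetical labels
  at: Date;
}

interface FacebookProfile {
  fbUserId: string;
  name: string;
  email: string;
}

// Hypothetical join on the platform side:
function attachEventsToProfiles(
  profiles: Map<string, FacebookProfile>,
  events: ActivityEvent[]
): Map<string, ActivityEvent[]> {
  const byUser = new Map<string, ActivityEvent[]>();
  for (const ev of events) {
    if (!profiles.has(ev.fbUserId)) continue; // unknown IDs are skipped
    const list = byUser.get(ev.fbUserId) ?? [];
    list.push(ev);
    byUser.set(ev.fbUserId, list);
  }
  return byUser; // every event is now tied to a named, emailable profile
}
```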

I also asked what legal base Wallapop relies on to share activity data with Facebook. They said the legal basis is “explicit consent given by users” at the point of signing up to use the app.

“Wallapop collects explicit consent from our users and at any time they can exercise their rights to their data, which include the modification of consent given in the first place,” they said.

“Users give their explicit consent by clicking in the corresponding box when they register in the app, where they also get the chance to opt out and not do it. If later on they want to change the consent they gave in first instance, they also have that option through the app. All the information is clearly available on our Privacy Policy, which is GDPR compliant.”

“At wallapop we take our community’s privacy and security very seriously and we follow recommendations from the Spanish Data Protection Agency,” it added.

What did you learn from their inclusion in the Off-Facebook Activity list? Not much more than I would have already guessed — i.e. that using a Facebook sign-in option in a third party app grants the social media giant a high degree of visibility into your activity within another service.

In this case the Wallapop app registered the most activity events of all six of the listed apps, displaying 13 vs only one apiece for the others — so it gave a bit of a suggestive glimpse into the volume of third party app data that can be passed if you opt to open a Facebook login wormhole into a separate service.

Sony announces its first 5G flagship, the triple lens Xperia 1 II

Sony has announced its first 5G smartphone: The Xperia 1 II — which, for the curious and/or confused, is pronounced ‘Xperia One, Mark Two’. Which isn’t at all confusing, er.

“No one understands the entertainment experience better than Sony,” said president of mobile communications, Mitsuya Kishida, claiming the company is “uniquely positioned” in the era of 5G cellular technology to offer its target users an “enriched” experience thanks to Sony’s extensive content portfolio.

“Whether you are a broadcast professional who requires dynamic speed or an everyday user who desires enhanced entertainment, Xperia with 5G takes your mobile experience to the next level,” he said.

As ever with Sony — a major b2b supplier of image sensors to other smartphone makers (rather than a major seller of its own phones) — it’s made the camera a huge focus for the new Android 10 flagship, which has a 6.5in 21:9 “CinemaWide” 4K HDR OLED (3840×1644) display and is powered by Qualcomm’s Snapdragon 865 chipset (with 8GB of RAM on board).

Round the back the Xperia 1 II packs three lenses which offer a selection of focal lengths (16mm, 24mm and 70mm) for capturing different types of photos — from super wide angle to portraits.

All three rear lenses have a 12MP sensor, while round the front there’s an 8MP lens. Sony is also using Zeiss optics for the first time in a smartphone, expanding a long-running collaboration to a new device type.

Talking up the camera, Kishida touted ultrafast low light autofocus, noting too that it supports 20fps autofocus and auto-tracking burst (which he called a world first in a smartphone) — for capturing crisp action shots.

“Our new continuous auto focus keeps tracking moving subjects. What’s special about this is with 20fps it calculates the object 3x per frame — that’s 60x per second — capturing the very moment,” he said.

“With the power and speed of 5G you will be able to share those moments more quickly and more easily across the network,” he added.

Another photo-friendly feature is real-time eye auto focus. Sony demoed this by showing it working on a video of a cat playing with a toy. So, tl;dr, Sony has trained its model on data-sets of pets too, not just humans.

A ‘Photo Pro’ interface on the handset, meanwhile, has been designed to be familiar to users of Sony’s mirrorless Alpha cameras — letting photographers tune shots via access to tweakable parameters they’re used to using on Sony’s high end digital cameras.

Sony is paying the same mind to video makers, with a video editing interface on the device that offers features such as touch autofocus and custom white balance — which Kishida said will help “visual storytellers” control the camera more easily.

There’s also a noise reduction feature to improve audio capture.

Best of all, the Xperia 1 II has a 3.5mm headphone jack — enabling audiophiles to enjoy the simple pleasure of plugging in their favorite pair of high-end wired headphones and tuning out everything else.

Kishida flagged the use of an AI technology, called DSEE Ultimate, which he said upscales the sound signal to “near high resolution audio” — including when streaming. “This [is] the best on-the-go acoustic experience available,” he claimed.

On the games front he touted a collaboration that will let users of the device play a mobile-optimized version of Call of Duty using the PlayStation 4’s DualShock 4 wireless controller.

The handset, meanwhile, packs a 4,000mAh battery as well as fast wireless charging.

Per Kishida, the Xperia 1 II will start shipping from Spring onwards, though it’s not yet clear which markets Sony will be bringing the device to (last year the company’s mobile division was reported to have pulled back from most of the global market in a bid to focus on profitability).

The Xperia 1 II may have a fairly niche target buyer, as Sony is a relative bit player in consumer smartphone sales vs giants like Samsung and Huawei, but it is intended to act as a showcase for what the company’s camera technologies can offer other mobile makers.

Sony’s mobile chief was making the announcements at a virtual press conference screened via YouTube, after Sony became one of the first big names to pull out of attending the Mobile World Congress trade show.

MWC’s organizer, the GSMA, subsequently cancelled the annual mobile industry event, which had been due to take place in Barcelona this week, after scores of exhibitors said they would not attend — citing public health concerns attached to the novel coronavirus.

MWC typically attracts more than 100,000 visitors across four days. So the sight of Sony’s press conference being streamed to an empty room — entirely devoid of cameras, claps or woos, but still with built-in pauses for the media to take photos of the new hardware — was more than a little surreal.

Kishida had another 5G handset to tease: the Xperia Pro — a flagship handset aimed at video professionals. It features 5G mmWave (millimeter wave) technology for improved capability to stream high-resolution video, as well as a handy micro HDMI port for easily plugging in other high-end camera kit.

Sony touted tests it’s done with US carrier Verizon (TechCrunch’s parent company) to use the forthcoming 5G handset for live streaming of sports events.

“Sony’s expertise and long history in providing professional digital imaging solutions is very unique,” added Kishida. “Only Sony has such deep and well established relationships and we are bringing decades of experience to an end-to-end solution — from professional content creation to mobile communications technology in 5G.”

There was a mid range smartphone announcement too, also shipping from Spring onwards: The Xperia 10 II packs a 6in display and also features a triple lens camera as well as water resistance.

Mangrove Capital’s Mark Tluszcz on the huge mHealth opportunity and why focusing on UX is key

Mangrove Capital Partners’ co-founder and CEO Mark Tluszcz is brimming with enthusiasm for what’s coming down the pipe from health tech startups.

Populations armed with mobile devices and hungry for verified and relevant information, combined with the promise of big data and AI, are converging, as he sees it, into a massive opportunity for businesses to rethink how healthcare is delivered — both as a major platform for plugging gaps in stretched public healthcare systems and in multiple spaces in between, serving up something more specific and intimate.

Think health-focused digital communities, perhaps targeting a single sex or time of life, as we’re increasingly seeing in the femtech space, or health-focused apps and services that can act as supportive spaces and sounding boards that cater to the particular biological needs of different groups of people.

Tluszcz has made some savvy bets in his time. He was an early investor in Skype, turning a $2 million investment into $200 million, and he’s also made a tidy profit backing web building platform Wix, where he remains as chairman. But the long-time, early-stage tech investor has a new focus after a clutch of investments — in period tracking (Flo), AI diagnostics (K Health) and digital therapeutics (Happify) — have garnered enough momentum to make health the dominant theme of Mangrove Capital’s last fund.

“I really don’t think that there’s a bigger area and a more inefficient area today than healthcare,” he tells us. “One of the things that that whole space is missing is just good usability. And that’s something that Internet entrepreneurs do very well.”

Extra Crunch sat down for an in-depth conversation with Tluszcz to dig into the reasons why he’s so excited about mHealth (as Mangrove calls it) and probe him on some of the challenges that arise when building data-led AI businesses with the potential to deeply impact people’s lives.

The fund has also produced a healthcare report setting out some of its thinking.

This interview has been lightly edited for length and clarity.

TechCrunch: Is the breadth of what can fall in the digital health or mHealth category part of why you’re so excited about the opportunities here?

Mark Tluszcz: I think if you take a step back, even from definitions for a moment, and you look around as an investor — and we as a firm, we happen to be thematically driven but no matter who you are — and you say where are there massive pockets of opportunity? And it’s typically in areas where there’s a lot of inefficiency. And anybody who’s tried to go to the doctor anywhere in Europe or around the world or tried to get an appointment with a therapist or whatever realizes how basically inefficient and arcane that process is. From finding out who the right person is, to getting an appointment and going there and paying for it. So healthcare looks to us like one of those arcane industries — the user experience, so to speak, could be so much better. And combine that with the fact that in most cases we know nothing as individuals about health — unless you read a few books and things. But it’s generally the one place where you’re the least informed in life. So you go see your GP and he or she will tell you something and you’re blindly going to take that pill they’re going to give you because you’re not well informed. You don’t understand it.

So I think that’s the exciting part about it. If I now look around and say if I now look at all the industries in the world — and of course there’s interesting stuff happening in financial services, and it continues to happen on commerce, and many, many places — but I really don’t think that there’s a bigger area and a more inefficient area today than healthcare.

You combine that with the power that we’re beginning to see in all these mobile devices — i.e. I have it in my pocket at all times. So that’s factor two. So one is the industry is potentially big and inefficient; two is there are tools that we have easy access to. And there has been — I think again — a general frustration with healthcare online, I would say, where you go into a search engine, or you go into WebMD or Google or whatever, and the general feedback it gives you is you’re about to have a heart attack or you’re about to die, because those products are not designed specifically for that. So you as a consumer are confused because you’re not feeling well so you go online. The next day you go see your doctor and he or she says you didn’t go to Google did you, right? I know you’re probably freaked out at this point. So the second point is the tools are there.

Third I’d say is that artificial intelligence, machine learning, which is kind of in the process of gaining a lot of momentum, has made it that we’re able to start to dream that we could one day crunch sufficient data to get new insights into it. So I think you put those three factors together and say this seems like it could be pretty big, in terms of a space.

One of the things that that whole space is missing is just good usability. And that’s something that Internet entrepreneurs do very well. It’s figuring out that usability side of it. How do I make that experience more enjoyable or better or whatever? In fact, you see it in fintech. One of the reasons, largely, that these neobanks are winning is that their apps are much better than what you have from the incumbents. There’s no other reason for it. And so I think there’s this big opportunity that’s out there, and all these factors lead you to this big, big industry. And then yes, that industry in itself is extremely large — all the way from dieting apps, you might think, to healthy eating apps, to longevity apps, to basic information about a particular disease, to basic general practitioner information. You could then break it down into female-specific products, male-specific products — so the breadth is very, very big.

But I think the common core of that is we as humans are getting more information and knowledge about how we are, and that is going to drive, I think, a massive adoption of these products. It’s knowledge, it’s ease of use, and it’s accessibility that just make it a dream come true if we can pull all these pieces together. And this is just speaking about the developed world. This gets even bigger potentially if I go to the third world countries where they don’t even have access to basic healthcare information or basic nutritional information. So I would say that the addressable market in investors’ jargon is just huge. Much more so than in any other industry that I know of today.

Is the fund trying to break that down into particular areas of focus within that or is the fund potentially interested in everything that falls under this digital health/mHealth umbrella?

We are a generalist investment firm. As a generalist investment firm we find these trends and then anything within these trends is going to pique our interest. Where we have made some investments has been really in three areas so far, and we’ll continue to broaden that base.

We’ve made an investment into a company called Flo. They are the number one app in the world for women to help track their menstrual cycles. So you look at that and go can that be big, not big, I don’t know. I can tell you they have 35M monthly active users, so it’s massive.

Now you might say, ‘Why do women need this to help them track their cycles because they’ve been tracking these menstrual cycles other ways for thousands of years?’ This is where, as an investor, you have to combine something like that with new behavioral patterns in people. And so if you look at the younger generation of people today they’re a generation that’s been growing up on notifications — the concept of being notified to do something. Or reminded to do something. And I think these apps do a lot of that as well.

My wife, who’s had two children, might say — which she did before I invested in the company — why would I ever need such an app? And I told her, “Unfortunately you’re the wrong demographic… because when I speak to an 18-year-old she says, ‘Ah, so cool! And by the way do you have an app to remind me to brush my teeth?’” So notifications are what I think makes it interesting for that younger demographic.

And then curiously enough — this is again the magic of what technology can bring and great products can bring — Flo is a company created by two brothers. They had no particular direct experience of the need for the app. They knew the market was big. They obviously hired women who were more contextually savvy to the problem, but they were able to build this fantastic product. And they did a bunch of things within the product that they had taken from their previous lives and made it so that the user experience was just so much better than looking at a calendar on your phone. So the fact that today 35M women use this product every month tells you that there’s something there — that the tech is coming and that people want to use it. And so that’s one type of a problem, and you can think about a number of others that both males and females will have — for whom making that single user experience better could be interesting. And I could go from that to ten things that might be interesting for women and ten things that might specifically be interesting for men — you can imagine breaking that down. This is why, again, the space is so big. There are so many things that we deal with as men and women [related to health and biology].

Now for me the question is, as a venture investor, will that sub-set be big enough?

And that again is no different than if I was looking at any other industry. If I was in the telecommunications industry — well is voice calling big? Is messaging big enough? Is conference calling big enough? All that is around calling, but you start breaking it down and, in some cases, we’re going to conclude that it’s big enough or that it’s not big enough. But we’re going to have to go through the process of looking at these. And we’re seeing these thematic things pop up all over the place right now. All over Europe and in the U.S. as well.

It did take us a little time to say is this big enough [in the case of Flo] but obviously getting pregnant is big enough. And as a business, think about it: once you know a woman’s menstrual cycle process and then she starts feeding into the system, ‘I am pregnant; I’m going to have a child,’ you start having a lot of information about her life and you can feed a lot of other things to her. Because you know when she’s going to have a child, you can propose advice as well around here’s how the first few months go. Because, as we know, when you have your first child, you’re generally a novice. You’re discovering what all that means. And again you have another opportunity to re-engage with that user. So that’s something that I think is interesting as a space.

So the thematic space is going to be big — the femtech side and the male tech side. All of that’s going to play a big role. One could always argue about which specific apps are going to be the winners; we can argue about that. But right now I guess Flo is working very well because those people haven’t found such a targeted user experience in the more generic places. They feel as if they’re in a community of like-minded women. They have forums, they can talk, they have articles they can read, and it’s just a comfortable place for them to spend some time.

So Flo is the first example of a very specific play that we did in healthcare about a year and a half ago. The first investment, in fact, that we made in healthcare.

The second example is the opposite of that — it’s a much more general play in healthcare. It’s a company called K Health. Now K Health looked at the world… and said what happens when I wake up at night and I have a pain and I do go to Google and I think I’m going to have a heart attack…. So can I build a product that would mimic, if you will, a doctor? So that I might be able to create an experience where I can have immediacy of information and immediacy of diagnostics on my phone. And then I could figure out what to do with that.

This is an Israeli company and they now have 5 million users in the U.S. using the app, which is downloadable from the U.S. app store only. They spent a year and a half building the technology — the AI and the machine learning — because what they did is they bought a very large dataset from an insurance company. The company sold it to them anonymized. It was personal health records for 2.5 million people over 20 years, so they had a lot of information. A lot of this stuff was in handwritten notes. It wasn’t well structured. So it took them a long time to build the software to be able to understand all this information and break it down into billions of data parts that they could now manipulate. And the user experience is just like a WhatsApp chat with a robot.

Their desire is not to do what some other companies are doing, which is ‘answer ten questions and maybe you should talk to a doctor via Skype.’ Because their view was that — at the end of the day — in every developed country there are shortages of doctors. That’s true for the U.K.; it’s true for the U.S. If you predict out to 2030, there’s a huge hole in the number of GPs. Part of that is also totally understandable; who would want to be a GP today? I mean your job in the U.S. and the U.K. is you’re essentially a sausage factory. Come in and you’ve got 3 minutes with your customer. It’s not a great experience for the doctor or the person who goes to the doctor.

So K Health built this fantastic app and what they do is they diagnose you and they say based on the symptoms here’s what K thinks you have, and, by the way, here’s a medicine that people like you were treated with. So there’s an amazing amount of information that you get as a user, and that’s entirely free as a user experience. Their vision is that the diagnostic part will always be free.

There are 5 million people in the U.S. using the app who are diagnosing. There are 25 questions that you go through with the robot, ‘K,’ and she diagnoses you. We call that a virtual doctor’s visit. We’re doing 15,000 of those a day. Think about the scale at which we’ve been able to go in a very short time. And all that’s free.

To some extent it’s great for people who can’t necessarily afford doctors — again, that’s not typically a European problem. Because socialized medicine in Europe has made that easy. But it is a problem in the U.S.; it is a problem in Africa, Asia, India and South America. There’s about 4 billion people around the world for whom speaking to a doctor is a problem.

K Health’s view is they’re bringing healthcare free to the world. And then ultimately how they make money will be things like if you want to speak to a doctor because you need a prescription for drugs. The doctor has access to K’s diagnostic and either agrees or disagrees with it and gives you a prescription to do that. And what we’re seeing is an interesting relationship, which is where we wanted it to be. Of those 15,000 free doctor visits, less than one percent turn into ‘I want to speak to a human’ and hence pay $15 (that’s the price they’re charging in the U.S. to actually converse with a human). In the U.S., by the way, about a quarter of the population — 75 million people — don’t have complementary insurance. So when they go to the doctor it’s $150. Isn’t that a crazy thing? You can’t afford complementary insurance but you end up paying the highest price to go see a doctor. Such madness.

And then there’s a whole element of it’s simple, and it’s convenient. You’re sitting at home thinking, “Okay, I’m not feeling so well” and you’ve got to call a doctor, get an appointment, drive however long it takes, and wait in line with other sick people. So what we’re finding is people are discovering new ways of accessing information…. Human doctors also don’t have time to give empathy in an ever stretched socialized medicine country [such as in Spain]. So what we’re seeing also is a very quick change in user behavior. Two and a half years ago [when K Health started], many people would say I don’t know about that. Now they’re saying convenience — at least in Europe — is why that’s interesting. In the U.S. it’s price.

So that’s the second example; much more general company but one which has the ability to come and answer a very basic need: ‘I’m not feeling well.’

We have 5M users, which means we have data on 5M people. On average, a GP in his life will see about 50,000 patients. If you think about just the difference — if you come to K, K has seen 5M people; your GP has seen 50k, max. So, statistically, the app is likely to be better. What we know today, through benchmarks and all sorts of other stuff, is that the app is more accurate than humans.

So you look at where that’s heading. In general medicine we’ve for a long time created this myth that doctors spent eight years learning a lot of information and as a result they’re really brainy people. They are brainy people, but I believe that that learning process is going to be done faster and better through a machine. That’s our bet.

The third example of an investment that we’ve made in the health space is a company called Happify. They’re a company that had developed a kind of gamification of online treatment if you have certain sicknesses. So, for example, if you’re a little depressive you can use their app and the gamification process and they will help you feel healthier. So far you’re probably scratching your head saying ‘I don’t know about that…’ But that was how they started, and then they realized that, hang on, you can either do that or you can take medicine; you can pop a pill — which is in fact what many doctors suggest for people who have anxiety or depression.

So then they started engaging with the drugs companies and they realized that these drug companies have a problem, which is the patent expiry of their medication. And when patents expire you lose a lot of money. And so what’s very typical in the pharma industry is if you’re able to modify a medicine you can typically either extend or have a new patent. So what Happify has done with the pharma companies now is say: instead of modifying the medicine and adding something else to it — another molecule, for instance — could we associate treatments which is medicine plus online software? Like a digital experience. And that has now been dubbed digital therapeutics — DTx is the common term being used for them. And this company Happify is one of the first in the world to do that. They signed a very large deal with a company called Sanofi — one of the big drug makers. And that’s what they’re going to roll out. When doctors say to their patients, ‘I’m diagnosing you with anxiety or depression,’ Sanofi has a particular medication and they’re going to bundle it now with an online experience — and in all the tests that they’ve done, actually, when you combine the two the patient is better off at the end of this treatment. So it’s just another example of why this whole space is so large. We never thought we’d be in any business with a pharma business because we’re tech investors. But here all of a sudden the ability to marry tech with medication creates a better end user experience for the patient. And that’s very powerful in itself.

So those are just three areas where we have actually put money in the health space but there are a number of areas that one looks at — either general or more specific.

Yeah it is big. And I think for us at least the more general it stays and is seen, the more open minded we’re going to be. Because one thing you have to be as an investor, at least early stage like ours, is completely open minded. And you can’t bias your process by your own experience. It has to stay very broad.

It’s also why I think clinician-led companies and investors are not good — because they come with their own baggage. I think in this case, just like in any other industry, you have to say I’m not going to be polluted by the past, and for me to change the experience going forward in any given area I have to fundamentally be ready to reinvent it.

You could propose a Theranos example as a counterpoint to that — but do you think investors in the health space have got over any fallout from that high profile failure at this point?

With that company one could argue whose fault it really was. Clearly the founder lied and did all sorts of stuff, but her investors let her do it. So to some extent the checks and balances just weren’t in place. I’m only saying that because I don’t think that should be the example by which we judge everything else. That’s just a case of a fraudster and dumb investors. That’s going to continue to exist in the future forever, and who knows, we might come across some of those. But I don’t think it’s the benchmark by which one should be judging if healthcare is a good or viable investment. Again I look at Flo: 35M active users. I look at K Health: 5M users in the US who are now beginning to use doctors, order medicine through the platform. I think the simplicity, the ease of use, for me make it undeniable that this industry’s going to be completely shaken up through this tech. And we need it, because at least in the Western world our health systems are so stretched they’re going to break.

Europe vs the US is interesting — because of the existence of public healthcare vs a lack of public healthcare. What difference does that make to the startup opportunities in health in Europe vs the US? Perhaps in Europe things have to be more supplementary to public healthcare systems but perhaps ultimately there isn’t that much difference if healthcare opportunities are increasingly being broken out and people are being encouraged to be more proactive about looking after their own health needs?

Yeah. Take K Health — where you look at it and say from a use example it’s clear that everywhere in the world, including US and Europe, people are going to recognize the simple ease of use and the convenience of it. If I had to spend money to then maybe make money then I would say maybe the US is slightly better because there’s 75M people who can’t afford a doctor and I might be able to sell them something more whereas in Europe I might not. I think it becomes a commercial question more than anything else. Certainly in the UK the NHS [National Health Service] is trying to do a lot of things. It is not a great user experience when you go to the doctor there. But at the end of the day I don’t think the difference between Europe-US makes much of a difference. I think this idea that what these apps want to tend towards — which is healthcare for everybody at a super cheap or free price-point — I think we have an advantage in Europe of thinking of it that way because that’s what we’ve had all our lives. So to some extent what I want to create online is socialized medicine for the world — through K Health. And I learnt that because I live here [in Europe].

Somebody in the US — not the 75M because they have nothing — but all the others, maybe they don’t think there’s a problem because they don’t recognize it. Our view with K Health is the opportunity to make socialized medicine a global phenomenon, hoping that in 95% of the cases access to the app is all you need. And in 5% of the cases you’re going to go to the specialists that need to see you — and then maybe there’s enough money to go around for everybody.

And of course, as an investor, we’re interested in global companies. Again you see the theme: Flo, K Health, Happify, all those have a potential global footprint right off the bat.

I think with healthcare there are going to be plays that could be nation-specific and maybe still going to be decent investments. You see that in financial services. The neobanks are very country specific — whenever they try to get out of their country, like N26, they realize that life isn’t so easy when you go somewhere else. But in healthcare I think we have an easier path to going global because there is such a pent-up demand and a need for you to just feel good about yourself… Most of the people who go through [the K Health diagnostic] process just want peace of mind. If 95% of the 15k people who go through that process right now just go, “Phew, I feel okay,” then we’ve accomplished something quite significant. And imagine if it’s not 15,000 but 150,000 a day, which seems to be quite an easy goal. So healthcare allows us to dream that the TAM — in investor terms, the target addressable market — is big. I can realistically think with any one of the three companies that I’ve mentioned to you that we could have hundreds of millions of users around the world. Because there’s the need.

There are different regulatory regimes across markets, there are different cultural contexts around the world — do you see this as a winner takes all scenario for health platforms?

No. Not at all. I think ultimately it’s the user — in terms of his or her experience in using an app — that’s going to matter. Flo is not the only menstrual cycle app in the world; it just happens to be by far the biggest. But there are others. So that’s the perfect example. I don’t think there’s going to be one winner that takes it all.

There’s also (UK startup) Babylon Health which sounds quite similar to K Health…

Babylon does something different. They’re essentially a symptom checker designed to push you to have a Skype call with a human doctor…. It asks you a bunch of questions and then it’ll say, “Well, we think you have this, let’s connect you to a real doctor.” We did not want to invest in a company that ever did that, because the real problem is there just aren’t enough doctors — and then, frankly, you and I are not going to want to talk to a doctor from Angola. Because what’s going to happen is there aren’t enough doctors in the Western countries, and the solution for those types of companies — Babylon is one; there are others doing similar things — if you become what we call lead generation for doctors, where you get a commission for bringing people to speak to a doctor, is that you’re just displacing the problem from your neighborhood to, broadly speaking, wherever the humans are. And as I said, humans have their fallacies. If you really want to scale things big and globally you have to let software do it.

No it’s not a winner takes all — for sure.

So the vision is that this stuff starts as a supplement to existing healthcare systems and gradually scales?

Correct. I’ll give you an example in the U.S. with K Health. They have a deal with the second-largest insurance company, Anthem. Their go-to-market brand is called Blue Cross Blue Shield. It’s the second largest one in America… so why is this insurance company interested? Because they know that:

  1. There are not enough doctors.
  2. The health system in the U.S. is under stress.
  3. If they can reduce the number of doctor’s visits by promoting an app like K, that’s financially beneficial to them.

So they’re going to be proposing it, in various forms, to all their customers by saying, “Before you go see a doctor, why don’t you try K?”

In this particular case with K there’s revenue opportunities from the insurance companies and also directly from the consumer, which makes it also interesting.

You did say different regions, different countries have different systems — yes, absolutely, and there’s no question that going international requires work. However, having said that, I would say a European, an Indonesian and a Brazilian are largely similar. There’s sometimes this fallacy that Asians, for instance, are so different from us as Western Europeans. And the truth is not really — when you look down into the DNA and the functions of the body and stuff like that. Which you do have to do, though. If we were to take K to Indonesia, for example, you do have to make sure that your AI engine has enough data to be able to diagnose some local stuff.

I’ll give you an example. When we launched K in the U.S. and we started off with New York, one of the things you have to be able to diagnose is Lyme disease, which is what you get from a tick that bites you. Very, very prevalent in the Greater New York area. Not so much anywhere else in the States. But in New York, if you don’t catch it, it looks like a cold and then you get very sick. That’s very much a regional thing that you have to have. And so if we were to go to Indonesia we’d have to have things like malaria and dengue. But all that is not so difficult. But yes, there’s some customization.

There are also certain conditions that can be more common for certain ethnicities. There are also differences in how women experience medical conditions vs men. So there can be a lot of issues around how localized health data is…

I would say that that is a very small problem that must be addressed, but it’s a much smaller problem than you think it is. Much smaller. For instance, on the male-to-female thing — of course medicine sometimes plays out differently — but when you have a database of 5 million of which 3 million are women and 2 million are men, you already have that data embedded. It is true that medications also work better with certain races. But again, these are very tiny, very small examples. Most doctors know it.

At the big scale that may look very small but to an individual patient if a system is not going to pick up on their condition or prescribe them the right medicine that’s obviously catastrophic from their point of view…

Of course.

Which is why, in the healthcare space, when you’re using AI and data-driven tools to do diagnosis there’s a lot of risk — and that’s part of the consideration for everyone playing in this space. So then the question is how do you break down that risk, how do you make that as small as possible and how do you communicate it to the users — if the proposition is free healthcare with some risk vs. not being able to afford going to the doctor at all?

I appreciate that, as a journalist, you’re trying to say this is a massive risk. I can tell you that as somebody who’s involved in these businesses it is a business risk we have to take into consideration, but it is, by far, not insurmountable. We clearly have a responsibility as businesses to say: if I’m going to go to South East Asia, I need to be sure that I cover all the ‘weird’ things that we would not have in our database somewhere else. So I need to do that. How I go about doing that, obviously, is the secret sauce of each company. But you simply cannot launch your product in that region if you don’t solve for — in this case — malaria and dengue. It doesn’t make sense [for a general health app]. You’d have too many flaws and people would stop using you.

I don’t think that’s so much the case with Flo, for instance… But all these entrepreneurs who are designing these companies are fully aware that it isn’t a cookie-cutter, one-size-fits-all — though it is close to that when you look at the exceptions. We’re not talking about having to redo the database because of a 30% or 20% difference — it’s much, much smaller than that.

And, by the way, at the end of the day, the market will be the judge. In our case, when you go from an Israeli company into the U.S. and you have partners like Blue Cross, Blue Shield, they’ve tested the crap out of your product. And then you’re going to say well I’m going to do this now in Indonesia — well you get partners locally who’re going to help you do that.

One of the drawbacks about healthcare is, I would say, making sure that your product works in all these countries, and doesn’t have holes in the diagnostic side of it.

Which seems in many cases to boil down to getting the data. And that can be a big challenge. As you mentioned with K Health, there was also the need to structure the data as well — but fundamentally it’s taken Israeli population data and is using it in the U.S. You would say that model is going to scale? There are some counter examples, such as Google-owned DeepMind, which has big designs on using AI for healthcare diagnostics and has put a lot of effort into getting access to population-level health data from the NHS in the U.K., when — at the same time — Google has acquired a database of health records from the U.S. Department of Veterans Affairs. So there does seem to be a lot of effort going into trying to get very localized data but it’s challenging. Google perhaps has a head start because it’s Google. So the question then is how do startups get the data they need to address these kinds of opportunities?

If we’re just looking at K Health then obviously it’s a big challenge because you do have to get data in a way. But I would say, again, to your example: you have a U.S. database and does it match with a UK database? Again, it largely does.

In that case the example is quite specific because the dataset Google has from the department of Veterans Affairs skews heavily male (93.6%). So they really do have almost no female data.

But that’s a bad dataset. That’s not anything else but a bad dataset.

It’s instructive that they’re still using it, though. Maybe that illustrates the challenge of getting access to population-level healthcare data for AI model making.

Maybe it does. But I don’t think this is one of those insurmountable things. Again, what we’ve done is we’ve bought a database that had data on 2.5 million patients — data over 20 years. I think that dataset equates extremely well. We’ve now seen it in U.S. markets for over a year. We’ve had nothing but positive feedback. We beat human doctors every time in tests. And so you look at it and you say these are just business problems that we have to solve. But what we’re seeing is the consumer market is saying holy shit, this is just such a better experience than I’ve ever had before.

So the human body — again — is not that complex. Most of the things that we catch are not that complex. And by the way we’ve grown our database — from the 2.5M that we bought we now have 5M. So we now have 2.5M Americans mixing into that database. And the way they diagnose you is they say, based on your age, your size, the fact you don’t smoke and so on — perhaps they have 300,000 people in their database like you and they’re benchmarking your symptoms against those people. So I think the smart companies are going to do these things very smartly. But you have to know what you’re using as a user as well… If you’re using that vs just a basic symptom checker — the latter I don’t think is a particularly great new user experience. But some companies are going to be successful doing that. In the end the great dream is how do you bring all this together and how do you give the consumer a fundamentally better choice and better information. That’s K Health.

Why couldn’t Google do the same thing? I don’t know. They just don’t think about it.

That’s a really interesting question — because Google is making big moves in health. They’re consolidating all their projects under one Google Health unit. Amazon is also increasingly interested in the space. What do you make of this big tech interest? Is that a threat or an opportunity for health startups?

Well if you think of it as an investor they’re all obviously buyers of the companies you’re going to build. So that’s a long-term opportunity to sell your business. In the shorter term, does it make sense to invest in companies if all of a sudden the mammoth big players are there? By the way, that has been true for many, many other sectors as well. When I first invested in Skype in the early days people would say the telecom guys are going to crush you. Well they didn’t. But all of a sudden telecom, communication, became the currency that the Internet guys wanted — that’s why eBay ultimately bought us and why they all had their own messenger.

What the future’s made of we don’t know, but what we do know is that consumers want just the best experience and sometimes the best experience comes from people who are very innovative and very hungry as opposed to people who are working in very large companies. Venture capitalists are always investing in companies that somehow are competing one way or another with Amazon, Facebook, Google and all the big guys. It’s just that when you focus your energy on one thing you tend to do it better than if you don’t. And I’m not suggesting that those companies are not investing a lot of money. They are. And that’s because they realize that one of the currencies of the future is the ability to provide healthcare information, treatment and things like that.

You look at a large retail store like Wal-mart in America. Wal-mart largely serves a population that makes $50k or less — the lower income category in North America. But what are they doing to make you more loyal to them? They’re now starting to build doctor’s offices into every Wal-mart. Why would they do that? It’s because they actually know that if you make $50k or less there’s a high chance you don’t have insurance and a high chance that you can’t afford to go see a doctor. So they’re going to use that to say, “Hey, if you shop with us, instead of paying $150 for a doctor, it’ll be cheaper.” And we’re beginning to see so many examples like this — where all these companies are saying actually healthcare is the biggest and most important thing that somebody thinks about every day. And if we want to make them loyal to our brand we need to offer something that’s in the healthcare space. So the conclusion of why we’re so excited is we’re seeing it happen in real life.

Wal-mart does that — so when Amazon starts buying an online pharmacy I get why they’re doing that. They want to connect with you on an emotional level which is when you’re not feeling well.

So no, I don’t think we’re particularly worried about them. You have to respect they’re large companies, they have a lot of money and things like that. But that’s always been the case. We think that some of these will likely be bought by those players, some of those will likely build their own businesses. At the end of the day it’s who’s going to get that user experience right.

Google of course would like us all to believe that because they’re the search engine of the world they have the first rights to become the health search engine of the world. I tend to think that’s not true. Actually if you look at the history of Google they were the search engine of the world until they forgot about Amazon. And nowadays if you want to buy anything physical where do you search first? You don’t search on Google anymore — you search on Amazon.

But the space is big and there’s a lot of great entrepreneurs and Europe has a lot to offer I think in terms of taking our history of socialized medicine and saying how can tech power that to make it a better experience?

So for entrepreneurs that are just thinking about this space — what should they be focusing on in terms of things to fix?

Right now the hottest areas are the three that I mentioned — because those are the ones that we’ve put money into, and we’ve put money in because we think those are the hottest areas. I just think it should be anything where you feel deep conviction or where you’ve had some basic experience with the issue and the problem.

I simply do not think that clinicians can make this change — in any sector. If you look at those companies I mentioned, none of the founders are clinicians in any way, shape or form. And that’s why they’re successful. Now I’m not suggesting that you don’t have to have doctors on your staff. For sure you do. At K Health, we have 30 doctors…. What we’re trying to do is change the experience. So the founder, for instance, was a founder of a company called Vroom that buys and sells cars online in the States. When he started he didn’t know a whole lot about healthcare but he said to himself: what I know is I don’t like the user experience. It’s a horrible user experience. I don’t like going to the doctor. I can change that.

So I would say if you’re heading into that space your first preoccupation is: how am I going to change the current user experience in a way that’s meaningful? Because that’s the only thing that people care about.

How is it possible that two guys could come up with Flo? They were just good product people.

For me, that’s the driving factor — if you’re going to go into this, go into it saying you’re there to break an experience and make it just a way better place to be.

On the size of the opportunity I have seen some suggestions that health is overheated in investment terms. But perhaps that’s more true in the U.S. than Europe?

Any time an investor community gets hold of a theme and makes it the theme of the month or the year — like fintech was for ten years — I think it becomes overfunded because everybody ploughs into that. I could say yes to that statement, sure. Lots of players, lots of actors. Money’s pouring in because people believe that the outcome could be big. But I don’t think it’s overheated. I think that we’ve only scratched the surface by doing certain things.

Some of the companies in the healthcare space that are either thinking of going public or are going public are companies that are pretty basic companies around connecting you with doctors online, etc. So I think that the innovation is really, really coming. As AI becomes real and we’re able to manage the data in an effective way… But again you’ve got to get the user experience right.

Flo in my experience — why it’s better than anything else — one is it’s just a great user experience. And then they have a forum on their app, and the forum is anonymized. And this is curious, right? I think they anonymized it without knowing what it would do. And what it did was it allowed women to talk about stuff that perhaps they were not comfortable talking about if people knew who they were. Number one issue? Abortion.

There’s a stigma out there around abortion and so by anonymizing the chat forum all of a sudden it created this opportunity for people to just exchange an experience. So that’s why I say the user experience for me is just at the core of that revolution that’s coming.

Why should it be such a horrific experience to be able to talk about that subject? Why should women be put in that position? So that’s why I think user experience is going to be so key to that.

So that’s why we’re excited. And of course the gamut is large. You think about the examples I gave — you can think of dietary examples, men’s health examples. When men turn 50 things start happening. Little things. But there’s at least 15 of those things that are 100% predictable… I just turned 50, and given there’s so much disinformation online I don’t know what’s true. So I think again there’s a fantastic opportunity for somebody to build companies around that theme — again, probably male and female separate.

Menopause would be another obvious one.

Exactly… You don’t know who you can talk to in many cases. So that’s another opportunity. And wow, there are so many things out there. And when I go online today I’m generally not sure if I can believe what I read unless it’s from a source that I can trust.

For 50-year-old men, erectile dysfunction is another taboo — a bit like the abortion taboo is for women. Men don’t even talk to their male friends about it… So if there was a place where you could go and learn about it I think there’s a big opportunity. I don’t think erectile dysfunction is a business, but I think how men age is one.

So it’s opportunities for communities around particular health/well-being issues.

Exactly. Because we’re looking for truths when we’re going through that experience ourselves.

The addressable market is massive. There are men turning 50 every year and they’re probably all pretty interested to find out what are the ten or 15 things that could go wrong for them. There’s a lot of opportunities. It’s so broad. The challenge is you have to think about building it for people who are 50. You’re not building it for an 18-year-old. So the user experience again probably has to be somewhat different. And healthcare goes all the way to the seniors. What are you looking for when you’re 75? So you see it stretches from 18 all the way up, across a broad-based spectrum of things. So it’s one of our major themes for the next five to ten years.

And so the idea of it being overheated in investment terms is a bit too abstract because there are specific areas that are very underinvested — like femtech. So it’s a case of spotting the particular bits of the healthcare opportunity that need more attention.

Yes. You’ve described it perfectly. In our more simpleton terms, we look at it and say if I look at the previous hot industry — fintech — you would end up with companies doing credit cards, companies doing bank accounts, companies doing lending, companies doing recovery — so many pieces of the value chain. In this case the value chain is humans.

We are even more complex than financial services have ever been, so I think the opportunities are even broader to break it down and build businesses that are going to satisfy certain sexes, maybe certain demographics, certain ages and all these kind of things that are out there. We are just so different.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions announced today, which it says is intended to make its conditions of use clearer for all users.

The update to its T&Cs is the first major revision since 2012, with Google saying it wanted to ensure the policy reflects its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weaselly way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future… 😬

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which raises the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings exists an opt-out.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

Google gobbling Fitbit is a major privacy risk, warns EU data protection advisor

The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny over how extensively it tracks people online and for antitrust concerns.

Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it can hold on its more than 28M active users, at ~$2.1 billion.
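
For a rough sense of scale, a back-of-the-envelope division (our arithmetic, not a figure from either company) puts a price on each user’s data:

$$\frac{\$2.1 \times 10^9}{28 \times 10^6\ \text{active users}} \approx \$75\ \text{per user}$$

The $7.35-per-share offer likewise implies roughly 286 million shares outstanding ($2.1B ÷ $7.35).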

Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.

Google, meanwhile, is in the process of dialling up its designs on the health space.

In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018, which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.

We’ve reached out to the Irish DPC — which is the lead privacy regulator for Google in the EU — to ask if it shares the EDPB’s concerns.

The latter’s statement goes on to reiterate how important it is for EU regulators to assess what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.

It also says it intends to remain “vigilant in this and similar cases in the future”.

The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.

“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.

We reached out to Google for comment but at the time of writing it had not provided a response nor responded to a question asking what commitments it will be making to Fitbit users regarding the privacy of their data.

Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.

However big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, e.g.: Facebook’s WhatsApp U-turn on data-linking.)

“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” the advisory body adds.

We’ve also reached out to the European Commission’s competition unit for a response to the EDPB’s statement.

Twitter adds a button so you can thread your shower thoughts

Hold that tweet — and add another one.

Twitter is adding a new feature for mobile users to make it easier to link dispersed ‘shower thoughts’ together — ‘and another thing’ stylee.

Per 9to5Mac, the feature — which Twitter tweeted about yesterday — is slowly rolling out to its iOS app. (At the time of writing we spotted it in Europe.)

The feature lets you pull down as you’re composing a tweet to surface a ‘continue thread’ option, which appends the new tweet to your previous one as a thread.

Tapping on a three-dots menu brings up an interface of older tweets which you can link the new tweet to — to continue (or kick off) a thread.

The feature looks intended to encourage more threads (from 140 characters to 280 to infinity tweetstorms and beyond!).

It may also be intended to address the broken thread phenomenon which can still plague the information network service. Especially where users are discussing complex and/or nuanced topics. (And Twitter has said it wants to foster healthy conversations on its platform so…)

The shortcut offers Twitter users an alternative to being organized enough to tweet a perfectly threaded series of thoughts in the first place (i.e. by using the ‘+’ option at the point of composing your tweetstorm).

It also does away with the need to manually search through your feed for the particular tweet you want to expand on and then hit reply to add another.

No, it’s still not an edit button. But, frankly, if you think Twitter is ever going to let you rewrite your existing tweets you should probably think longer before you hit ‘publish’ on your next one.

The ‘continue thread’ option could also be used as a de facto edit option — by letting users more easily append a correction to a preexisting tweet.

Whether the feature will (generally) work as intended — to boost threads and reduce broken threads and make Twitter a less confusing place for newbs — remains to be seen.

Happily it looks like Twitter has thought about (and closed off) one potential misuse risk. We tested to see what would happen if you try to insert a new tweet into the middle of an existing tweetstorm — which would have had the potential to generate more confusion (i.e. if the thread logic got altered by the addition).

But instead of embedding the new tweet in the middle of the old thread, Twitter added it at the bottom as a supplement. So you just start a new thread at the bottom of your old thread.

Good job Jack.

TechCrunch’s Romain Dillet contributed to this report 

Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.

But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.

The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.

The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.

This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.

In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)

The report is a lot less keen to trumpet the fact that the number of decisions issued on cross-border cases to date remains a big fat zero.

Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.

This discrepancy (cross-border cases: 21 – Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints, is clearly piling pressure on the regulator. (Other examples of big tech GDPR enforcement do exist elsewhere. Well, France’s CNIL has managed one.)

In its defence the DPC does have a horrifying case load, as illustrated by other stats it’s keen to spotlight — such as receiving a total of 7,215 complaints in 2019, a 75% increase on the total number (4,113) received in 2018. A full 6,904 of those were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).

There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total number (3,542) recorded in 2018.
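
For anyone checking the report’s percentages, the stated year-on-year increases do hold up:

$$\frac{7{,}215 - 4{,}113}{4{,}113} \approx 75\%, \qquad \frac{6{,}069 - 3{,}542}{3{,}542} \approx 71\%$$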

A full 457 cross-border processing complaints were received in Dublin via the GDPR’s One-Stop-Shop mechanism. (This is the device the Commission came up with for the ‘lead regulator’ approach that’s baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr other data protection agencies are passing Dublin A LOT of paperwork.)

The DPC necessarily has to go back and forth on cross-border cases, as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered-up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]

Meanwhile the agency that’s supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.

So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — Google’s parent Alphabet’s $46.1BN in full year 2019 revenue. So, er, do the math.
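
Doing that math, and assuming a rough 2019 average exchange rate of about $1.12 per euro (our assumption, not a figure from either organization):

$$\frac{\$46.1 \times 10^9}{€15.3 \times 10^6 \times 1.12\ \$/€} \approx 2{,}700$$

In other words, Alphabet’s annual revenue is on the order of 2,700 times the regulator’s entire budget.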

Nonetheless the pressure is firmly now on Ireland for major GDPR enforcements to flow.

One year of major enforcement inaction could be filed under ‘bedding in’; but two years in without any major decisions would not be a good look. (It has previously said the first decisions will come early this year — so seems to be hoping to have something to show for GDPR’s 2nd birthday.)

Some of the high profile complaints crying out for regulatory action include behaviorally targeted ads served via real-time bidding programmatic advertising (which the UK data watchdog has, for half a year now, admitted is rampantly unlawful); cookie consent banners (which remain a Swiss cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads in order to access the (‘free’) service. (Thing is, GDPR stipulates that consent as a legal basis must be freely given and can’t be bundled with other stuff, so…)

Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.

Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”

In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.

“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”

One notable date this year also falls when GDPR turns two: a Commission review of how the regulation is functioning is due in May.

That’s one deadline that may help to concentrate minds on issuing decisions.

Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).

There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

Facebook pushes EU for dilute and fuzzy Internet content rules

Facebook founder Mark Zuckerberg is in Europe this week — attending a security conference in Germany over the weekend where he spoke about the kind of regulation he’d like applied to his platform ahead of a slate of planned meetings with digital heavyweights at the European Commission.

“I do think that there should be regulation on harmful content,” said Zuckerberg during a Q&A session at the Munich Security Conference, per Reuters, making a pitch for bespoke regulation.

He went on to suggest “there’s a question about which framework you use”, telling delegates: “Right now there are two frameworks that I think people have for existing industries — there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.”

“I actually think where we should be is somewhere in between,” he added, making his plea for Internet platforms to be a special case.

At the conference he also said Facebook now employs 35,000 people to review content on its platform and implement security measures — including suspending around 1 million fake accounts per day, a stat he professed himself “proud” of.

The Facebook chief is due to meet with key commissioners covering the digital sphere this week, including competition chief and digital EVP Margrethe Vestager, internal market commissioner Thierry Breton and Věra Jourová, who is leading policymaking around online disinformation.

The timing of his trip is clearly linked to digital policymaking in Brussels — with the Commission due to set out its thinking around the regulation of artificial intelligence this week. (A leaked draft last month suggested policymakers are eyeing risk-based rules to wrap around AI.)

More widely, the Commission is wrestling with how to respond to a range of problematic online content — from terrorism to disinformation and election interference — which also puts Facebook’s 2BN+ social media empire squarely in regulators’ sights.

Another policymaking plan — a forthcoming Digital Services Act (DSA) — is slated to upgrade liability rules around Internet platforms.

The detail of the DSA has yet to be publicly laid out but any move to rethink platform liabilities could present a disruptive risk for a content distributing giant such as Facebook.

Going into meetings with key commissioners Zuckerberg made his preference for being considered a ‘special’ case clear — saying he wants his platform to be regulated not like the media businesses which his empire has financially disrupted; nor like a dumbpipe telco.

On the latter it’s clear — even to Facebook — that the days of Zuckerberg being able to trot out his erstwhile mantra that ‘we’re just a technology platform’, and wash his hands of tricky content stuff, are long gone.

Russia’s 2016 foray into digital campaigning in the US elections and sundry content horrors/scandals before and since have put paid to that — from nation-state backed fake news campaigns to livestreamed suicides and mass murder.

Facebook has been forced to increase its investment in content moderation. Meanwhile it announced a News section launch last year — saying it would hand-pick publishers’ content to show in a dedicated tab.

The ‘we’re just a platform’ line hasn’t been working for years. And EU policymakers are preparing to do something about that.

With regulation looming Facebook is now directing its lobbying energies onto trying to shape a policymaking debate — calling for what it dubs “the ‘right’ regulation”.

Here the Facebook chief looks to be applying a similar playbook to Google CEO Sundar Pichai — who recently tripped to Brussels to push for AI rules so dilute they’d act as a tech enabler.

In a blog post published today Facebook pulls its latest policy lever: Putting out a white paper which poses a series of questions intended to frame the debate at a key moment of public discussion around digital policymaking.

Top of this list is a push to foreground free speech, with Facebook questioning “how can content regulation best achieve the goal of reducing harmful speech while preserving free expression?” — before suggesting more of the same: (free, to its business) user-generated policing of its platform.

Another suggestion it sets out — one which chimes with existing Facebook moves to steer regulation in a direction it’s comfortable with — is for an appeals channel to be created, allowing users to contest content removal or non-removal. Which of course entirely aligns with a content decision review body Facebook is in the process of setting up — but which is not in fact independent of Facebook.

Facebook is also lobbying in the white paper to be able to throw platform levers to meet a threshold of ‘acceptable vileness’ — i.e. it wants a proportion of law-violating content to be sanctioned by regulators — with the tech giant suggesting: “Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.”

It’s also pushing for the fuzziest and most dilute definition of “harmful content” possible. On this Facebook argues that existing (national) speech laws — such as, presumably, Germany’s Network Enforcement Act (aka the NetzDG law) which already covers online hate speech in that market — should not apply to Internet content platforms, as it claims moderating this type of content is “fundamentally different”.

“Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context,” it writes — lobbying for maximum possible leeway to be baked into the coming rules.

“The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms,” Facebook’s VP of content policy, Monika Bickert, also writes in the blog.

“If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation,” she adds, ticking off more of the tech giant’s usual talking points at the point policymakers start discussing putting hard limits on its ad business.

Facebook Dating launch blocked in Europe after it fails to show privacy workings

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advance warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.

Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.

In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.

“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”

Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.

It went on to test launch the product in Colombia a few months later. And since then it’s been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.

At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.

Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)

The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.

Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.

“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.

Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.

The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”

“Contained in the documentation we gathered on Monday was a DPIA,” he added.

This raises the question of why Facebook didn’t send the DPIA to the DPC on February 3 — unless of course this document did not actually exist on that date…

We’ve reached out to Facebook for comment — and to ask when it carried out the DPIA.

We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws. So a delay may mean many things.

Under GDPR there’s a requirement for data controllers to bake privacy by design and default into products which are handling people’s information. And a dating product clearly is one.

A DPIA — a process whereby planned processing of personal data is assessed to consider the impact on the rights and freedoms of individuals — is, meanwhile, a requirement under the GDPR when, for example, individual profiling is taking place or there’s processing of sensitive data on a large scale.

Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.