Microsoft’s $7.5BN GitHub buy gets green-lit by EU regulators

Microsoft’s planned acquisition of GitHub, the Git-based code sharing and collaboration service, has been given an unconditional green light by European Union regulators.

The software giant announced its intention to bag GitHub back in June, saying it would shell out $7.5 billion in stock to do so. At the time it also pledged: “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries.”

The European Commission approved the plan today, saying its assessment had concluded there would be no adverse impact on competition in the relevant markets, owing to the combined entity continuing to face “significant competition”.

In particular, it said it looked at whether Microsoft would have the ability and incentive to further integrate its own devops tools and cloud services with GitHub while limiting integration with third party tools and services.

The Commission decided Microsoft would have no incentive to undermine GitHub’s openness — saying any attempt to do so would reduce its value for developers, who the Commission judged willing and able to switch to other platforms.

Microsoft has previously said it expects the acquisition to close before the end of the year.

Banksy’s rigged art frame was supposed to shred the whole thing

In the connected future will anyone truly own any thing? Banksy’s art world shocker of a performance piece earlier this month, when a canvas of his went under the hammer at Sotheby’s in London, suggests not.

The moment the Girl with Balloon canvas sold — for a cool ~$1.1M (£860,000) — it proceeded to self-destruct, via a shredder built into the frame, leaving a roomful of designer glasses paired with a lot of shock and awe, before facial muscles twisted afresh as new calculations kicked in.

As we reported at the time, the anonymous artist had spent years planning this particular prank. Yet the stunt immediately inflated the value of the canvas — some suggested by as much as 50% — despite the work itself being half shredded, with just a heart-shaped balloon left in clear view.

The damaged canvas even instantly got a new title: Love Is in the Bin.

Thereby undermining what might otherwise be interpreted as a grand Banksy gesture critiquing the acquisitive, money-loving bent of the art world. After all, street art is his big thing.

However, it turns out that the shredder malfunctioned, and had in fact been intended to send the whole canvas into the bin the second after it sold.

Or, at least, so the prankster says — via a ‘director’s cut’ video posted to his YouTube channel yesterday (and given the title: ‘Shred the love’, which is presumably what he wanted the resulting frame-sans-canvas to be called).

“In rehearsals it worked every time…” runs a caption towards the end of the video, before footage of a complete shredding is shown…

The video also appears to show how the shredding was triggered.

After the hammer goes down the video cuts to a close-up shot of a man’s hands pressing a button on a box with a blinking red LED — presumably sending a wireless signal to shreddy to get to work…

The suggestion, also from the video (which appears to show close up shots of some of the reactions of people in the room watching the shredding taking place in real time), is that the man — possibly Banksy himself — attended the auction in person and waited for the exact moment to manually trigger the self-destruct mechanism.

There are certainly lots of low power, short range radio technologies that could have been used for such a trigger scenario. Although the artwork itself was apparently gifted to its previous owner by Banksy all the way back in 2006. So the built-in shredder, batteries and radio seemingly had to sit waiting for their one-time public use for 12 years. Unless, well, Banksy snuck into the owner’s house to swap out batteries periodically.

Whatever the exact workings of the mechanism underpinning the stunt, the act is of course the point.

It’s almost as if Banksy is trying to warn us that technology is eroding ownership, concentrating power and shifting agents of control.

Applied gets $2M to make hiring fairer — using algorithms, not AI

London-based startup Applied has bagged £1.5M (~$2M) in seed funding for a fresh, diversity-sensitive approach to recruitment that deconstructs and reworks the traditional CV-bound process, drawing on behavioural science to level the playing field and help employers fill vacancies with skilled candidates they might otherwise have overlooked.

Fairer hiring is the pitch. “If you’re hiring for a product lead, for example, it’s true that loads and loads of product leads are straight, white men with beards. How do we get people to see well what is it actually that this job entails?” founder and CEO Kate Glazebrook tells us. “It might actually be the case that if I don’t know any of the demographic background I discover somebody who I would have otherwise overlooked.”

Applied launched its software as a service recruitment platform in 2016, and Glazebrook says so far it’s been used by more than 55 employers to recruit candidates for more than 2,000 jobs. While more than 50,000 candidates have applied via Applied to date.

The employers themselves are also a diverse bunch, not just the usual suspects from the charitable sector, with both public and private sector organizations, small and large, and from a range of industries, from book publishing to construction, signed up to Applied’s approach. “We’ve been pleased to see it’s not just the sort of thing that the kind of employers you would expect to care about care about,” says Glazebrook.

Applied’s own investor Blackbird Ventures, which is leading the seed round, is another customer — and ended up turning one investment associate vacancy, advertised via the platform, into two roles — hiring both an ethnic minority woman and a man with a startup background as a result of “not focusing on did they have the traditional profile we were expecting”, says Glazebrook.

“They discovered these people were fantastic and had the skills — just a really different set of background characteristics than they were expecting,” she adds.

Other investors in the seed include Skip Capital, Angel Academe, Giant Leap and Impact Generation Partners, plus some unnamed angels. Prior investors include the entity Applied was originally spun out of (Behavioural Insights Team, a “social purpose company” jointly owned by the UK government, innovation charity Nesta, and its own employees), as well as gender advocate and businesswoman Carol Schwartz, and Wharton Professor Adam Grant.

Applied’s approach to recruitment employs plenty of algorithms — including for scoring candidates (its process involves chunking up applications and also getting candidates to answer questions that reflect “what a day in the job actually looks like”), and also anonymizing applications to further strip away bias risks, presenting the numbered candidates in a random order too.
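Applied hasn’t published its scoring code, but the mechanics described above — stripping identifying details, shuffling candidate order and scoring chunked answers — can be sketched in a few lines of Python. All field names and the scoring function here are hypothetical, purely to illustrate the shape of a blind, randomized review:

```python
import random

def anonymize(application):
    """Strip identifying fields, keeping only what reviewers should see.

    (Hypothetical schema: an opaque candidate number plus work-sample answers.)
    """
    return {
        "candidate_id": application["candidate_id"],  # a number, not a name
        "answers": application["answers"],
    }

def blind_review(applications, score_answer, seed=None):
    """Score each answer independently, presenting candidates in random order."""
    rng = random.Random(seed)
    blinded = [anonymize(a) for a in applications]
    rng.shuffle(blinded)  # random order, so list position carries no signal
    totals = {}
    for app in blinded:
        totals[app["candidate_id"]] = sum(score_answer(ans) for ans in app["answers"])
    # Rank by total score, highest first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

A reviewer working through such a flow sees only numbered candidates and their chunked answers, never a CV.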

But it does not involve any AI-based matching. If you want to make hiring fairer, AI doesn’t look like a great fit. Last week, for example, Reuters reported how in 2014 ecommerce giant Amazon built and then later scrapped a machine learning based recruitment tool, after it failed to rate candidates in a gender-neutral way — apparently reflecting wider industry biases.

“We’re really clear that we don’t do AI,” says Glazebrook. “We don’t fall into the traps that [companies like] Amazon did. Because it’s not that we’re parsing existing data-sets and saying ‘this is what you hired for last time so we’ll match candidates to that’. That’s exactly where you get this problem of replication of bias. So what we’ve done instead is say ‘actually what we should do is change what you see and how you see it so that you’re only focusing on the things that really matter’.

“So that levels the playing field for all candidates. All candidates are assessed on the basis of their skill, not whether or not they fit the historic profile of people you’ve previously hired. We avoid a lot of those pitfalls because we’re not doing AI-based or algorithmic hiring — we’re doing algorithms that reshape the information you see, not the prediction that you have to arrive at.”

In practice this means Applied must and does take over the entire recruitment process, including writing the job spec itself — to remove things like gendered language which could introduce bias into the process — and slicing and dicing the application process to be able to score and compare candidates and fill in any missing bits of data via role-specific skills tests.

Its approach can be thought of as entirely deconstructing the CV — to not just remove extraneous details and bits of information which can bias the process (such as names, education institutions attended, hobbies etc) but also to actively harvest data on the skills being sought, with employers using the platform to set tests to measure capacities and capabilities they’re after.

“We manage the hiring process right from the design of an inclusive job description, right through to the point of making a hiring decision and all of the selection that happens beneath that,” says Glazebrook. “So we use over 30 behavioural science nudges throughout the process to try and improve conversion and inclusivity — so that includes everything from removal of gendered language in jobs descriptions to anonymization of applications to testing candidates on job preview based assessments, rather than based on their CVs.”

“We also help people to run more evidence-based structured interviews and then make the hiring decision,” she adds. “From a behavioral science standpoint I guess our USP is we’ve redesigned the shortlisting process.”

The platform also provides jobseekers with greater visibility into the assessment process by providing them with feedback — “so candidates get to see where their strengths and weaknesses were” — so it’s not simply creating a new recruitment blackbox process that keeps people in the dark about the assessments being made about them. Which is important from an algorithmic accountability point of view, even without any AI involved. Because vanilla algorithms can still sum up to dumb decisions.

From the outside looking in, Applied’s approach might sound highly manual and high maintenance, given how necessarily involved the platform is in each and every hire, but Glazebrook says in fact it’s “all been baked into the tech” — the platform takes the strain of the restructuring by automating the hand-holding involved in debiasing job ads and judgements, letting employers step themselves through a reconstructed recruitment process.

“From the job description design, for example, there are eight different characteristics that are automatically picked out, so it’s all self-serve stuff,” explains Glazebrook, noting that the platform will do things like automatically flag words to watch out for in job descriptions or the length of the job ad itself.
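Glazebrook doesn’t detail how the flagging works under the hood, but a toy version of this kind of job-ad linting, spotting gender-coded words and overlong ads, might look like the following (the word lists and length threshold are illustrative assumptions, not Applied’s actual rules):

```python
# Illustrative gender-coded word lists, loosely inspired by published research
# on gendered wording in job ads; Applied's real lists are not public.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "interpersonal"}
MAX_WORDS = 600  # hypothetical length threshold for a job ad

def lint_job_ad(text):
    """Return a list of warnings about gender-coded words and ad length."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    warnings = []
    for w in sorted(set(words)):
        if w in MASCULINE_CODED:
            warnings.append(f"masculine-coded word: {w!r}")
        elif w in FEMININE_CODED:
            warnings.append(f"feminine-coded word: {w!r}")
    if len(words) > MAX_WORDS:
        warnings.append(f"ad is {len(words)} words; aim for under {MAX_WORDS}")
    return warnings
```

Running `lint_job_ad("We need a competitive rockstar engineer.")` would flag both “competitive” and “rockstar”; a neutral ad passes clean.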

“All with that totally automated. And client self-serve as well, so they use a library of questions — saying I’m looking for this particular skill-set and we can say well if you look through the library we’ll find you some questions which have worked well for testing that skill set before.”

“They do all of the assessment themselves, through the platform, so it’s basically like saying rather than having your recruiting team sifting through paper forms of CVs, we have them online scoring candidates through this redesigned process,” she adds.

Employers themselves need to commit to a new way of doing things, of course. Though Applied’s claim is that ultimately a fairer approach also saves time, as well as delivering great hires.

“In many ways, one of the things that we’ve discovered through many customers is that it’s actually saved them loads of time because the shortlisting process is devised in a way that it previously hasn’t been and more importantly they have data and reporting that they’ve never previously had,” she says. “So they now know, through the platform, which of the seven places that they placed the job actually found them the highest quality candidates and also found people who were from more diverse backgrounds because we could automatically pull the data.”

Applied ran its own comparative study of its reshaped process vs a traditional sifting of CVs and Glazebrook says it discovered “statistically significant differences” in the resulting candidate choices — claiming that over half of the pool of 700+ candidates “wouldn’t have got the job if we’d been looking at their CVs”.

Looking at the differences between the choices made in the study, the team also found statistically significant differences “particularly in educational and economic background” — “so we were diversifying the people we were hiring by those metrics”.
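The standard way to back up a “statistically significant difference” claim like this is a two-proportion test: compare the share of candidates selected under each process and ask whether the gap could plausibly be chance. A minimal, stdlib-only sketch (the numbers in the usage example below are made up for illustration, not Applied’s data):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test via the normal approximation.

    Returns (z, two_sided_p): how many standard errors apart the two
    selection rates are, and the probability of a gap at least that
    large arising by chance.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    two_sided_p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, two_sided_p
```

If, say, one process selected 280 of 700 candidates from a given background versus 200 of 700 under a CV sift, `two_proportion_z(280, 700, 200, 700)` yields a p-value far below the conventional 0.05 threshold.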

“We also saw directional evidence around improvements in diversity on disability status and ethnicity,” she adds. “And some interesting stuff around gender as well.”

Applied wants to go further on the proof front, and Glazebrook says it is now automatically collecting performance data while candidates are on the job — “so that we can do an even better job of proving here is a person that you hired and you did a really good job of identifying the skill-sets that they are proving they have when they’re on the job”.

She says it will be feeding this intel back into the platform — “to build a better feedback loop the next time you’re looking to hire that particular role”.

“At the moment, what is astonishing, is that most HR departments 1) have terrible data anyway to answer these important questions, and 2) to the extent they have them they don’t pair those data sets in a way that allows them to prove — so they don’t know ‘did we hire them because of X or Y’ and ‘did that help us to actually replicate what was working well and jettison what wasn’t’,” she adds.

The seed funding will go on further developing these sorts of data science predictions, and also on updates to Applied’s gendered language tool and inclusive job description tool — as well as on sales and marketing to generally grow the business.

Commenting on the funding in a statement, Nick Crocker, general partner at Blackbird Ventures said: “Our mission is to find the most ambitious founders, and support them through every stage of their company journey. Kate and the team blew us away with the depth of their insight, the thoughtfulness of their product, and a mission that we’re obsessed with.”

In another supporting statement, Owain Service, CEO of BI Ventures, added: “Applied uses the latest behavioural science research to help companies find the best talent. We ourselves have recruited over 130 people through the platform. This investment represents an exciting next step to supporting more organisations to remove bias from their recruitment processes, in exactly the same way that we do.”

Google tweaks Android licensing terms in Europe to allow Google app unbundling — for a fee

Google has announced changes to the licensing model for its Android mobile operating system in Europe, including introducing a fee for licensing some of its own brand apps, saying it’s doing so to comply with a major European antitrust ruling this summer.

In July the region’s antitrust regulators hit Google with a record-breaking $5BN fine for violations pertaining to Android, finding the company had abused the dominance of the platform by requiring manufacturers to pre-install other Google apps in order to license its popular Play app store.

Regulators also found Google had made payments to manufacturers and mobile network operators in exchange for exclusively pre-installing Google Search on their devices, and used Play store licensing to prevent manufacturers from selling devices based on Android forks.

Google disputes the Commission’s findings, and last week filed its appeal — a legal process that could take years. But in the meantime it’s making changes to how it licenses Android in Europe to avoid the risk of additional penalties being heaped on top of the antitrust fine.

Hiroshi Lockheimer, Google’s senior vice president of platforms & ecosystems, revealed the new licensing options in a blog post published today.

Under updated “compatibility agreements”, he writes that mobile device makers will be able to build and sell Android devices intended for the European Economic Area (EEA) both with and without Google mobile apps preloaded — something Google’s same ‘compatibility’ contracts restricted them from doing before, when it was strictly either/or (either you made Android forks, or you made Android devices with Google apps — not both).

“Going forward, Android partners wishing to distribute Google apps may also build non-compatible, or forked, smartphones and tablets for the European Economic Area (EEA),” confirms Lockheimer.

However the company is also changing how it licenses the full Android bundle — which previously required OEMs to load devices with the Google mobile application suite, Google Search and the Chrome browser in order to be able to offer the popular Play Store — by introducing fees for OEMs wanting to pre-load a subset of those same apps under “a new paid licensing agreement for smartphones and tablets shipped into the EEA”.

Though Google stresses there will be no charge for using the Android platform itself. (So a pure fork without any Google services preloaded still wouldn’t require a fee.)

Google also appears to be splitting out Google Search and Chrome from the rest of the Google apps in its mobile suite (which traditionally means stuff like YouTube, the Play Store, Gmail, Google Maps, although Lockheimer’s blog post does not make it clear which exact apps he’s talking about) — letting OEMs selectively unbundle some Google apps, albeit potentially for a fee, depending on the apps in question.

“[D]evice manufacturers will be able to license the Google mobile application suite separately from the Google Search App or the Chrome browser,” is what Lockheimer unilluminatingly writes.

Perhaps Google wants future unbundled Android forks to still be able to have Google Search or Chrome, even if they don’t have the Play store, but it’s really not at all clear which configurations of Google apps will be permitted under the new licensing terms, and which won’t.

“Since the pre-installation of Google Search and Chrome together with our other apps helped us fund the development and free distribution of Android, we will introduce a new paid licensing agreement for smartphones and tablets shipped into the EEA. Android will remain free and open source,” Lockheimer adds, without specifying what the fees will be either. 

“We’ll also offer new commercial agreements to partners for the non-exclusive pre-installation and placement of Google Search and Chrome. As before, competing apps may be pre-installed alongside ours,” he continues to complete his trio of poorly explained licensing changes.

We’ve asked Google to clarify the various permitted and not permitted app configurations, as well as which apps will require a fee (and which won’t), and how much the fees will be, and will update this post with any response.

The devil in all those details should become clear soon though, as Google says the new licensing options will come into effect on October 29 for all new (Android based) smartphones and tablets launched in the EEA.

TravelPerk grabs $44M to take its pain-free SaaS for business travel global

Only six months ago Barcelona-based TravelPerk bagged a $21M Series B, off the back of strong momentum for a software as a service platform designed to take a Slack-like chunk out of the administrative tedium of arranging and expensing work trips.

Today the founders’ smiles are firmly back in place: TravelPerk has announced a $44M Series C to keep stoking growth that’s seen it expand from around 20 customers two years ago to approaching 1,500 now. The business itself was only founded at the start of 2015.

Investors in the new round include Sweden’s Kinnevik, Russian billionaire and DST Global founder Yuri Milner, and Tom Stafford, also of DST. Prior investors include the likes of Target Global, Felix Capital, Spark Capital, Sunstone, LocalGlobe and Amplo.

Commenting on the Series C in a statement, Kinnevik’s Chris Bischoff, said: “We are excited to invest in TravelPerk, a company that fits perfectly into our investment thesis of using technology to offer customers more and much better choice. Booking corporate travel is unnecessarily time-consuming, expensive and burdensome compared to leisure travel. Avi and team have capitalised on this opportunity to build the leading European challenger by focusing on a product-led solution, and we look forward to supporting their future growth.”

TravelPerk’s total funding to date now stands at almost $75M. It’s not disclosing the valuation that its latest clutch of investors are stamping on its business but, with a bit of a chuckle, co-founder and CEO Avi Meir dubs it “very high”.

Gunning for growth — to West and East

TravelPerk contends that a $1.3tr market is ripe for disruption because legacy business travel booking platforms are both lacking in options and roundly hated for being slow and horrible to use. (Hi Concur!)

Helping businesses save time and money using a slick, consumer-style trip booking platform that both packs in options and makes business travellers feel good about the booking process (i.e. rather than valueless cogs in a soul-destroying corporate ROI machine) is the general idea — an idea that’s seemingly catching on fast.

And not just with the usual suspect, early adopter, startup dog food gobblers but pushing into the smaller end of the enterprise market too.

“We kind of stumbled on the realization that our platform works for bigger companies than we thought initially,” says Meir. “So the users used to be small, fast-growing tech companies, like GetYourGuide, Outfittery, TypeForm etc… They’re early adopters, they’re tech companies, they have no fear of trying out tech — even for such a mission critical aspect of their business… But then we got pulled into bigger companies. We recently signed FarFetch for example.”

Other smaller sized enterprises that have signed up include the likes of Adyen, B&W, Uber and Aesop.

Companies small and big are, seemingly, united in their hatred of legacy travel booking platforms. And feeling encouraged to check out TravelPerk’s alternative thanks to the SaaS being free to use and free from the usual contract lock-ins.

TravelPerk’s freemium business model is based on taking affiliate commissions on bookings. While, down the road, it also has its eye on generating a data-based revenue stream via paid-tier trip analytics.

Currently it reports booking revenues growing at 700% year on year. And Meir previously told us it’s on course to do $100M GMV this year — which he confirms continues to be the case.

It also says it’s on track to complete bookings for one million travellers by next year. And claims to be the fastest growing software as a service company in Europe, a region which remains its core market focus — though the new funding will be put towards market expansion.

And there is at least the possibility, according to Meir, that TravelPerk could actively expand outside Europe within the next 12 months.

“We definitely are looking at expansion outside of Europe as well. I don’t know yet if it’s going to be first US — West or East — because there are opportunities in both directions,” he tells TechCrunch. “And we have customers; one of our largest customers is in Singapore. And we do have a growing amount of customers out of the US.”

Doubling down on growth within Europe is certainly on the slate, though, with a chunk of the Series C going to establish a number of new offices across the region.

Having more local bases to better serve customers is the idea. Meir notes that, perhaps unusually for a startup, TravelPerk has not outsourced customer support — but kept customer service in house to try to maintain quality. (Which, in Europe, means having staff who can speak the local language.)

He also quips about the need for a travel business to serve up “human intelligence” — i.e. by using tech tools to slickly connect on-the-road customers with actual people who can quickly and smartly grapple with and solve problems; vs an automated AI response which is — let’s face it — probably the last thing any time-strapped business traveller wants when trying to get orientated fast and/or solve a snafu away from home.

“I wouldn’t use [human intelligence] for everything but definitely if people are on the road, and they need assistance, and they need to make changes, and you need to understand what they said…” argues Meir, going on to say ‘HI’ has been his response when investors asked why TravelPerk’s pitch deck doesn’t include the almost-impossible-to-avoid tech buzzword: “AI”.

“I think we are probably the only startup in the world right now that doesn’t have AI in the pitch deck somewhere,” he adds. “One of the investors asked about it and I said ‘well we have HI; it’s better’… We have human intelligence. Just people, and they’re smart.”

Also on the cards (it therefore follows): More hiring (the team is at ~150 now and Meir says he expects it to push close to 300 within 18 months); as well as continued investment on the product front, including in the mobile app which was a late addition, only arriving this year.

The TravelPerk mobile app offers handy stuff like a one-stop travel itinerary, flight updates and a chat channel for support. But the desktop web app and core platform were the team’s first focus, with Meir arguing the desktop platform is the natural place for businesses to book trips.

This makes its mobile app more a companion piece — to “how you travel” — housing helpful additions for business travellers, as nice-to-have extras. “That’s what our app does really well,” he adds. “So we’re unusually contrarian and didn’t have a mobile app until this year… It was a pretty crazy bet but we really wanted to have a great web app experience.”

Much of TravelPerk’s early energy has clearly gone into delivering on the core product via nailing down the necessary partnerships and integrations to be able to offer such a large inventory — and thus deliver expanded utility vs legacy rivals.

As well as offering a clean-looking, consumer-style interface intended to do for business travel booking what Slack has done for work chat, the platform boasts a larger inventory than traditional players in the space, according to Meir — by plugging into major consumer providers such as Booking.com and Expedia.

The inventory also includes Airbnb accommodation (not just traditional hotels). While other partners on the flight side include Kayak and Skyscanner.

“We have not the largest bookable inventory in the world,” he claims. “We’re way larger than old school competitors… We went through this licensing process which is almost as difficult as getting a banking license… which gives us the right to sell you the same product as travel agencies… Nobody in the world can sell you Kayak’s flights directly from their platform — so we have a way to do that.”

TravelPerk also recently plugged trains into its directly bookable options. This mode of transport is an important component of the European business travel market where rail infrastructure is dense, highly developed and often very high speed. (Which means it can be both the most convenient and environmentally friendly travel option to use.)

“Trains are pretty complex technically so we found a great partner,” notes Meir on that, listing major train companies including in Germany, Spain and Italy as among those it’s now able to offer direct bookings for via its platform.

On the product side, the team is also working on integrating travel and expenses management into the platform — to serve its growing numbers of (small) enterprise customers who need more than just a slick trip booking tool.

Meir says getting pulled to these bigger accounts is steering its European expansion — with part of the Series C going to fund a clutch of new offices around the region near where some of its bigger customers are based. Beginning in London, with Berlin, Amsterdam and Paris slated to follow soon.

Picking investors for the long haul

What does the team attribute TravelPerk’s momentum to generally? It comes back to the pain, says Meir. Business travellers are being forced to “tolerate” horrible legacy systems. “So I think the pain-point is so visible and so clear [it sells itself],” he argues, also pointing out this is true for investors (which can’t have hurt TravelPerk’s funding pitch).

“In general we just built a great product and a great service, and we focused on this consumer angle — which is something that really connects well with what people want in this day and age,” he adds. “People want to use something that feels like Slack.”

For the Series C, Meir says TravelPerk was looking for investors who would be comfortable supporting the business for the long haul, rather than pushing for a quick sale. Hence the team now articulating the possibility of a future IPO.

And while he says TravelPerk hadn’t known much about Swedish investment firm Kinnevik prior to the Series C, Meir says he came away impressed with its focus on “global growth and ambition”, and the “deep pockets and the patience that comes with it”.

“We really aligned on this should be a global play, rather than a European play,” he adds. “We really connected on this should be a very big, independent business that goes to the path of IPO rather than a quick exit to one of the big players.

“So with them we buy patience, and also the condition, when offers do come onto the table, to say no to them.”

Given it’s been just a short six months between the Series B and C, is TravelPerk planning to raise again in the next 12 months?

“We’re never fundraising and we’re always fundraising I guess,” Meir responds on that. “We don’t need to fundraise for the next three years or so, so it will not come out of need, hopefully, unless something really unusual is happening, but it will come more out of opportunity and if it presented a way to grow even faster.

“I think the key here is how fast we grow. And how good a product we certify — and if we have an opportunity to make it even faster or better then we’ll go for it. But it’s not something that we’re actively doing it… So to all investors reading this piece don’t call me!” he adds, most likely inviting a tsunami of fresh investor pitches.

Discussing the challenges of building a business that’s so fast growing it’s also changing incredibly rapidly, Meir says nothing is how he imagined it would be — including fondly thinking it would be easier the bigger and better resourced the business got. But he says there’s an upside too.

“The challenges are just much, much bigger on this scale,” he says. “Numbers are bigger, you have more people around the table… I would say it’s very, very difficult and challenging but also extremely fun.

“So now when we release a feature it goes immediately into the hands of hundreds of thousands of travellers that use it every month. And when you fundraise… it’s much more fun because you have more leverage.

“It’s also fun because — and I don’t want to position myself as the cynical guy — the reality is that most startups don’t cure cancer, right. So we’re not saving the world… but in our little niche of business travel, which is still like $1.3tr per year, we are definitely making a dent.

“So, yes, it’s more challenging and difficult as you grow, and the problems become much bigger, but you can also deliver the feedback to more people.”

A fictional Facebook Portal videochat with Mark Zuckerberg

TechCrunch: Hey Portal, dial Mark

Portal: Do you mean Mark Zuckerberg?

TC: Yes

Portal: Dialling Mark…


TC: Hi Mark! Nice choice of grey t-shirt.

MZ: Uh, new phone who dis? — oh, hi, er, TechCrunch…

TC: Thanks for agreeing to this entirely fictional interview, Mark!

MZ: Sure — anytime. But you don’t mind if I tape over the camera do you? You see I’m a bit concerned about my privacy here at, like, home

TC: We feel you, go ahead.

As you can see, we already took the precaution of wearing this large rubber face mask of, well, of yourself Mark. And covering the contents of our bedroom with these paint-splattered decorator sheets.

MZ: Yeah, I saw that. It’s a bit creepy tbh

TC: Go on and get all taped up. We’ll wait.

[sound of Mark calling Priscilla to bring the tape dispenser]

[Portal’s camera jumps out to assimilate Priscilla Chan into the domestic scene, showing a generous vista of the Zuckerbergs’ living room, complete with kids playing in the corner. Priscilla, clad in an oversized dressing gown and with her hair wrapped in a big fluffy towel, can be seen gesticulating at the camera. She is also coughing]

Priscilla to Mark: I already told you — there’s a camera cover built into Portal. You don’t need to use tape now

MZ: Oh, right, right!

Okay, going dark! Wow, that feels better already

[sound of knuckles cracking]

TC: So, Mark, let’s talk hardware! What’s your favorite Amazon Echo?

MZ: Uh, well…

TC: We’d guess one with all the bells & whistles, right? There’s definitely something more than a little Echo Show-y about Portal

MZ: Sure, I mean. We think Alexa is a great product

TC: Mhmm. Do you remember when digital photo frames first came out? They were this shiny new thing about, like, a decade ago? One of those gadgets your parents buy you around Thanksgiving, which ends up stuck in a drawer forever?

MZ: Yeah! I think someone gave me one once with a photo of me playing beer pong on it. We had it hanging in the downstairs rest room for the longest time. But then we got an Android tablet with a Wi-Fi connection for in there, so…

TC: Now here we are a decade or so later with Portal advancing the vision of what digital photo frames can be!

MZ: Yeah! I mean, you don’t even have to pick the pictures! It’s pretty awesome. This one here — oh, right you can’t see me but let me describe it for you — this one here is of a Halloween party I went to one year. Someone was dressed as SpongeBob. I think they might have been called Bob, actually… And this is, like, some other Facebook friends doing some other fun stuff. Pretty amazing.

You can also look at album art

TC: But not YouTube, right? Anyway, let’s talk about video calling

MZ: It’s an amazing technology

TC: It sure is. Skype, FaceTime… live filters, effects, animoji…

MZ: We’re building on a truly great technology foundation. Portal autozooming means you don’t even have to think about watching the person you’re talking to! You can just be doing stuff in your room and the camera will always be adjusting to capture everything you’re doing! Pretty amazing.

TC: Doing what Mark? Actually, let’s not go there

MZ: Portal will even suggest people for you to call! We think this will be a huge help for our mission to promote Being Well — uh, I mean Time Well Spent because our expert machine learning algorithms will be nudging you to talk to people you should really be talking to

TC: Like my therapist?

MZ: Uh, well, it depends. But our AI can suggest personalized meaningful interactions by suggesting Messenger contacts to call up

TC: It’s not going to suggest I videochat my ex is it?

MZ: Haha! Hopefully not. But maybe your mom? Or your grandma?

TC: Sounds incredibly useful. Well, assuming they didn’t already #deletefacebook.

But let’s talk about kids

MZ: Kids! Yeah we love them. Portal is going to be amazing for kids

TC: You have this storybook thing going on, right? Absent grandparents using Portal to read kids bedtime stories and what not…

MZ: Right! We think kids are going to love it. And grandparents! We’ve got these animal masks if you get bored of looking at your actual family members. It’s good, clean, innovative fun for all the family!

TC: Yeah, although, I mean, nothing beats reading from an actual kid’s book, right?

MZ: Well…

TC: If you do want to involve a device in your kid’s bedtime there are quite a lot of digital ebook apps for that already. Apple has a whole iBooks library of the things with read-aloud narration, for example.

And, maybe you missed this — but quite a few years ago there was a big bunch of indie apps and services all having a good go at selling the same sort of idea of ‘interactive remote reading experiences’ for families with kids. Though not many appear to have gone the distance. Which does sort of suggest there isn’t a huge unmet need for extra stuff beyond, well, actual children’s books and videochat apps like Skype and FaceTime.

Also, I mean, children’s story reading apps and interactive kids’ e-books are pretty much as old as the hills in Internet terms at this point. So, er, you’re not really moving fast and breaking things are you!?

MZ: Actually we’re more focused on stable infrastructure these days

TC: And hardware too, apparently. Which is a pretty radical departure for Facebook. All those years everyone thought you were going to do a Facebook phone but you left it to Amazon to flop into that pit… Who needs hardware when you can put apps and tracker pixels on everything, right?!

But here you are now, kinda working with Amazon for Portal — while also competing with Alexa hardware by selling your own countertop device… Aren’t you at all nervous about screwing this up? Hardware IS hard. And homes have curtains for a reason…

MZ: We’re definitely confident kids aren’t going to try swivelling around on the Portal Plus like it’s a climbing frame, if that’s what you mean. Well, hopefully not anyway

TC: But about you, Facebook Inc, putting an all-seeing-eye-cum-Internet-connected-listening-post into people’s living rooms and kids’ bedrooms…

MZ: What about it?

[MZ speaking to someone else in the room] Does the speaker have an off switch? How do I mute this thing?

TC: Hello? Mark?

[silence]

[sound comes back on briefly and a snatch of conversation can be heard between Mark and Priscilla about the need to buy more diapers. Mark is then heard shouting across the room that his Shake Shack order of a triple cheeseburger and fries plus butterscotch malt is late again]

[silence] 

[crackle and a congested throat clearing sound. A child is heard in the background asking for Legos]

MZ: Not now okay honey. Okay hon-, uh, hello — what were you saying?

TC: Will you be putting a Portal in Max’s room?

MZ: Haha! She’d probably prefer Legos

TC: August?

MZ: She’s only just turned one

TC: Okay, let’s try a more direct question. Do you at all think that you, Facebook Inc, might have a problem selling a $200+ piece of Internet-connected hardware when your company is known for creeping on people to sell ads?

MZ: Oh no, no! — we’ve, like, totally thought of that!

Let me read you what marketing came up with. Hang on, it’s around here somewhere…

[sound of paper rustling]

Here we go [reading]:

Facebook doesn’t listen to, view, or keep the contents of your Portal video calls. Your Portal conversations stay between you and the people you’re calling. In addition, video calls on Portal are encrypted, so your calls are always secure.

For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are.

Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, ‘Hey Portal.’ You can delete your Portal’s voice history in your Facebook Activity Log at any time.

Pretty cool, huh!

TC: Just to return to your stable infrastructure point for a second, Mark — did you mean Facebook is focused on security too? Because, well, your company keeps leaking personal data like a sieve holds water

MZ: We think of infrastructure as a more holistic concept. And, uh, as a word that sounds reassuring

TC: Okay, so of course you can’t 100% guarantee Portal against hacking risks, though you’re taking precautions by encrypting calls. But Portal might also ‘accidentally’ record stuff adults and kids say in the home — i.e. if its ‘Hey Portal’ local listening function gets triggered when it shouldn’t. And it will then be 100% up to a responsible adult to find their way through Facebook’s labyrinthine settings and delete those wiretaps, won’t it?

MZ: You can control all your information, yes

TC: The marketing bumpf also doesn’t spell out what Facebook does with ‘Hey Portal’ voice recordings, or the personal insights your company is able to glean from them, but Facebook is in the business of profiling people for ad targeting purposes so we must assume that any and all voice commands and interactions, with the sole exception of the contents of videocalls, will go into feeding that beast.

So the metadata of who you talk to via Portal, what you listen to and look at (minus any Alexa-related interactions that you’ve agreed to hand off to Amazon for its own product targeting purposes), and potentially much more besides is all there for Facebook’s taking — given the kinds of things that an always-on listening device located in a domestic setting could be accidentally privy to.

Then, as more services get added to Portal, more personal behavioral data will be generated and can be processed by Facebook for selling ads.

MZ: Well, I mean, like I told that Senator we do sell ads

TC: And smart home hardware too now, apparently.

One more thing, Mark: In Europe, Facebook didn’t use to have facial recognition technology switched on, did it?

MZ: We had it on pause for a while

TC: But you switched it back on earlier this year right?

MZ: Facebook users in Europe can choose to use it, yes

TC: And who’s in charge of framing that choice?

MZ: Uh, well we are obviously

TC: We’d like you to tap on the Portal screen now, Mark. Tap on the face you can see to make the camera zoom right in on this mask of your own visage. Can you do that for us?

MZ: Uh, sure

[sound of a finger thudding against glass]

MZ: Are you seeing this? It really is pretty creepy!

Or — I mean — it would be if it wasn’t so, like, familiar…

Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018 in Washington, DC. (Photo: JIM WATSON/AFP/Getty Images)

[sound of a child crying]

Priscilla to Mark: Eeeew! Turn that thing off!

TC: Thanks Mark. We’ll leave you guys to it.

Enjoy your Shake Shack. Again.


Portal: Thanks for calling Mark, TechCrunch! Did you enjoy your Time Well Spent?

This is not fine

A UN report compiled by a coalition of international climate experts has warned that “rapid, far-reaching and unprecedented changes in all aspects of society” are required if global warming is to be limited to just 1.5°C.

The report also sets out some of the dire consequences for both humanity and life on Earth if that threshold is exceeded, and points out that, conversely, limiting global warming would give people and ecosystems “more room to adapt and remain below relevant risk thresholds”.

Decisions made by world leaders today are critical in ensuring a safe and sustainable world for everyone, the authors warn.

“One of the key messages that comes out very strongly from this report is that we are already seeing the consequences of 1°C of global warming through more extreme weather, rising sea levels and diminishing Arctic sea ice, among other changes,” said Panmao Zhai, co-chair of one of the report’s scientific working groups.

“The good news is that some of the kinds of actions that would be needed to limit global warming to 1.5°C are already underway around the world, but they would need to accelerate,” added Valerie Masson-Delmotte, co-chair of the same group.

To limit the damage caused by climate change, global net human-caused emissions of carbon dioxide (CO2) would need to fall by about 45% from 2010 levels by 2030, reaching ‘net zero’ around 2050 — which means that any remaining emissions would need to be balanced by removing CO2 from the air.

If world leaders do not succeed in keeping warming to 1.5°C humanity will face a range of far more severe impacts, with a 2°C rise meaning an extra 10cm rise in sea levels by 2100 — which would inundate scores more coastal cities and low-lying areas, increasing the number of people who would be displaced in future.

Climate-related risks to health, livelihoods, food security, water supply, human security, and economic growth are also projected to be more severe at the higher temperature rise.

The report says that limiting global warming to 1.5°C would reduce risks to marine biodiversity, fisheries, and ecosystems, and their functions and services to humans.

Even with a 1.5°C rise coral reefs would still be severely impacted, declining by 70-90% — but virtually all (>99%) reefs would be lost with a 2°C rise.

The likelihood of an Arctic Ocean free of sea ice in summer would be once per century with global warming of 1.5°C, compared with at least once per decade with 2°C, according to the report.

Likewise, on land, impacts on biodiversity and ecosystems, including species loss and extinction, are projected to be lower at 1.5°C of global warming vs 2°C.

Impacts associated with other biodiversity-related risks — such as forest fires, and the spread of invasive species — would also be less severe if climate change can be contained to a smaller rise.

The Intergovernmental Panel on Climate Change (IPCC) compiled the Special Report on Global Warming in response to an invitation from the UN’s Framework Convention on Climate Change when 195 global leaders adopted the 2015 Paris Agreement to tackle climate change — an accord which President Trump turned his back on last year when he withdrew the US from the agreement.

The report will be a key scientific input for the Katowice Climate Change Conference, which takes place in Poland in December, when other heads of state will meet to review the Paris Agreement.

The group of 91 authors and review editors from 40 countries who prepared the report argue that keeping global temperature rise to 1.5°C would also support a more sustainable and equitable society.

“Limiting global warming to 1.5°C compared with 2°C would reduce challenging impacts on ecosystems, human health and well-being, making it easier to achieve the United Nations Sustainable Development Goals,” said Priyadarshi Shukla, co-chair of IPCC Working Group III, in a statement.

“Every extra bit of warming matters, especially since warming of 1.5°C or higher increases the risk associated with long-lasting or irreversible changes, such as the loss of some ecosystems,” added Hans-Otto Pörtner, Co-Chair of IPCC Working Group II.

Any ‘overshoot’ of 1.5°C would mean a greater reliance on techniques that remove CO2 from the air to return global temperature to below 1.5°C by 2100.

But policymakers are warned that the effectiveness of such techniques is unproven at large scale and some may carry significant risks for sustainable development.


ePrivacy: An overview of Europe’s other big privacy rule change

Gather round. The EU has a plan for a big update to privacy laws that could have a major impact on current Internet business models.

Um, I thought Europe just got some new privacy rules?

They did. You’re thinking of the General Data Protection Regulation (GDPR), which updated the European Union’s 1995 Data Protection Directive — most notably by making the penalties for compliance violations much larger.

But there’s another piece of the puzzle — intended to ‘complete’ GDPR but which is still in train.

Or, well, sitting in the sidings being mobbed by lobbyists, as seems to currently be the case.

It’s called the ePrivacy Regulation.

ePrivacy Regulation, eh? So I guess that means there’s already an ePrivacy Directive then…

Indeed. Clever cookie. That’s the 2002 ePrivacy Directive to be precise, which was amended in 2009 (but is still just a directive).

Remind me what’s the difference between an EU Directive and a Regulation again… 

A regulation is a more powerful legislative instrument for EU lawmakers as it’s binding across all Member States and immediately comes into legal force on a set date, without needing to be transposed into national laws. In a word it’s self-executing.

Whereas, with a directive, Member States get a bit more flexibility because it’s up to them how they implement the substance of the thing. They could adapt an existing law or create a new one, for example.

With a regulation the deliberation happens among EU institutions and, once that discussion and negotiation process has concluded, the agreed text becomes law across the bloc — at the set time, and without necessarily requiring further steps from Member States.

So regulations are powerful.

So there’s more legal consistency with a regulation? 

In theory. Greater harmonization of data protection rules is certainly an impetus for updating the EU’s legal framework around privacy.

Although, in the case of GDPR, Member States did in fact need to update their national data protection laws to make certain choices allowed for in the framework, and identify competent national data enforcement agencies. So there’s still some variation.

Strengthening the rules around privacy and making enforcement more effective are other general aims for the ePrivacy Regulation.

Europe has had robust privacy rules for many years but enforcement has been lacking.

Another point of note: Where data protection law is concerned, national agencies need to be properly resourced to be able to enforce rules, or that could undermine the impact of regulation.

It’s up to Member States to do this, though GDPR essentially requires it (and the Commission is watching).

Europe’s data protection supervisor, Giovanni Buttarelli, sums up the current resourcing situation for national data protection agencies, as: “Not bad, not enough. But much better than before.”

But why does Europe need another digital privacy law? Why isn’t GDPR enough? 

There is some debate about that, and not everyone agrees with the current approach. But the general idea is that GDPR deals with general (personal) data.

Whereas the proposed update to ePrivacy rules is intended to supplement GDPR — addressing in detail the confidentiality of electronic communications, and the tracking of Internet users more broadly.

So the (draft) ePrivacy Regulation covers marketing, and a whole raft of tracking technologies (including but not just cookies); and is intended to combat problems like spam, as well as respond to rampant profiling and behavioral advertising by requiring transparency and affirmative consent.

One major impulse behind the reform of the rules is to expand the scope to not just cover telcos but reflect how many communications now travel ‘over the top’ of cellular networks, via Internet services.

This means ePrivacy could apply to all sorts of tech firms in future — be it Skype, Facebook or Google, and quite possibly plenty more — given how many apps and services include some ability for users to communicate with each other.

But scope remains one of the contested areas, with critics arguing the regulation could have a disproportionate impact, if — for example — every app with a chat function is going to be caught by the rules.

On the communications front, the updated rules would not just cover message content but metadata too (to respond to how that gets tracked). Aka pieces of data that might not be personal data per se yet certainly pertain to privacy once they are wrapped up in and/or associated with people’s communications.

Although metadata tracking is also used for analytics, for wider business purposes than just profiling users, so you can see the challenge of trying to fashion rules to fit around all this granular background activity.

Simplifying problematic existing EU cookie consent rules — which have also been widely mocked for generating pretty pointless web page clutter — has also been a core part of the Commission’s intention for the update.

EU lawmakers also want the regulation to cover machine to machine comms — to regulate privacy around the still emergent IoT (Internet of Things), to keep pace with the rise of smart home technologies.

Those are some of the high level aims but there have been multiple proposed texts and revisions at this point so goalposts have been shifting around.

So whereabouts in the process are we?

The Commission’s original reform proposal came out in January 2017. More than a year and a half later EU institutions are still stuck trying to reach a consensus. It’s not even 100% certain whether ePrivacy will pass or founder in the attempt at this point.

The underlying problem is really the scope of exploitation of consumers’ online activity going on in the areas ePrivacy seeks to regulate — which is now firmly baked into dominant digital business models — so trying to rule over all that after the fact of mainstream operational execution is a recipe for co-ordinated industry objection and frenzied lobbying. Of which there has been an awful lot.

At the same time, consumer protection groups in Europe are more clear than ever that ePrivacy should be a vehicle for further strengthening the data protection framework put in place by GDPR — pointing out, for example, that data misuse scandals like the Facebook-Cambridge Analytica debacle show that data-driven business models need closer checks to protect consumers and ensure people’s rights are respected.

Safe to say, the two sides couldn’t be further apart.

Like GDPR, the proposed ePrivacy Regulation would also apply to companies offering services in Europe not only those based in Europe. And it also includes major penalties for violations (of up to 2% or 4% of a company’s global annual turnover) — similarly intended to bolster enforcement and support more consistently applied EU privacy rules.

But given the complexity of the proposals, and disagreements over scope and approach, having big fines baked in further complicates the negotiations — because lobbyists can argue that substantial financial penalties should not be attached to ‘ambiguous’ laws and disputed regulatory mechanisms.

The high cost of getting the update wrong is not so much concentrating minds as causing alarms to be yanked and brakes applied. With the risk of no progress at all looking like an increasing possibility.

One thing is clear: The existing ePrivacy rules are outdated and it’s not helpful to have old rules undermining a state-of-the-art data protection framework.

Telcos have also rightly complained it’s not fair for tech giants to be able to operate messaging empires without the same compliance burdens they have.

Just don’t assume telcos love the proposed update either. It’s complicated.

Sounds very messy. 

Indeed.

EU lawmakers could probably have dealt with updating both privacy-related directives together, or even in one ‘super regulation’, but they decided to separate the work to try to simplify the process. In retrospect that looks like a mistake.

On the plus side, it means GDPR is now locked in place — with Buttarelli saying the new framework is intended to stand for as long as its predecessor.

Less good: One shiny world-class data protection framework is having to work alongside a set of rules long past their sell-by date.

So, so much for consistency.

Buttarelli tells us he thinks it was a mistake not to do both updates together, describing the blocks being thrown up to try to derail ePrivacy reform as “unacceptable”.

“I would like to say very clearly that the EU made a mistake in not updating earlier the rules for confidentiality for electronic communications at the same time as general data protection,” he told us during an interview this week, about GDPR enforcement, data ethics and the future of EU privacy regulation.

He argues the patchwork of new and old rules “doesn’t work for data controllers” either, as they’re the ones saddled with dealing with the legal inconsistency.

As Europe’s data protection supervisor, Buttarelli is of course trying to apply pressure on key parties — to “get to the table and start immediately trilogue negotiations to identify a sustainable outcome”.

But the nature of lawmaking across a bloc of 28 Member States is often slow and painful. Certainly no one entity can force progress; it must be achieved via negotiated consensus and compromise across the various institutions and entities.

And when interest groups are so far apart, well, it’s sweating toil to put it mildly.

Entities that don’t want to play ball with a particular legal reform issue can sometimes also throw a delaying spanner in the works by impeding negotiations. Which is what looks to be going on with ePrivacy right now.

The EU parliament confirmed its negotiating mandate on the reform almost a year ago now. But MEPs were then stuck waiting for Member States to take a position and get around the discussion table.

Except Member States seemingly weren’t so keen. Some were probably a bit preoccupied with Brexit.

Currently implicated as an ePrivacy blocker: Austria, which holds the six-month rotating presidency of the EU Council — meaning it gets to set priorities, and can thus kick issues into the long grass (as its right-wing government appears to be doing with ePrivacy). And so the wait goes on.

It now looks like a bit of a divide and conquer situation for anti-privacy lobbyists, who — having failed to derail GDPR — are throwing all their energies at blocking and even derailing/diluting the ePrivacy reform.

Some Member States appear to be trying to attack ePrivacy to weaken the overarching framework of GDPR too. So yes, it’s got very messy indeed.

There’s an added complication around timing because the EU parliament is up for re-election next Spring, and a few months after that the executive Commission will itself turn over, as the current president does not intend to seek reappointment. So it will be all change for the EU, politically speaking, in 2019.

A reconfigured political landscape could then change the entire conversation around ePrivacy. So the current delay could prove fatal unless agreement can be reached in early 2019.

Some EU lawmakers had hoped the reform could be done and dusted in time to come into force at the same time as GDPR, this May.

That was certainly a major miscalculation.

But what’s all the disagreement about?

That depends on who you ask. There are many contested issues, depending on the interests of the group you’re talking to.

Media and publishing industry associations are terrified about what they say ePrivacy could do to their ad-supported business models, given their reliance on cookies and tracking technologies to try to monetize free content via targeted ads — and so claim it could destroy journalism as we know it if consumers need to opt-in to being tracked.

The ad industry is also of course screaming about ePrivacy as if its hair’s on fire. Big tech included, though it has generally preferred to lobby via proxies on this issue.

Anything that could impede adtech’s ability to track and thus behaviourally target ads at web users is clearly enemy number one, given the current modus operandi. So ePrivacy is a major lobbying target for the likes of the IAB who don’t want it to upend their existing business models.

Even telcos aren’t happy, despite the potential of the regulation to even the playing field somewhat with tech giants — suggesting they will end up with double the regulatory burden, as well as moaning it will make it harder for them to make the necessary investments to roll out 5G networks.

Plus, as I say, there also seems to be some efforts to try to use ePrivacy as a vector to attack and weaken GDPR itself.

Buttarelli had comments to make on this front too, describing some data controllers as being in post-GDPR “revenge mode”.

“They want to move in sort of a vendetta, vendetta — and get back what they lose with the GDPR. But while I respect honest lobbying about which pieces of ePrivacy are not necessary I think ePrivacy will help first small businesses, and not necessarily the big tech startups. And where done properly ePrivacy may give more power to individuals. It may make harder for big tech to snoop on private conversations without meaningful consent,” he told us, appealing to Europe’s publishing industry to get behind the reform process, rather than applying pressure at the Member State level to try to derail it — given the media hardly feels well done by big tech.

He even makes this appeal to local adtech players — which aren’t exactly enamoured with the dominance of big tech either.

“I see space for market incentives,” he added. “For advertisers and publishers to, let’s say, re-establish direct relations with their readers and customers. And not have to accept the terms dictated by the major platform intermediaries. So I don’t see any other argument to discourage that we have a deal before the elections in May next year of the European legislators.”

There’s no doubt this is a challenging sell though, given how embedded all these players are with the big platforms. So it remains to be seen whether ePrivacy can be talked back on track.

Major progress is certainly very unlikely before 2019.

I’m still not sure why it’s so important though.  

The privacy of personal communications is a fundamental right in Europe. So there’s a need for the legal framework to defend against technological erosion of citizens’ rights.

Add to that, a big part of the problem with the modern adtech industry — aside from the core lack of genuine consent — is its opacity. Who’s doing what; for what specific purposes; and with what exact outcomes.

Existing European privacy rules like GDPR mean there’s more transparency than there’s ever been about what’s going on — if you know and/or can be bothered to dig down into privacy policies and purposes.

If you do, you might, for example, discover a very long list of companies that your data is being shared with (and even be able to switch off that sharing) — entities with weird sounding names like Outbrain and OpenX.

A privacy policy might even state a per company purpose like ‘Advertising exchange’ and ‘Advertising’. Or ‘Customer interaction’, whatever that means.

Thing is, it’s often still very difficult for a consumer to understand what a lot of these companies are really doing with their data.

Thanks to current EU laws, we now have the greatest level of transparency there has ever been about the mechanisms underpinning Internet business models. But yet so much remains murky.

The average Internet user is very likely none the wiser. Can profiling them without proper consent really be fair?

GDPR sets out an expectation of privacy by design and default. So, following that principle, you could argue that cookie consent, for example, should be default opt-out — and that any website must be required to gain affirmative opt in from a visitor for any tracking cookies. The adtech industry would certainly disagree though.

The original ePrivacy proposal even had a bit of a mixed approach to consent which was accused of being too overbearing for some technologies and not strong enough for others.

It’s not just creepy tech giants implicated here either. Publishers and the media (TechCrunch included) are very much caught up in the unpleasant tracking mess, complicit in darting users with cookies and trackers to try to increase what remain fantastically low conversion rates for digital ads.

Most of the time, most Internet users ignore most ads. So — with horribly wonky logic — the behavioral advertising industry, which has been able to grow like a weed because EU privacy rights have not previously been actively enforced, has made it its mission to suck up (and indeed buy up) more and more user data to try to move the ad conversion needle a fraction.

The media is especially desperate because the web has also decimated traditional business models. And European lawmakers can be very sensitive to publishing industry concerns (see, for example, their backing of controversial copyright reforms which publishers have been pushing for).

Meanwhile Google and Facebook are gobbling up the majority of online ad spending, leaving publishers fighting for crumbs and stuck having to do business with the platforms that have so sorely disrupted them.

Platforms they can’t at all control but which are now so popular and powerful they can (and do) algorithmically control the visibility of publishers’ content.

It’s not a happy combination. Well, unless you’re Facebook or Google.

Meanwhile, for web users just wanting to go about their business and do all the stuff people can (and sometimes need to do) online, things have got very bad indeed.

Unless you ignore the fact you’re being creeped on almost all the time, by snoopy entities that double as intelligence traders, selling info on what you like or don’t, so that an unseen adtech collective can create highly detailed profiles of you to try and manipulate your online transactions and purchasing decisions. With what can sometimes be discriminatory impacts.

The rise in popularity of ad blockers illustrates quite how little consumers enjoy being ad-stalked around the Internet.

More recently tracker blockers have been springing up to try to beat back the adtech vampire octopus which also lards the average webpage with myriad data-sucking tentacles, impeding page load times and gobbling bandwidth in the process, in addition to abusing people’s privacy.

There’s also out-and-out malicious stuff to be found here, as the increasing complexity, opacity and sprawl of the adtech industry’s surveillance apparatus (combined with its general lack of interest in and/or focus on security) offers rich and varied vectors of cyber attack.

And so ads and gnarly page elements sometimes come bundled or injected with actual malware, as hackers exploit all this stuff for their own ends and launch man-in-the-middle attacks to grab user data as it’s being routinely siphoned off for tracking purposes.

It’s truly a layer cake of suck.

Ouch. 

The ePrivacy Regulation could, in theory, help to change this by supporting alternative business models that don’t use people-tracking as their fuel, putting the emphasis back where it should be: respect for privacy.

The (seemingly) radical idea underlying all these updates to European privacy legislation is that if you increase consumers’ trust in online services by respecting people’s privacy, you can actually grease the wheels of ecommerce and innovation: web users will be more comfortable doing stuff online when they don’t feel like they’re under creepy surveillance.

More than that — you can lay down a solid foundation of trust for the next generation of disruptive technologies to build on.

Technologies like IoT and driverless cars.

Because, well, if consumers hate to feel like websites are spying on them, imagine how disgusted they’ll be to realize their fridge, toaster, kettle and TV are all complicit in snitching. Ditto their connected car.

‘I see you’re driving past McDonald’s. Great news! They have a special on those chocolate donuts you scoffed a whole box of last week…’

Ugh. 

Yeah…

So what are ePrivacy’s chances at this point? 

It’s hard to say but things aren’t looking great right now.

Buttarelli describes himself as “relatively optimistic” about getting an agreement by May, i.e. before the EU parliament elections, but that may well be wishful thinking.

Even if he’s right there would likely still need to be an implementation period before it comes into force — so new rules aren’t likely to be up and running before 2020.

Yet he also describes the ePrivacy Regulation as “an essential missing piece of the jigsaw”.

Getting that piece in place is not going to be easy though.

Siilo injects $5.1M to try to transplant WhatsApp use in hospitals

Consumer messaging apps like WhatsApp are not only insanely popular for chatting with friends but have pushed deep into the workplace too, thanks to the speed and convenience they offer. They have even crept into hospitals, as time-strapped doctors reach for a quick and easy way to collaborate over patient cases on the ward.

Yet WhatsApp is not specifically designed with the safe sharing of highly sensitive medical information in mind. This is where Dutch startup Siilo has been carving a niche for itself for the past 2.5 years — via a free-at-the-point-of-use encrypted messaging app that’s intended for medical professionals to securely collaborate on patient care, such as via in-app discussion groups and being able to securely store and share patient notes.

A business goal that could be buoyed by tighter EU regulations around handling personal data, say if hospital managers decide they need to address compliance risks around staff use of consumer messaging apps.

The app’s WhatsApp-style messaging interface will be instantly familiar to any smartphone user. But Siilo bakes in additional features for its target healthcare professional users, such as keeping photos, videos and files sent via the app siloed in an encrypted vault that’s entirely separate from any personal media also stored on the device.

Messages sent via Siilo are also automatically deleted after 30 days unless the user specifies a particular message should be retained. And the app does not make automated back-ups of users’ conversations.
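Siilo hasn’t published how its auto-deletion works; a minimal sketch of the policy as described, a 30-day window with a per-message keep flag, might look like this (all names here are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

@dataclass
class Message:
    body: str
    sent_at: datetime
    keep: bool = False  # user-flagged messages are exempt from auto-deletion

def purge(messages, now=None):
    """Drop messages older than the retention window unless flagged to keep."""
    now = now or datetime.now(timezone.utc)
    return [m for m in messages if m.keep or now - m.sent_at <= RETENTION]
```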

Other doctor-friendly features include the ability to blur images (for patient privacy purposes); augment images with arrows for emphasis; and export threaded conversations to electronic health records.

There’s also mandatory security for accessing the app — with a requirement for either a PIN code, fingerprint or facial recognition biometric to be used. And remote wipe functionality to nix any locally stored data is baked into Siilo in the event of a device being lost or stolen.

Like WhatsApp, Siilo also uses end-to-end encryption — though in its case it says this is based on the open-source NaCl library.

It also specifies that user messaging data is stored encrypted on European ISO-27001 certified servers — and deleted “as soon as we can”.

It also says it’s “possible” for its encryption code to be open to review on request.

Another addition is a user vetting layer to manually verify that the medical professionals using its app are who they say they are.

Siilo says every user gets vetted, though not prior to being able to use the messaging functions. But users that have passed verification unlock greater functionality — such as being able to search among other (verified) users to find peers or specialists to expand their professional network. Siilo says verification status is displayed on profiles.

“At Siilo, we coin this phenomenon ‘network medicine’, which is in contrast to the current old-fashioned, siloed medicine,” says CEO and co-founder Joost Bruggeman in a statement. “The goal is to improve patient care overall, and patients have a network of doctors providing input into their treatment.”

While Bruggeman brings the all-important medical background to the startup, another co-founder, Onno Bakker, has been in the mobile messaging game for a long time — having been one of the entrepreneurs behind the veteran web and mobile messaging platform, eBuddy.

A third co-founder, CFO Arvind Rao, tells us Siilo transplanted eBuddy’s messaging dev team — couching this ported in-house expertise as an advantage over some of the smaller rivals also chasing the healthcare messaging opportunity.

It is also of course having to compete technically with the very well-resourced and smoothly operating WhatsApp behemoth.

“Our main competitor is always WhatsApp,” Rao tells TechCrunch. “Obviously there are also other players trying to move in this space. TigerText is the largest in the US. In the UK we come across local players like Hospify and Forward.

“A major difference we have very experienced in-house dev team… The experience of this team has helped to build a messenger that really can compete in usability with WhatsApp that is reflected in our rapid adoption and usage numbers.”

“Having worked in the trenches as a surgery resident, I’ve experienced the challenges that healthcare professionals face firsthand,” adds Bruggeman. “With Siilo, we’re connecting all healthcare professionals to make them more efficient, enable them to share patient information securely and continue learning and share their knowledge. The directory of vetted healthcare professionals helps ensure they’re successful team players within a wider healthcare network that takes care of the same patient.”

Siilo launched its app in May 2016 and has since grown to ~100,000 users, with more than 7.5 million messages currently being processed monthly and 6,000+ clinical chat groups active monthly.

“We haven’t come across any other secure messenger for healthcare in Europe with these figures in the App Store/Google Play rankings and therefore believe we are the largest in Europe,” adds Rao. “We have multiple large institutions across Western-Europe where doctors are using Siilo.”

On the security front, as well as flagging the ISO 27001 certification the company has gained, he notes that it obtained “the highest NHS IG Toolkit level 3” — aka the now replaced system for organizations to self-assess their compliance with the UK’s National Health Service’s information governance processes, claiming “we haven’t seen [that] with any other messaging company”.

Siilo’s toolkit assessment was finalized at the end of February 2018, and is valid for a year — so will be up for re-assessment under the replacement system (which was introduced this April) in Q1 2019. (Rao confirms they will be doing this “new (re-)assessment” at the end of the year.)

As well as being in active use in European hospitals such as St. George’s Hospital, London, and Charité Berlin, Germany, Siilo says its app has had some organic adoption by medical pros further afield — including among smaller home healthcare teams in California, and “entire transplantation teams” from Astana, Kazakhstan.

It also cites British Medical Journal research that found that of the 98.9% of U.K. hospital clinicians who now have smartphones, around a third are using consumer messaging apps in the clinical workplace. Persuading those healthcare workers to ditch WhatsApp at work is Siilo’s mission and challenge.

The team has just announced a €4.5 million (~$5.1M) seed to help it get onto the radar of more doctors. The round is led by EQT Ventures, with participation from existing investors. It says it will be using the funding to scale up its user base across Europe, with a particular focus on the UK and Germany.

Commenting on the funding in a statement, EQT Ventures’ Ashley Lundström, a venture lead and investment advisor at the VC firm, said: “The team was impressed with Siilo’s vision of creating a secure global network of healthcare professionals and the organic traction it has already achieved thanks to the team’s focus on building a product that’s easy to use. The healthcare industry has long been stuck using jurassic technologies and Siilo’s real-time messaging app can significantly improve efficiency and patient care without putting patients’ data at risk.”

While the messaging app itself is free for healthcare professionals to use, Siilo also offers a subscription service to monetize the freemium product.

This service, called Siilo Connect, offers organisations and professional associations what it bills as “extensive management, administration, networking and software integration tools”, or just data regulation compliance services if they want the basic flavor of the paid tier.

Facebook is weaponizing security to erode privacy

At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.

“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.

Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.

But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.

You could say Facebook has ‘hostility to privacy‘ as a core value.

Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.

But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.

On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.

Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.

All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.

Facebook shielded its founder from one much-sought grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)

The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”

As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.

But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.

He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.

Security, weaponized 

What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.

Simply put, Facebook is weaponizing security to shield its erosion of privacy.

Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.

Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.

In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.

Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
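The arithmetic is straightforward. Taking Facebook’s reported 2017 revenue of roughly $40.65BN as the turnover figure (an assumption for illustration only), the GDPR’s top fine tier works out to:

```python
def max_gdpr_fine(global_annual_turnover: float) -> float:
    """GDPR caps the higher tier of fines at 4% of global annual turnover."""
    return 0.04 * global_annual_turnover

fb_2017_revenue = 40_650_000_000  # assumed figure, ~$40.65BN
print(f"${max_gdpr_fine(fb_2017_revenue) / 1e9:.2f}BN")  # → $1.63BN
```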

Though fines aren’t the real point; if Facebook is forced to change its processes, meaning how it harvests and mines people’s data, that could knock a major, major hole right through its profit center.

Hence the existential nature of the threat.

The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.

Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.

One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.

It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.

US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.

Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.

So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws none of the tech giants said they would.)

The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.

Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — forcing an expensive and time-consuming legal fight.

While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.

In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.

Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.

So the game Facebook is playing here is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.

Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:

It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.

At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.

“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.

He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.

“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”

He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).

These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.

But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.

No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.


Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too, yet was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…

So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.

What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.

Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.

Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.

“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.

So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.

Safe to say, that’s not going to play at all well in Europe.

Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!

Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)

In other contexts this would be called spying — or, well, ‘mass surveillance’.

It’s also how Facebook makes money.

And yet when called in front of lawmakers and asked about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.

Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018, in Washington, DC (Photo by Chip Somodevilla/Getty Images)

It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.

Except Facebook is a commercial company, not the NSA.

So it’s only fighting to keep being able to carpet-bomb the planet with ads.

Profiting from shadow profiles

Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found the company uses phone numbers provided to it by users for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, to also target them with ads.

In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.

Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.
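It’s worth noting that two-factor authentication doesn’t intrinsically require a phone number at all. App-based codes follow RFC 6238 (TOTP), which derives a short-lived code from a shared secret and the clock; a minimal Python sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", clock at 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # → 94287082
```

With this scheme no phone number changes hands, so there’s nothing to repurpose for ad targeting.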

It’s just a double whammy of awful, awful behavior.

And of course, there’s more.

A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.

In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.

Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.

Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)

So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)

In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.

One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…

That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.

There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.

These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
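Facebook hasn’t detailed its matching mechanics; one common industry technique is to normalize numbers and join accounts on a hash of the result. A hypothetical sketch (the normalization here is deliberately crude; real systems use proper E.164 parsing):

```python
import hashlib
import re

def normalize(phone: str) -> str:
    """Strip everything but digits (crude stand-in for real E.164 parsing)."""
    return re.sub(r"\D", "", phone)

def match_key(phone: str) -> str:
    """Hash the normalized number so two datasets can be joined on it."""
    return hashlib.sha256(normalize(phone).encode()).hexdigest()

# The same person, formatted differently by two services, yields one join key:
print(match_key("+1 (415) 555-0100") == match_key("1-415-555-0100"))  # → True
```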

Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.

WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.

On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.

So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.

This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook Group profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.

No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.

Yet this is a very dangerous strategy.

Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.

What’s the best security practice of all? That’s super simple: Not holding data in the first place.