Seven reasons not to trust Facebook to play cupid

This week Facebook has launched a major new product play, slotting an algorithmic dating service inside its walled garden as if that’s perfectly normal behavior for an ageing social network.

Insert your [dad dancing GIF of choice] right here.

Facebook getting into dating looks very much like a mid-life crisis — as a veteran social network desperately seeks a new strategy to stay relevant in an age when app users have largely moved on from social network ‘lifecasting’ to more bounded forms of sharing, via private messaging and/or friend groups inside dedicated messaging and sharing apps.

The erstwhile Facebook status update has long been usurped by the Snapchat (and now Instagram) Story as the social currency of choice for younger app users. Of course Facebook owns the latter product too, and has mercilessly cloned Stories. But it hardly wants its flagship service to just fade away into the background like the old fart it actually is in Internet age terms.

Not if it can reinvigorate the product with a new purpose — and so we arrive at online dating.

Facebook — or should that be ‘Datebook’ now?! — is starting its dating experiment in Colombia, as its beta market. But the company clearly has ambitious designs on becoming a major global force in the increasingly popular online dating arena — to challenge dedicated longtime players like eHarmony and OkCupid, as well as the newer breed of more specialized dating startups, such as female-led app, Bumble.

Zuckerberg is not trying to compete with online dating behemoth Tinder, though. Facebook dismisses that as a mere ‘hook up’ app, a sub-category it claims it wants nothing to do with.

Rather it’s hoping to build something more along the lines of ‘get together with friends of your friends who’re also into soap carving/competitive dog grooming/extreme ironing’ than, say, the raw spank in the face shock of ‘Bang with Friends’. (The latter being the experimental startup which tried, some six years ago, to combine Facebook and sex — before eventually exiting to a Singapore-based dating app player, Paktor, never to be heard of again. Or, well, not until Facebook decided to get into the dating game and reminded us all how we lol’d about it.)

Mark Zuckerberg’s company doesn’t want to get into anything smutty, though. Oh no, no, NO! No sex please, we’re Facebook!

Facebook Dating has been carefully positioned to avoid sounding like a sex app. It’s being flogged as a tasteful take on the online dating game, with — for instance — the app explicitly architected not to push existing friends together via suggestive matching (though you’ll just have to hope you don’t end up being algorithmically paired with any exes, which judging by Facebook’s penchant for showing users ‘photo memories’ of past stuff with exes may not pan out so well… ). And no ability to swap photo messages with mutual matches in case, well, something pornographic were to pass through.

Facebook is famously no fan of nudes. Unsurprisingly, then, nor is its buttoned up dating app. Only ‘good, old-fashioned wholesome’ text-based chat-up lines (related to ‘good clean pieces of Facebook content’) here please.

If you feel moved to text an up-front marriage proposal — feeling 100% confident in Facebook’s data scientists’ prowess in reading the social media tea leaves and plucking your future life partner out of the mix — its algorithms will probably smile on that though.

The company’s line is that dating will help fulfil its new mission of encouraging ‘time well spent’ — by helping people forge more meaningful (new) relationships thanks to the power of its network (and the data it sucks out of it).

This mission is certainly an upgrade on Facebook’s earlier and baser interest in just trying to connect every human on planet Earth to every other human on planet Earth in some kind of mass data-swinging orgy — regardless of the ethical and/or moral consequences (as Boz memorably penned it), as if it was trying to channel the horror-loving spirit of Pasolini’s Salò. Or, well, a human centipede.

But that was then. These days, in its mid-teens, Facebook wants to be seen as grown up and a bit worthy. So its take on dating looks a lot more ‘marriage material’ than ‘casual encounters’. Though, well, products don’t always pan out how their makers intend. So it might need to screw its courage to the sticking place and hope things don’t go south.

From the user perspective, there’s a whole other side here too though. Because given how much baggage inevitably comes with Facebook nowadays, the really burning question is whether any sensible person should be letting Mark Zuckerberg fire cupid’s arrows on their behalf?

He famously couldn’t tell malicious Kremlin propaganda from business as usual social networking like latte photos and baby pics — so what makes you think he’s going to be attuned to the subtle nuances of human chemistry?!

Here are just a few reasons why we think you should stay as far away from Facebook’s dalliance with dating as you possibly can…

  1. It’s yet another cynical data grab
    Facebook’s ad-targeting business model relies on continuous people tracking to function — which means it needs your data to exist. Simply put: Your privacy is Facebook’s lifeblood. Dating is therefore just a convenient veneer to slap atop another major data grab as Facebook tries to find less icky ways to worm its way back and/or deeper into people’s lives. Connecting singles to nurture ‘meaningful relationships’ is the marketing gloss slicked over its latest invitation for people to forget how much private information they’re handing it. Worse still, dating means Facebook is asking people to share even more intimate and personal information than they might otherwise willingly divulge — again with a company whose business model relies upon tracking everything everyone does, on or offline, within its walled garden or outside it on the wider web, and whether they’re a Facebook user or not.
    This also comes at a time when users of Facebook’s eponymous social network have been showing signs of Facebook fatigue, and even changing how they use the service after a string of major privacy scandals. So Facebook doing dating also looks intended to function as a fresh distraction — to try to draw attention away from its detractors and prevent any more scales falling away from users’ eyes. The company wants to paper over growing scepticism about ad-targeting business models with algorithmic heart-shaped promises.
    Yet the real underlying passion here is still Facebook’s burning desire to keep minting money off of your private bits and bytes.
  2. Facebook’s history of privacy hostility shows it simply can’t be trusted
    Facebook also has a very long history of being outright hostile to privacy — including deliberately switching settings to make previously private information public by default (regulatory intervention has been required to push back against that ratchet). So its claim, with Dating, to be siloing data in a totally separate bucket — and also that information shared for this service won’t be used to further flesh out user profiles or to target people with ads elsewhere across its empire — should be treated with extreme scepticism.
    Facebook also said WhatsApp users’ data would not be mingled and conjoined with Facebook user data — and, er, look what ended up happening there…!!

    And then there’s Facebook’s record of letting app developers liberally rip user data out of its platform — including (for years and years) ‘friend data’. Which almost sounded cosy. But Facebook’s friends data API meant that an individual Facebook user could have their data sucked out without even agreeing to a particular app’s ToS themselves. Which is part of the reason why users’ personal information has ended up all over the place — and in all sorts of unusual places. (Facebook not enforcing its own policies, and implementing features that could be systematically abused to suck out user data, are among some of the many other reasons.)
    The long and short of Facebook’s history with privacy is that information given to it for one purpose has ended up being used for all sorts of other things — things we likely don’t even know the half of. Even Facebook itself doesn’t know, which is why it’s engaged in a major historical app audit right now. Yet this very same company now wants you to tell it intimate details about your romantic and sexual preferences? Uhhhh, hold that thought, truly.

  3. Facebook already owns the majority of online attention — why pay the company any more mind? Especially as dating singles already have amazingly diverse app choice…
    In the West there’s pretty much no escape from Facebook Inc. Not if you want to be able to use the social sharing tools your friends are using. Network effects are hugely powerful for that reason, and Facebook owns not just one popular and dominant social network but a whole clutch of them — given it also bought Instagram and WhatsApp (plus some others it bought and just closed, shutting down those alternative options). But online dating, as it currently is, offers a welcome respite from Facebook.
    It’s arguably also no accident that the Facebook-less zone is so very richly served with startups and services catering to all sorts of types and tastes. There are dating apps for black singles; matchmaking services for Muslims; several for Jewish people; plenty of Christian dating apps; at least one dating service to match ex-pat Asians; another for Chinese-Americans; queer dating apps for women; gay dating apps for men (and of course gay hook up apps too), to name just a few; there are dating apps that offer games to generate matches; apps that rely on serendipity and location to rub strangers together via missed connections; apps that let you try live video chats with potential matches; and of course no shortage of algorithmic matching dating apps. No singles are short of dating apps to try, that’s for sure.
    So why on earth should humanity cede this very rich, fertile and creative ‘stranger interaction’ space, which caters to singles of all stripes and fancies, to a social network behemoth — just so Facebook can expand its existing monopoly on people’s attention?
    Why shrink the luxury of choice to give Facebook’s business extra uplift? If Facebook Dating became popular it would inexorably pull attention away from alternatives — perhaps driving consolidation among a myriad of smaller dating players, forcing some to band together to try to achieve greater scale and survive the arrival of the 800lb Facebook gorilla. Some services might feel they have to become a bit less specialized, pushed by market forces to go after a more generic (and thus larger) pool of singles. Others might find they just can’t get enough niche users anymore to self-sustain. The loss of the rich choice in dating apps singles currently enjoy would be a crying shame indeed. Which is as good a reason as any to snub Facebook’s overtures here.
  4. Algorithmic dating is both empty promise and cynical attempt to humanize Facebook surveillance
    Facebook typically counters the charge that, because it tracks people to target them with ads, it’s in the surveillance business by claiming people tracking benefits humanity because it can serve you “relevant ads”. Of course that’s a paper thin argument, since all display advertising is something no one has chosen to see and therefore is necessarily a distraction from whatever a person was actually engaged with. It’s also an argument that’s come under increasing strain in recent times, given all the major scandals attached to Facebook’s ad platform, whether that’s to do with socially divisive Facebook ads, or malicious political propaganda spread via Facebook, or targeted Facebook ads that discriminate against protected groups, or Facebook ads that are actually just spreading scams. Safe to say, the list of problems attached to its ad targeting enterprise is long and keeps growing.
    But Facebook’s follow on claim now, with Dating and the data it intends to hold on people for this matchmaking purpose, is it has the algorithmic expertise to turn a creepy habit of tracking everything everyone does into a formula for locating love.
    So now it’s not just got “relevant” ads to sell you; it’s claiming Facebook surveillance is the special sauce to find your Significant Other!

    Frankly, this is beyond insidious. (It is also literally a Black Mirror episode — and that’s supposed to be dysfunctional sci-fi.) Facebook is moving into dating because it needs a new way to package and sell its unpleasant practice of people surveillance. It’s hoping to move beyond its attempt at normalizing its business line (i.e. that surveillance is necessary to show ads that people might be marginally more likely to click on) — which has become increasingly problematic as its ad platform has been shown to be causing all sorts of knock-on societal problems — by implying that by letting Facebook creep on you 24/7 it could secure your future happiness because its algorithms are working to track down your perfect other half — among all those 1s and 0s it’s continuously manhandling.
    Of course this is total bunkum. There’s no algorithmic formula to determine what makes one person click with another (or not). If there was humans would have figured it out long, long ago — and monetized it mercilessly. (And run into all sorts of horrible ethical problems along the way.)
    Thing is, people aren’t math. Humans cannot be made to neatly sum to the total of their collective parts and interests. Which is why life is a lot more interesting than the stuff you see on Facebook. And also why there’s a near infinite number of dating apps out there, catering to all sorts of people and predilections.
    Sadly Facebook can’t see that. Or rather it can’t admit it. And so we get nonsense notions of ‘expert’ algorithmic matchmaking and ‘data science’ as the underpinning justification for yet another dating app launch. Sorry but that’s all just marketing.
    The idea that Facebook’s data scientists are going to turn out to be bullseye hitting cupids is as preposterous as it is ridiculous. Like any matchmaking service there will be combinations thrown up that work and plenty more that do not. But if the price of a random result is ceaseless surveillance the service has a disproportionate cost attached to it — making it both an unfair and an unattractive exchange for the user. And once again people are being encouraged to give up far more than they’re getting in return.
    If you believe that finding ‘the one’ will be easier if you focus on people with similar interests to you or who are in the same friend group there’s no shortage of existing ‘life avenues’ you can pursue without having to resort to Facebook Dating. (Try joining a club. Or going to your friends’ parties. Or indeed taking your pick from the scores of existing dating apps that already offer interest-based matching.)
    Equally you could just take a hike up a mountain and meet your future wife at the top (as one couple I know did). Safe to say, there’s no formula to love. And thankfully so. Don’t believe anyone trying to sell you a dating service with the claim their nerdtastic data scientists will hook you up good and proper.
    Facebook’s chance of working any ‘love magic’ will be as good/poor as the next app-based matchmaking service. Which is to say it will be random. There’s certainly no formula to be distilled beyond connecting ‘available to date’ singles — which dating apps and websites have been doing very well for years and years and years. No Facebook dates necessary.
    The company has little more to offer the world of online dating than, say, OkCupid, which has scale and already combines the location and stated interests of its users in an attempt to throw up possible clicks. The only extra bit is Facebook’s quasi-bundling of Events into dating, as a potential avenue to try and date in a marginally more informal setting than agreeing to go on an actual date. Though, really, it just sounds like it might be more awkward to organize and pull off.
    Facebook’s generic approach to dating is also going to offer much less for certain singles who benefit from a more specialized and tailored service (such as a female-focused player like Bumble which has created a service to cater to women’s needs; or, indeed, any of the aforementioned community focused offerings cited above which help people meet other likeminded singles).
    Facebook appears to believe that size matters in dating. And seems to want to be a generic giant in a market that’s already richly catering to all sorts of different communities. For many singles that catch-all approach is going to earn it a very hard left swipe.
  5. Dating takes resource and focus away from problems Facebook should actually be fixing
    Facebook’s founder made ‘fixing Facebook’ his personal priority this year. Which underlines quite how many issues the company has piled on its plate. We’re not talking little bug fixes. Facebook has a huge bunch of existentially awful hellholes burning through its platform and punching various human rights in the process. This is not at all trivial. Some really terrible stuff has been going on with its platforms acting as the conduit.
    Earlier this year, for instance, the UN blasted Facebook, saying its platform had become a “beast” in Myanmar — weaponized and used to accelerate ethnic violence against the Rohingya Muslim minority.
    Facebook has admitted it did not have enough local resource to stop its software being used to amplify ethnic hate and violence in the market. Massacres of Rohingya refugees have been described by human rights organizations as a genocide.
    And it’s not an isolated instance. In the Philippines the country has recently been plunged into a major human rights crisis — and the government there, which used Facebook to help get elected, has also been using Facebook to savage its critics at the same time as carrying out thousands of urban killings in a bloody so-called ‘war on drugs’.
    In India, Facebook’s WhatsApp messaging app has been identified as a contributing factor in multiple instances of mob violence and killings — as people have been whipped up by lies spread like lightning via the app.
    Set against such awful problems — where Facebook’s products are at very least not helping — we now see the company ploughing resource into expanding into a new business area, and expending engineering resource to build a whole new interface and messaging system (the latter to ensure Facebook Dating users can only swap texts, and can’t send photos or videos because that might be a dick pic risk).
    So it’s a genuine crying shame that Facebook did not pay so much close attention to goings on in Myanmar — where local organizations have long been calling for intelligent limits to be built in to its products to help stop abusive misuse.
    Yet Facebook only added the option to report conversations in its Messenger app this May.
    So the sight of the company expending major effort to launch a dating product at the same time as it stands accused of failing to do enough to prevent its products from being conduits for human rights abuses in multiple markets is ethically uncomfortable, to say the least.
    Prospective users of Facebook Dating might therefore feel a bit queasy to think that their passing fancies have been prioritized by Zuckerberg & co over and above adding stronger guardrails to the various platforms it operates to try to safeguard humans from actual death in other corners of the globe.
  6. By getting involved with dating, Facebook is mixing separate social streams
    Talking of feeling queasy, with Facebook Dating the company is attempting to pull off a tricky balancing act of convincing existing users (many of whom will already be married and/or in a long term relationship) that it’s somehow totally normal to just bolt on a dating layer to something that’s supposed to be a generic social network.
    All of a sudden a space that’s always been sold — and traded — as a platonic place for people to forge ‘friendships’ is having sexual opportunity injected into it. Sure, the company is trying to keep these differently oriented desires entirely separate, by making the Dating component an opt-in feature that lurks within Facebook, and where, it says, any activity is siloed and kept off of mainstream Facebook. But the very existence of Facebook Dating means anyone in a relationship who is already on Facebook is now, on one level, involved with a dating app company.
    Facebook users may also feel they’re being dangled the opportunity to sign up to online dating on the sly — with the company then committing itself to being the secret-keeping go-between ferrying any flirtatious messages they care to send in a way that would be difficult for their spouse to know about, whether they’re on Facebook or not.
    How comfortable is Facebook going to be with being a potential aid to adultery? I guess we’ll have to wait and see how that pans out. As noted above, Facebook execs have — in the past — suggested the company is in the business of ‘connecting people, period’. So there’s perhaps a certain twisted logic working away as an undercurrent and driving its impulse to push for ever more human connections. But the company could be at risk of applying its famous “it’s complicated” relationship status to itself with the dating launch — and then raining complicated consequences down upon its users as a result. (As, well, it so often seems to do in the name of expanding its own business.)
    So instead of ‘don’t mix the streams’, with dating we’re seeing Facebook trying to get away with running entirely opposite types of social interactions in close parallel. What could possibly go wrong?! Or rather, what’s to stop someone in the ‘separate’ Facebook dating pool trying to Facebook-stalk a single they come across there who doesn’t respond to their overtures? (Given Facebook dating users are badged with their real Facebook names there could easily be user attempts to ‘cross over’.)
    And if sentiments from one siloed service spill over into mainstream Facebook things could get very messy indeed — and users could end up being doubly repelled by its service rather than additionally compelled. The risk is Facebook ends up fouling not feathering its own nest by trying to combine dating and social networking. (This less polite phrase also springs to mind.)
  7. Who are you hoping to date anyway?!
    Outside emerging markets Facebook’s growth has stalled. Even social networking’s later stage middle age boom looks tapped out. At the same time today’s teens are not at all hot for Facebook. The youngest web users are more interested in visually engaging social apps. And the company will have its work cut out trying to lure this trend-sensitive youth crowd. Facebook dating will probably sound like a bad joke — or a dad joke — to these kids.
    Going up the age range a bit, the under ~35s are hardly enamoured with Facebook either. They may still have a profile but also hardly think Facebook is cool. Some will have reduced their usage or even taken a mini break. The days of this age-group using Facebook to flirt with old college classmates are as long gone as sending a joke Facebook poke. Some are deleting their Facebook account entirely — and not looking back. Is this prime dating age-group suddenly likely to fall en masse for Facebook’s love match experiment? It seems doubtful.
    And it certainly looks like no accident Facebook is debuting Dating outside the US. Emerging markets, which often have young, app-loving populations, probably represent its best chance at bagging the critical mass of singles absolutely required to make any dating product even vaguely interesting.
    But in its marketing shots for the service Facebook seems to be hoping to attract singles in the late twenties age-range — dating app users who are probably among the ficklest, trickiest people for Facebook to lure with a late-stage, catch-all and, er, cringey proposition.
    After that, who’s left? Those over 35s who are still actively on Facebook are either going to be married — and thus busy sharing their wedding/baby pics — and not in the market for dating anyway; or if they are single they may be less inclined towards getting involved with online dating vs younger users who are now well accustomed to dating apps. So again, for Facebook, it looks like diminishing returns up here.
    And of course a dating app is only as interesting and attractive as the people on it. Which might be the biggest hurdle to Facebook making a mark on this well-served playing field — given its eponymous network is now neither young nor cool, hip nor happening, and seems to be having more of an identity crisis with each passing year.
    Perhaps Facebook could carve out a dating niche for itself among middle-age divorcees — by offering to digitally hand-hold them and help get them back into the dating game. (Although there’s zero suggestion that’s what it’s hoping to do with the service it debuted this week.)
    If Zuckerberg really wants to bag the younger singles he seems most interested in — at least judging by Facebook Dating’s marketing — he might have been better off adding a dating stream to Instagram.
    I mean, InstaLovegram almost sounds like it could be a thing.

Call for smart home devices to bake in privacy safeguards for kids

A new research report has raised concerns about how in-home smart devices such as AI virtual voice assistants, smart appliances, and security and monitoring technologies could be gathering and sharing children’s data.

It calls for new privacy measures to safeguard kids and make sure age appropriate design code is included with home automation technologies.

The report, entitled Home Life Data and Children’s Privacy, is the work of Dr Veronica Barassi of Goldsmiths, University of London, who leads a research project at the university investigating the impact of big data and AI on family life.

Barassi wants the UK’s data protection agency to launch a review of what she terms “home life data” — meaning the information harvested by smart in-home devices that can end up messily mixing adult data with kids’ information — to consider its impact on children’s privacy, and “put this concept at the heart of future debates about children’s data protection”.

“Debates about the privacy implications of AI home assistants and Internet of Things focus a lot on the collection and use of personal data. Yet these debates lack a nuanced understanding of the different data flows that emerge from everyday digital practices and interactions in the home and that include the data of children,” she writes in the report.

“When we think about home automation therefore, we need to recognise that much of the data that is being collected by home automation technologies is not only personal (individual) data but home life data… and we need to critically consider the multiple ways in which children’s data traces become intertwined with adult profiles.”

The report gives examples of multi-user functions and aggregated profiles (such as Amazon’s Household Profiles feature) as constituting a potential risk to children’s privacy.

Another example cited is biometric data — a type of information frequently gathered by in-home ‘smart’ technologies, such as via voice or facial recognition tech. Yet the report asserts that generic privacy policies often do not differentiate between adults’ and children’s biometric data. So that’s another grey area being critically flagged by Barassi.

She’s submitted the report to the ICO in response to its call for evidence and views on an Age Appropriate Design Code it will be drafting. This code is a component of the UK’s new data protection legislation intended to support and supplement rules on the handling of children’s data contained within pan-EU privacy regulation — by providing additional guidance on design standards for online information services that process personal data and are “likely to be accessed by children”.

And it’s very clear that devices like smart speakers intended to be installed in homes where families live are very likely to be accessed by children.

The report concludes:

There is no acknowledgement so far of the complexity of home life data, and much of the privacy debates seem to be evolving around personal (individual) data. It seems that companies are not recognizing the privacy implications involved in children’s daily interactions with home automation technologies that are not designed for or targeted at them. Yet they make sure to include children in the advertising of their home technologies. Much of the responsibility of protecting children is in the hands of parents, who struggle to navigate Terms and Conditions even after changes such as GDPR [the European Union’s new privacy framework]. It is for this reason that we need to find new measures and solutions to safeguard children and to make sure that age appropriate design code is included within home automation technologies.

“We’ve seen privacy concerns raised about smart toys and AI virtual assistants aimed at children, but so far there has been very little debate about home hubs and smart technologies aimed at adults that children encounter and that collect their personal data,” adds Barassi commenting in a statement.

“The very newness of the home automation environment means we do not know what algorithms are doing with this ‘messy’ data that includes children’s data. Firms currently fail to recognise the privacy implications of children’s daily interactions with home automation technologies that are not designed or targeted at them.

“Despite GDPR, it’s left up to parents to protect their children’s privacy and navigate a confusing array of terms and conditions.”

The report also includes a critical case study of Amazon’s Household Profiles — a feature that allows Amazon services to be shared by members of a family — with Barassi saying she was unable to locate any information on Amazon’s US or UK privacy policies on how the company uses children’s “home life data” (e.g. information that might have been passively recorded about kids via products such as Amazon’s Alexa AI virtual assistant).

“It is clear that the company recognizes that children interact with the virtual assistants or can create their own profiles connected to the adults. Yet I can’t find an exhaustive description or explanation of the ways in which their data is used,” she writes in the report. “I can’t tell at all how this company archives and sells my home life data, and the data of my children.”

Amazon does make this disclosure on children’s privacy — though it does not specifically state what it does in instances where children’s data might have been passively recorded (i.e. as a result of one of its smart devices operating inside a family home.)

Barassi also points out there’s no link to its children’s data privacy policy on the ‘Create your Amazon Household Profile’ page — where the company informs users they can add up to four children to a profile, noting there is only a tiny generic link to its privacy policy at the very bottom of the page.

We asked Amazon to clarify its handling of children’s data but at the time of writing the company had not responded to multiple requests for comment.

The EU’s new GDPR framework does require data processors to take special care in handling children’s data.

In its guidance on this aspect of the regulation the ICO writes: “You should write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.”

The ICO also warns: “The GDPR also states explicitly that specific protection is required where children’s personal data is used for marketing purposes or creating personality or user profiles. So you need to take particular care in these circumstances.”

Five security settings in iOS 12 you should change right now

iOS 12, Apple’s latest mobile software for iPhone and iPad, is finally out. The new software packs in a bunch of new security and privacy features you’ve probably already heard about.

Here’s what you need to do to take advantage of the new settings and lock down your device.

1. Turn on USB Restricted Mode to make hacking more difficult

This difficult-to-find new feature prevents any accessories from connecting to your device — like USB cables and headphones — when your iPhone or iPad has been locked for more than an hour. That prevents police and hackers alike from using tools to bypass your lock screen passcode and get your data.

Go to Settings > Touch ID & Passcode and type in your passcode. Then, scroll down and make sure the USB Accessories setting is switched to Off.

2. Make sure automatic iOS updates are turned on

Every time your iPhone or iPad updates, it comes with a slew of security patches to prevent crashes or data theft. Yet, how often do you update your phone? Most don’t bother unless it’s a major update. Now, iOS 12 will update your device behind the scenes, saving you downtime. Just make sure you switch it on.

Go to Settings > General > Software Update and turn on automatic updates.

3. Set a stronger device passcode

iOS has gotten better in recent years with passcodes. For years, it was a four-digit code by default; now it’s six digits. That makes it far more difficult to run through every combination — known as brute-forcing.

But did you know that you can set a number-only code of any length? Eight digits, twelve — even more — and it keeps the number keypad on the lock screen so you don’t have to fiddle around with the keyboard.

Go to Settings > Touch ID & Passcode and enter your passcode. Then, tap Change Passcode and, from the Passcode Options, set a Custom Numeric Code.

4. Now, switch on two-factor authentication

Two-factor is one of the best ways to keep your account safe. If someone steals your password, they still need your phone to break into your account. For years, two-factor has been cumbersome and annoying. Now, iOS 12 has a new feature that auto-fills the code, taking the frustrating extra step out of the equation — so you have no excuse.

You may be asked to switch on two-factor when you set up your phone. You can also go to Settings and tap your name, then go to Password & Security. Just tap Turn on Two-Factor Authentication and follow the prompts.

5. While you’re here… change your reused passwords

iOS 12’s password manager has a new feature: password auditing. If it finds you’ve used the same password on multiple sites, it will warn you and advise you to change those passwords. It prevents password reuse attacks (known as “credential stuffing“) that hackers use to break into multiple sites and services using the same username and password.

Go to Settings > Passwords & Accounts > Website & App Passwords and enter your passcode. You’ll see a small warning symbol next to each account where it recognizes a reused password. One tap of the Change Password on Website button and you’re done.
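The audit itself boils down to a simple grouping check: collect saved logins by password and flag any password that appears on more than one site. Here is a minimal conceptual sketch (Apple’s actual implementation isn’t public, and the vault contents below are invented):

```python
from collections import defaultdict

def find_reused_passwords(vault: dict) -> list:
    """Group saved logins by password and report any password used on
    more than one site. A conceptual stand-in for the kind of check a
    password manager performs; not Apple's implementation."""
    by_password = defaultdict(list)
    for site, password in vault.items():
        by_password[password].append(site)
    return [sites for sites in by_password.values() if len(sites) > 1]

# Invented example vault: two sites share a password, one is unique.
vault = {
    "mail.example.com": "hunter2",
    "shop.example.com": "hunter2",
    "bank.example.com": "x9!Lq#42",
}
print(find_reused_passwords(vault))  # → [['mail.example.com', 'shop.example.com']]
```

A single site breach exposing “hunter2” would compromise both flagged accounts at once, which is exactly the credential-stuffing risk described above.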

UK’s mass surveillance regime violated human rights law, finds ECHR

In another blow to the UK government’s record on bulk data handling for intelligence purposes the European Court of Human Rights (ECHR) has ruled that state surveillance practices violated human rights law.

Arguments against the UK intelligence agencies’ bulk collection and data sharing practices were heard by the court in November last year.

In today’s ruling the ECHR found that only some aspects of the UK’s surveillance regime violate human rights law. So it’s not all bad news for the government — which has faced a barrage of legal actions (and quite a few black marks against its spying practices in recent years) ever since its love affair with mass surveillance was revealed and denounced by NSA whistleblower Edward Snowden, back in 2013.

The judgement reinforces a sense that the government has been seeking to push as close to the legal line as possible on surveillance, and sometimes stepping over it — echoing earlier strikes against legislation for not setting tight enough boundaries on surveillance powers, and likely providing fresh fuel for further challenges.

The complaints before the ECHR focused on three different surveillance regimes: 1) The bulk interception of communications (aka ‘mass surveillance’); 2) Intelligence sharing with foreign governments; and 3) The obtaining of communications data from communications service providers.

The challenge actually combines three cases, with the action brought by a coalition of civil and human rights campaigners, including the American Civil Liberties Union, Amnesty International, Big Brother Watch, Liberty, Privacy International and nine other human rights and journalism groups based in Europe, Africa, Asia and the Americas.

The Chamber judgment from the ECHR found, by a majority of five votes to two, that the UK’s bulk interception regime violates Article 8 of the European Convention on Human Rights (a right to respect for private and family life/communications) — on the grounds that “there was insufficient oversight both of the selection of Internet bearers for interception and the filtering; search and selection of intercepted communications for examination; and the safeguards governing the selection of ‘related communications data’ for examination were inadequate”.

The judges did not find bulk collection itself to be in violation of the convention but noted that such a regime must respect criteria set down in case law.

In an even more pronounced majority vote, the Chamber found by six votes to one that the UK government’s regime for obtaining data from communications service providers violated Article 8 as it was “not in accordance with the law”.

Both the bulk interception regime and the regime for obtaining communications data from communications service providers were also deemed to have violated Article 10 of the Convention (the right to freedom of expression and information), as the judges found there were insufficient safeguards in respect of confidential journalistic material.

However the Chamber did not rule against the government in two other components of the case — finding that the regime for sharing intelligence with foreign governments did not violate either Article 8 or Article 10.

The court also unanimously rejected complaints made by the third set of applicants, under Article 6 (right to a fair trial), about the domestic procedure for challenging secret surveillance measures, and under Article 14 (prohibition of discrimination).

The complaints in this case were lodged prior to the UK legislating for a new surveillance regime, the 2016 Investigatory Powers Act, so in coming to a judgement the Chamber was considering the oversight regime at the time (and in the case of points 1 and 3 above that’s the Regulation of Investigatory Powers Act 2000).

RIPA has since been superseded by IPA but, as noted above, today’s ruling will likely fuel ongoing human rights challenges to the latter — which the government has already been ordered to amend by other courts on human rights grounds.

Nor is it the only UK surveillance legislation judged to have fallen foul on that front. A few years ago UK judges agreed with a similar legal challenge to emergency surveillance legislation that predates IPA — ruling in 2015 that DRIPA was unlawful under human rights law. A verdict the UK Court of Appeal agreed with earlier this year.

Also in 2015, the intelligence agencies’ own oversight court, the IPT, found multiple violations following challenges to aspects of its historical surveillance operations, after they had been made public by the Snowden revelations.

Such judgements did not stop the government pushing on with the IPA, though — and it went on to cement bulk collection at the core of its surveillance modus operandi at the end of 2016.

Among the most controversial elements of the IPA is a requirement that communications service providers collect and retain logs on the web activity of the digital services accessed by all users for 12 months; state power to require a company to remove encryption, or limit the rollout of end-to-end encryption on a future service; and state powers to hack devices, networks and services, including bulk hacking on foreign soil. It also allows the security agencies to maintain large databases of personal information on U.K. citizens, including individuals suspected of no crime.

On the safeguards front the government legislated for what it claimed was a “double lock” authorization process for interception warrants — looping the judiciary into signing off intercept warrants for the first time in the U.K., alongside senior ministers. However this does not regulate the collection or accessing of web activity data that’s blanket-retained on all users.

In April this shiny new surveillance regime was also dealt a blow in UK courts — with judges ordering the government to amend the legislation to narrow how and why retained metadata could be accessed, giving ministers a deadline of November 1 to make the necessary changes.

In that case the judges also did not rule against bulk collection in general — declining to find that the state’s current data retention regime is unlawful on the grounds that it constituted “general and indiscriminate” retention of data. (For its part the government has always argued its bulk collection activities do not constitute blanket retention.)

And today’s ECHR ruling further focuses attention on the safeguards placed around bulk collection programs — having found the UK regime lacked sufficient monitoring to be lawful (but not that bulk collection itself is unlawful by default).

Opponents of the current surveillance regime will be busily parsing the ruling to find fresh fronts to attack.

It’s not the first time the ECHR has looked at bulk interception. Most recently, in June 2018, it deemed that Swedish legislation and practice in the field of signals intelligence did not violate European human rights law. Among its reasoning was that it found the Swedish system to have provided “adequate and sufficient guarantees against arbitrariness and the risk of abuse”.

However it said the Big Brother Watch and Others vs United Kingdom case being ruled upon today is the first case in which it specifically considered the extent of the interference with a person’s private life that could result from the interception and examination of communications data (as opposed to content).

In a Q&A about today’s judgement, the court notes that it “expressly recognised” the severity of threats facing states, and also how advancements in technology have “made it easier for terrorists and criminals to evade detection on the Internet”.

“It therefore held that States should enjoy a broad discretion in choosing how best to protect national security. Consequently, a State may operate a bulk interception regime if it considers that it is necessary in the interests of national security. That being said, the Court could not ignore the fact that surveillance regimes have the potential to be abused, with serious consequences for individual privacy. In order to minimise this risk, the Court has previously identified six minimum safeguards which all interception regimes must have,” it writes.

“The safeguards are that the national law must clearly indicate: the nature of offences which may give rise to an interception order; a definition of the categories of people liable to have their communications intercepted; a limit on the duration of interception; the procedure to be followed for examining, using and storing the data obtained; the precautions to be taken when communicating the data to other parties; and the circumstances in which intercepted data may or must be erased or destroyed.”

(Additional elements the court says it considered in an earlier surveillance case, Roman Zakharov v. Russia, also to determine whether legislation breached Article 8, included “arrangements for supervising the implementation of secret surveillance measures, any notification mechanisms and the remedies provided for by national law”.)

Commenting on today’s ruling in a statement, Megan Goulding, a lawyer for Liberty, said: “This is a major victory for the rights and freedom of people in the UK. It shows that there is — and should be — a limit to the extent that states can spy on their citizens.

“Police and intelligence agencies need covert surveillance powers to tackle the threats we face today — but the court has ruled that those threats do not justify spying on every citizen without adequate protections. Our government has built a surveillance regime more extreme than that of any other democratic nation, abandoning the very rights and freedoms terrorists want to attack. It can and must give us an effective, targeted system that protects our safety, data security and fundamental rights.”

A Liberty spokeswoman also told us it will continue its challenge to IPA in the UK High Court, adding: “We continue to believe that mass surveillance can never be compliant in a free, rights-respecting democracy.”

Also commenting in a statement, Silkie Carlo, director of Big Brother Watch, said: “This landmark judgment confirming that the UK’s mass spying breached fundamental rights vindicates Mr Snowden’s courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice.

“Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.”

A spokesperson for Privacy International told us it’s considering taking the case to the ECHR’s Grand Chamber.

Also commenting in a supporting statement, Antonia Byatt, director of English PEN, added: “This judgment confirms that the British government’s surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative journalism. The government must now take action to guarantee our freedom to write and to read freely online.”

We’ve reached out to the Home Office for comment from the UK government.

On intelligence sharing between governments, which the court had not previously considered, the judges found the procedure for requesting either the interception or the conveyance of intercept material from foreign intelligence agencies to have been set out with “sufficient clarity in the domestic law and relevant code of practice”, noting: “In particular, material from foreign agencies could only be searched if all the requirements for searching material obtained by the UK security services were fulfilled.”

It also found “no evidence of any significant shortcomings in the application and operation of the regime, or indeed evidence of any abuse” — hence finding the intelligence sharing regime did not violate Article 8.

On the portion of the challenge concerning complaints that UK intelligence agencies’ oversight court, the IPT, lacked independence and impartiality, the court disagreed — finding that the tribunal had “extensive power to consider complaints concerning wrongful interference with communications, and those extensive powers had been employed in the applicants’ case to ensure the fairness of the proceedings”.

“Most notably, the IPT had access to open and closed material and it had appointed Counsel to the Tribunal to make submissions on behalf of the applicants in the closed proceedings,” it also writes.

In addition, it said it accepted the government’s argument that in order to ensure the efficacy of the secret surveillance regime restrictions on the applicants’ procedural rights had been “both necessary and proportionate and had not impaired the essence of their Article 6 rights”.

On the complaints under Article 14, in conjunction with Articles 8 and 10 — that those outside the UK were disproportionately likely to have their communications intercepted as the law only provided additional safeguards to people known to be in Britain — the court also disagreed, rejecting this complaint as manifestly ill-founded.

“The applicants had not substantiated their argument that people outside the UK were more likely to have their communications intercepted. In addition, any possible difference in treatment was not due to nationality but to geographic location, and was justified,” it writes. 

Security flaw in ‘nearly all’ modern PCs and Macs exposes encrypted data

Most modern computers, even devices with disk encryption, are vulnerable to a new attack that can steal sensitive data in a matter of minutes, new research says.

In new findings published Wednesday, F-Secure said that none of the firmware security measures in the laptops it tested “does a good enough job” of preventing data theft.

F-Secure principal security consultant Olle Segerdahl told TechCrunch that the vulnerabilities put “nearly all” laptops and desktops — running both Windows and macOS — at risk.

The new exploit is built on the foundations of a traditional cold boot attack, which hackers have long used to steal data from a shut-down computer. Modern computers overwrite their memory when a device is powered down to prevent the data from being read. But Segerdahl and his colleague Pasi Saarinen found a way to disable the overwriting process, making a cold boot attack possible again.

“It takes some extra steps,” said Segerdahl, but the flaw is “easy to exploit.” So much so, he said, that it would “very much surprise” him if this technique isn’t already known by some hacker groups.

“We are convinced that anybody tasked with stealing data off laptops would have already come to the same conclusions as us,” he said.

It’s no secret that if you have physical access to a computer, the chances of someone stealing your data are usually greater. That’s why so many use disk encryption — like BitLocker for Windows and FileVault for Macs — to scramble and protect data when a device is turned off.

But the researchers found that in nearly all cases they can still steal data protected by BitLocker and FileVault regardless.

After the researchers figured out how the memory overwriting process works, they said it took just a few hours to build a proof-of-concept tool that prevented the firmware from clearing secrets from memory. From there, the researchers scanned for disk encryption keys, which, when obtained, could be used to mount the protected volume.
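The researchers’ tooling isn’t public, but the general idea of hunting for key material in a captured memory image can be sketched with a crude entropy heuristic: encryption keys look like random bytes, while most surrounding memory does not. The sketch below is a simplified illustration under that assumption, not F-Secure’s actual method (real key-finding tools typically verify candidate AES key schedules rather than rely on entropy alone):

```python
import math
from collections import Counter

def shannon_entropy(window: bytes) -> float:
    """Shannon entropy in bits over the window (max 4.0 for 16 bytes)."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def scan_for_key_material(dump: bytes, window=16, step=16, threshold=3.0):
    """Flag aligned offsets whose bytes look random enough to be a key."""
    hits = []
    for off in range(0, len(dump) - window + 1, step):
        if shannon_entropy(dump[off:off + window]) >= threshold:
            hits.append(off)
    return hits

# A mostly-zeroed toy 'memory image' with a fake 16-byte key planted at
# offset 64 (a stand-in for a disk encryption key left in RAM).
image = bytearray(256)
image[64:80] = bytes(range(16))
print(scan_for_key_material(bytes(image)))  # → [64]
```

With a real dump, each hit would then be tested as a candidate key against the encrypted volume — the step the researchers describe as mounting the protected volume once a key is obtained.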

It’s not just disk encryption keys at risk, Segerdahl said. A successful attacker can steal “anything that happens to be in memory,” like passwords and corporate network credentials, which can lead to a deeper compromise.

Their findings were shared with Microsoft, Apple, and Intel prior to release. According to the researchers, only a smattering of devices aren’t affected by the attack. Microsoft said in a recently updated article on BitLocker countermeasures that using a startup PIN can mitigate cold boot attacks, but Windows users with “Home” licenses are out of luck. And any Apple Mac equipped with a T2 chip isn’t affected, though a firmware password would still improve protection.

Both Microsoft and Apple downplayed the risk.

Acknowledging that an attacker needs physical access to a device, Microsoft said it encourages customers to “practice good security habits, including preventing unauthorized physical access to their device.” Apple said it was looking into measures to protect Macs that don’t come with the T2 chip.

When reached, Intel would not comment on the record.

In any case, the researchers say, there’s not much hope that affected computer makers can fix their fleet of existing devices.

“Unfortunately, there is nothing Microsoft can do, since we are using flaws in PC hardware vendors’ firmware,” said Segerdahl. “Intel can only do so much, their position in the ecosystem is providing a reference platform for the vendors to extend and build their new models on.”

Companies, and users, are “on their own,” said Segerdahl.

“Planning for these events is a better practice than assuming devices cannot be physically compromised by hackers because that’s obviously not the case,” he said.

The best security and privacy features in iOS 12 and macOS Mojave

September is Apple hardware season, when we expect new iPhones, a new Apple Watch and more. But what makes the good stuff run is the software within.

First revealed earlier this year at the company’s annual WWDC developer event in June, iOS 12 and macOS Mojave focus on a running theme: security and privacy for the masses.

Ahead of Wednesday’s big reveal, here’s all the good stuff to look out for.

macOS Mojave

macOS Mojave will be the latest version of the Mac operating system, named after the California desert. It comes with dark mode, file stacks, and group FaceTime calls.

Safari now prevents browser fingerprinting and cross-site tracking

What does it do? Safari will use a new “intelligent tracking prevention” feature to prevent advertisers from following you from site to site. Even social networks like Facebook know which sites you visit because so many embed Facebook’s tools — like the comments section or the “Like” button.

Why does it matter? Tracking prevention will prevent ad firms from building a unique “fingerprint” of your browser, making it difficult to serve you targeted ads — even when you’re in incognito mode or private browsing. That’s an automatic boost for personal privacy as these companies will find it more difficult to build up profiles on you.
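Conceptually, a fingerprint is just a hash over whatever stable attributes a site’s scripts can read from the browser: no cookie required, the same browser yields the same identifier on every site. A minimal sketch (the attribute set and field names here are illustrative, not Safari’s or any ad network’s actual inputs):

```python
import hashlib
import json

def browser_fingerprint(attrs: dict) -> str:
    """Hash a set of readable browser attributes into one identifier.
    Illustrative only: real fingerprinting scripts use many more signals
    (canvas rendering, installed plugins, audio stack quirks, etc.)."""
    canonical = json.dumps(attrs, sort_keys=True)  # stable ordering
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Invented visitor attributes for illustration.
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14)",
    "screen": "2560x1600",
    "timezone": "Europe/London",
    "fonts": ["Helvetica", "Menlo", "Baskerville"],
    "language": "en-GB",
}
print(browser_fingerprint(visitor))
```

Safari’s countermeasure works by flattening those inputs: presenting a simplified, common configuration to scripts so that many different Macs hash to the same value, which makes the identifier useless for tracking.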

Camera, microphone, backups now require permission

What does it do? Just like when an app asks you for access to your contacts and calendar, now Mojave will ask for permission before an app can access your FaceTime camera and microphone, as well as location data, backups and more.

Why does it matter? By expanding this feature, it’s much more difficult for apps to switch on your camera without warning or record from your microphone without you noticing. That’s going to prevent surreptitious ultrasonic ad tracking and surveillance by malware that hijacks your camera. Requiring permission for access to your backups — often unencrypted — will also prevent malware or hackers from quietly stealing your data.

iOS 12

iOS 12 lands on more recent iPhones and iPads, but will bring significant performance boosts to older supported devices, new Maps, smarter notifications and an updated ARKit.

Password manager will warn of password reuse

What does it do? iOS 12’s in-built password manager, which stores all your passwords for easy access, will now tell you if you’re using the same password across different sites and apps.

Why does it matter? Password reuse is a real problem. If you use the same password on every site, it only takes one site breach to grab your password for every other site you use. iOS 12 will let you know if you’re using a weak password or the same password on different sites. Your passwords are easily accessible with your fingerprint or your passcode.

Two-factor codes will be auto-filled

What does it do? When you are sent a two-factor code — such as a text message or a push notification — iOS 12 will take that code and automatically enter it into the login box.

Why does it matter? Two-factor authentication is good for security — it adds an extra layer of protection on top of your username and password. But adoption is low because two-factor is cumbersome and frustrating. This feature keeps the security intact while making the process more seamless and less annoying.
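Apple hasn’t documented exactly how the autofill recognizes codes, but conceptually it amounts to spotting a short numeric token in the incoming message and offering it above the keyboard. A deliberately naive sketch of that idea (the pattern below is invented and far cruder than whatever iOS actually does):

```python
import re

def extract_otp(message: str):
    """Pull a likely one-time code (4-8 consecutive digits) out of an
    SMS body. A conceptual stand-in for iOS 12's autofill heuristics,
    which are not public; this naive pattern would also match years,
    street numbers and other digit runs."""
    match = re.search(r"\b(\d{4,8})\b", message)
    return match.group(1) if match else None

print(extract_otp("Your verification code is 482913."))  # → 482913
```

The user-visible effect is the same either way: the code appears as a suggestion, one tap fills the login box, and nobody has to memorize and retype six digits against a countdown.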

USB Restricted Mode makes hacking more difficult

What does it do? This new security feature will lock any accessories out of your device — including USB cables and headphones — when your iPhone or iPad has been locked for more than an hour.

Why does it matter? This optional feature — first added in iOS 11.4.1 but likely to see wider adoption with iOS 12 — will make it more difficult for law enforcement (and hackers) to plug into your device and steal your sensitive data. Because your device is encrypted, not even Apple can get your data, but some devices — like the GrayKey — can brute-force your passcode. This feature renders those devices largely ineffective.

Apple’s event starts Wednesday at 10am PT (1pm ET).

more iPhone Event 2018 coverage

Dozens of popular iPhone apps caught sending user location data to monetization firms

A group of security researchers say dozens of popular iPhone apps are quietly sharing the location data of “tens of millions of mobile devices” with third-party data monetization firms.

Almost all of the apps, such as weather and fitness apps, require access to a user’s location data to work properly, but many also share that data, often as a way to generate revenue for free-to-download apps.

In many cases, the apps send precise locations and other sensitive, identifiable data “at all times, constantly,” and often with “little to no mention” that location data will be shared with third-parties, say security researchers at the GuardianApp project.

“I believe people should be able to use any app they wish on their phone without fear that granting access to sensitive data may mean that this data will be quietly sent off to some entity who they do not know and do not have any desire to do business with,” said Will Strafach, one of the researchers.

Using tools to monitor network traffic, the researchers found 24 popular iPhone apps that were collecting location data — ranging from Bluetooth beacons to Wi-Fi network names — to know where a person is and where they visit. These data monetization firms also collect other device data, such as accelerometer readings, battery charge status and cell network names.
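To make the concern concrete, here is a hypothetical sketch of the kind of record an embedded SDK could assemble from data the host app already has permission to read. Every field name and value below is invented for illustration; none is taken from any of the firms named in this story:

```python
import json
import time

def build_location_payload(lat, lon, wifi_ssid, battery_pct, device_id):
    """Hypothetical illustration of a location record an embedded
    monetization SDK might assemble. All field names are invented."""
    return {
        "device_id": device_id,   # e.g. a resettable advertising identifier
        "ts": int(time.time()),   # timestamp: enables 'at all times' trails
        "lat": round(lat, 6),     # six decimal places is ~10 cm precision
        "lon": round(lon, 6),
        "wifi_ssid": wifi_ssid,   # network names hint at home or workplace
        "battery": battery_pct,
    }

record = build_location_payload(51.507351, -0.127758,
                                "HomeNetwork-5G", 82, "ad-id-0001")
print(json.dumps(record))
```

Even without a name or email address, a stream of records like this pins a device to a home and a workplace within days, which is why “anonymous” is doing a lot of work in the firms’ denials below.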

In exchange, these data firms often pay the developers to collect the data, which grows the firms’ databases and is used to deliver ads based on a person’s location history.

Although many of the firms claim they don’t collect personally identifiable information, Strafach said that latitude and longitude coordinates can pin a person to a house or their workplace.

To name a few:

ASKfm, a teen-focused anonymous question-and-answer app, has 1,400 ratings on the Apple App Store and touts tens of millions of users. It asks for access to a user’s location that “won’t be shared with anyone.” But the app sends that location data to two data firms, AreaMetrics and Huq. When reached, the app maker said it believes its location collection practices “fit industry standards, and are therefore acceptable for our users.”

NOAA Weather Radar has more than 266,000 reviews and has millions of downloads. Access to your location “is used to provide weather info.” But an earlier version of the app from March was sending location data to three firms, Factual, Sense360 and Teemo. The code has since been removed. A spokesperson for Apalon, which built the app, said it “conducted a limited, brief test with a few of these providers” earlier this year.

Homes.com is a popular app that asks that you switch on your location to help “find nearby homes.” But the code, thought to be old code, still sends precise coordinates to AreaMetrics. The app maker said it used AreaMetrics “for a short period” last year but said the code was deactivated.

Perfect365, an augmented reality beauty app with more than 100 million users, asks for location access to “customize your experience based on your location and more,” and refers users to its privacy policy for more — which does state that location data will be used for advertising. The app was briefly pulled after a BuzzFeed News story earlier this year outed the researchers, but returned to the app store days later. The latest version of the app contains code for eight separate data monetization firms. The app maker did not return a request for comment.

And the list goes on — including more than a hundred Sinclair-owned local news and weather apps, which share location data with Reveal, a data tracking and monetization firm, which Reveal says will help the media giant bolster its sales by “providing advertisers with target audiences.”

That can quickly become a lucrative business for developers with popular apps and monetization firms alike, some of which collect billions of locations each day.

Most of the data monetization firms deny any wrongdoing and say that users can opt out at any time. Most also said they require app developers to explicitly state that they are collecting and sending data to third-party firms.

The team’s research shows that those requirements are almost never verified.

Reveal said it requires customers “state the use cases for location data in their privacy policy” and that users can opt-out at any time. Huq, like Reveal, said it carries out “regular checks on our partner apps to ensure that they have implemented” measures that explain the company’s services. AreaMetrics, which collects primarily Bluetooth beacon data from public areas like coffee shops and retail stores, says it has “no interest” in receiving personal data from users.

Sense360 said the data it collects is anonymous and that it requires apps to get explicit consent from their users, though Strafach said few apps he’s seen actually seek that consent. The company did not answer a specific question about why it no longer works with certain apps. Wireless Registry said it also requires that apps seek consent from users, but would not comment on the security measures it uses to ensure user privacy. And in remarks, inMarket said it follows advertising standards and guidelines.

Cuebiq claims to use an “advanced cryptography method” to store and transmit data, but Strafach said he found “no evidence” that any data was scrambled. The company says it’s not a “tracker,” and that while some app developers look to monetize users’ data, most use it for insights. Factual said it uses location data for advertising and analytics, but says it must obtain in-app consent from users.

When reached, Teemo did not answer our questions. SafeGraph, Mobiquity and Fysical did not respond to requests for comment.

“None of these companies appear to be legally accountable for their claims and practices, instead there is some sort of self-regulation they claim to enforce,” said Strafach.

He said there isn’t much users can do, but limiting ad tracking in your iPhone’s privacy settings can make it more difficult for location trackers to identify users.

Apple’s crackdown on apps that don’t have privacy policies kicks in next month. But given how few people read them in the first place, don’t expect apps to change their behavior any time soon.

Salesforce research: Yep, consumers are worried about their data

Recent headlines at TechCrunch and elsewhere have been filled with news about data breaches, data misuse and other data-related scandals. But has that actually affected how consumers think about their personal data?

A new report from Salesforce Research sheds some light on this question. In a survey of 6,723 individuals globally, Salesforce found that 59 percent of respondents believe their personal information is vulnerable to a security breach, while 54 percent believe that the companies holding that data don’t have their best interests in mind.

Respondents also said that these feelings will affect their choices as consumers — for example, 86 percent said that if they trust a company, they’re more likely to share their experiences, and that number goes up to 91 percent among millennials and Gen Zers.

The findings seem similar to (if more general than) research from Pew showing that Americans have become more cautious and critical in how they use Facebook.

[Salesforce research chart]

At the same time, it sounds like people do want some degree of personalization in their marketing — the same personalization that requires data. Eighty-four percent of respondents said they want to be treated “like a person, not a number,” and 54 percent said current marketing messages aren’t as relevant as they’d like.

Salesforce says that while this might seem like a paradox, personalization and trust are not mutually exclusive. To illustrate this, it notes that 86 percent of respondents said they’re more likely to trust a company with their personal information if it explains how that information leads to a better customer experience, and 68 percent said they’re more likely to trust companies with that info if they’ll use it to fully personalize the customer experience.

“With technologies like AI driving more personalized customer experiences, customer trust needs to be grounded in a deeper understanding of the technologies’ value,” the report says. “Among millennials and Gen Zers, 91% are more likely to trust companies with their personal information if they explain how its use will deliver a better experience — suggesting that strict security and privacy protocols alone may not be enough.”

You can read the full research brief here.

AnchorFree, maker of Hotspot Shield, raises $295 million in new funding

AnchorFree, the maker of a popular virtual private networking app, has raised $295 million in a new round of funding, the company announced Wednesday.

The Redwood City, Calif.-based app maker’s flagship app Hotspot Shield ranks as one of the most popular VPN apps on the market. The app, based on a freemium model, allows users across the world to tunnel their internet connections through AnchorFree’s servers, which masks their browsing histories from their internet providers and allows those under oppressive regimes to evade state-level censorship.

The app has 650 million users in 190 countries, the company said, and also has a business-focused offering.

The funding round was led by WndrCo, a holding company focused on consumer tech businesses, with participation from Accel Partners, 8VC, SignalFire, and Green Bay Ventures, among others.

“The WndrCo team brings deep operational experience in launching and scaling global tech products, and we look forward to working closely with them in pursuit of our mission to provide secure access to the world’s information for every person on the planet,” said AnchorFree’s chief executive David Gorodyansky in remarks.

The news was first reported by The New York Times.
George Church’s genetics-on-the-blockchain startup just raised $4.3 million from Khosla

Nebula Genomics, the startup that wants to put your whole genome on the blockchain, has announced a $4.3 million Series A round from Khosla Ventures and other leading tech VCs, including Arch Venture Partners, Fenbushi Capital, Mayfield, F-Prime Capital Partners, Great Point Ventures, Windham Venture Partners, Hemi Ventures, Mirae Asset, Hikma Ventures and Heartbeat Labs.

Nebula has also forged a partnership with genome sequencing company Veritas Genetics.

Veritas was one of the first companies to sequence an entire human genome for less than $1,000, in 2015, later making all that info accessible at the touch of a button on your smartphone. Both Nebula and Veritas were co-founded by MIT professor and “godfather” of the Human Genome Project, George Church.

The partnership between the two companies will allow the Nebula marketplace, where users who consent to share their genetic data can earn Nebula’s cryptocurrency, called “Nebula tokens,” to build upon Veritas’ open-source software platform Arvados, which can process and share large amounts of genetic information and other big data. According to the company, this crossover offers privacy and security for the physical storage and management of various data sets in accordance with local rules and regulations.

“As our own database grows to many petabytes, together with the Nebula team we are taking the lead in our industry to protect the privacy of consumers while enabling them to participate in research and benefit from the blockchain-based marketplace Nebula is building,” Veritas CEO Mirza Cifric said in a statement.

The partnership will work with various academic institutions and industry researchers to provide genomic data from individual consumers looking to cash in by sharing their own data, rather than by freely giving it as they might through another genomics company like 23andMe.

“Compared to centralized databases, Nebula’s decentralized and federated architecture will help address privacy concerns and incentivize data sharing,” added Nebula Genomics co-founder Dennis Grishin. “Our goal is to create a data flow that will accelerate medical research and catalyze a transformation of health care.”