Apple ad focuses on iPhone’s most marketable feature — privacy

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually cued, with no dialog and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one line of text otherwise, and it’s in keeping with Apple’s messaging on privacy over the long and short term. “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.

You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. A few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from other companies. The undercurrent being that Apple can take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google or Facebook, which insert an interstitial layer of monetization on top of that relationship: applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, which in turn can sell to you better.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm when it comes to the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.

Don’t break up big tech — regulate data access, says EU antitrust chief

Breaking up tech giants should be a measure of last resort, the European Union’s competition commissioner, Margrethe Vestager, has suggested.

“To break up a company, to break up private property would be very far reaching and you would need to have a very strong case that it would produce better results for consumers in the marketplace than what you could do with more mainstream tools,” she warned this weekend, speaking in a SXSW interview with Recode’s Kara Swisher. “We’re dealing with private property. Businesses that are built and invested in and become successful because of their innovation.”

Vestager has built a reputation for being feared by tech giants, thanks to a number of major (and often expensive) interventions since she took up the Commission antitrust brief in 2014, with one big investigation still hanging over Google.

But while opposition politicians in many Western markets — including high profile would-be U.S. presidential candidates — are now competing on sounding tough on tech, the European commissioner advocates taking a scalpel to data streams rather than wielding a break-up hammer to smash market-skewing tech giants.

“When it comes to the very far reaching proposal to split up companies, for us, from a European perspective, that would be a measure of last resort,” she said. “What we do now, we do the antitrust cases, misuse of dominant position, the tying of products, the self-promotion, the demotion of others, to see if that approach will correct and change the marketplace to make it a fair place where there’s no misuse of dominant position but where smaller competitors can have a fair go. Because they may be the next big one, the next one with the greatest idea for consumers.”

She also pointed to an agreement last month between key European political institutions on regulating online platform transparency as an example of the kind of fairness-focused intervention she believes can work to counter market imbalance.

The bread-and-butter work regulators should be focused on where big tech is concerned, she suggested, is things like digital sector enquiries and hearings that examine in detail how markets are operating — using careful scrutiny to inform and shape intelligent, data-led interventions.

Albeit ‘break up Google’ clearly makes for a punchier political soundbite.

Vestager is, however, in the final months of her term as antitrust chief — with the Commission due to turn over this year. Her time at the antitrust helm will end on November 1, she confirmed. (Though she remains, at least tentatively, on a shortlist of candidates who could be appointed the next European Commission president.)

The commissioner has spoken up before about regulating access to data as a more interesting option for controlling digital giants vs breaking them up.

And some European regulators appear to be moving in that direction already. Take the German Federal Cartel Office (FCO), which last month announced a decision against Facebook that aims to limit how it can use data from its own services. The FCO’s move has been couched as akin to an internal break-up of the company, at the data level, without the tech giant being forced to separate and sell off business units like Instagram and WhatsApp.

It’s perhaps not surprising, therefore, that Facebook founder Mark Zuckerberg announced a massive plan to merge all three services at the technical level just last week — billing the switch to encrypted content but merged metadata as a ‘pro-privacy’ move, while clearly also intending to restructure his empire in a way that works against regulatory interventions that separate and control internal data flows at the product level.

The Competition Commission does not have a formal probe of Facebook or the social media sector open at this point, but Vestager said her department does have its eye on how social media giants are using data.

“We’re sort of hoovering over social media, Facebook — how data’s being used in that respect,” she said, also flagging the preliminary work it’s doing looking into Amazon’s use of merchant data. (Also still not yet a formal probe.)

“The good thing is now the debate is really sort of taking off,” she added, of competition regulation generally. “When I’ve been visiting and speaking with people on The Hill previously, I’ve sensed a new sort of interest and curiosity as to what can competition achieve for you in a society. Because if you have fair competition then you have markets serving the citizen in our role as consumer and not the other way around.”

Asked whether she’s personally convinced by Facebook’s sudden ‘appreciation’ of privacy, Vestager said that if the announcement signifies a genuine change of philosophy and direction which leads to shifts in its business practices, it would be good news for consumers.

Though she said she’s not simply taking Zuckerberg at his word at this point. “It may be a little far-reaching to assume the best,” she said politely when pushed by Swisher on whether she believed a sincere pivot is possible from a company with such a long privacy-hostile history.

Big tech, small tax

The interview also delved into the issue of big tech and the tiny amounts it pays in tax.

Reforming the global tax system so digital businesses pay a fair share vs traditional businesses is now “urgent” work to do, said Vestager — highlighting how the lack of a consensus position among EU Member States is pushing some countries to move forward with their own measures, given resistance to Commission proposals from other corners of the bloc.

France’s push for a tax on tech giants this year is “absolutely necessary but very unfortunate”, Vestager said.

“When you do numbers that can be compared we see that digital businesses they would pay on average nine per cent [in taxes] where traditional businesses on average pay 23 per cent,” she continued. “Yet they’re in the same market for capital, for skilled employees, sometimes competing for the same customers. So obviously this is not fair.”

The Commission’s hope is that individual “pushes” from Member States frustrated by the current tax imbalance will generate momentum for “a European-wide way of doing things” — and therefore that any fragmentation of tax policies across the bloc will be short-lived.

She also said Europe is keen for the Organisation for Economic Co-operation and Development to “push forward for this” too, remarking: “Because we sense in the OECD that a number of places in the world take an interest also in the U.S. side of things.”

Is the better way to reset inequalities related to big tech and society achieved via reforming the tax system or are regulators doomed to have to keep fining them “into the next century”, wondered Swisher.

“You get a fine when you do something illegal. You pay your taxes to contribute to society where you do your business. These are two different things and we definitely need both,” responded Vestager. “But we cannot have a situation where some businesses do not contribute and the majority of businesses they do. Because it’s simply not fair in the marketplace or fair towards citizens if this continues.”

She also gave short shrift to the favored big tech lobbyist line — to loudly claim privacy regulation helps big guys because it’s easier for them to fund compliance — by pointing out that Europe’s General Data Protection Regulation has “different brackets” and does not simply clobber big and small alike with the same requirements.

Of course small businesses “don’t have the same obligations as Google”, said Vestager.

“I’d say if they find it easy, I’d say they can do better,” she added, raising the much complained about consumer rights issue of consent vs inscrutable T&Cs.

“Because I still find that it’s quite tricky to understand what it is that you accept when you accept your terms and conditions. And I think it would be great if we as citizens could really say ‘oh this is what I am signing up to and I’m perfectly happy with that’.”

Though she admitted there’s still a way to go for European privacy rights to be fully functioning as intended — arguing it’s still too hard for individual consumers to exercise the rights they have in law.

“I know I own my data but I really do not know how to exercise that ownership,” she said. “How to allow for more people to have access to my data if I want to enable innovation, new market participants coming in. If that was done in large scale you could have an innovative input into the marketplace and we’re definitely not there yet,” she said.

Asked about the idea of taxing data flows as another possible means of clipping the wings of big tech, Vestager pointed to early signs of an intermediate market spinning up in Europe to help individuals extract value from what corporate entities are doing with their information. So not literally a tax on data flows, but a way for consumers to claw back some of the value that’s being stripped from them.

“It’s still nascent in Europe but since now we have the rights that establishes your ownership of your data we see there is a beginning market development of intermediaries saying should I enable you yourself to monetize your data, so it’s not just the giants who monetize your data. So that maybe you get a sum every month reflecting how your data has been passed on,” she said. “That is one opportunity.”

She also said the Commission is looking at how to make sure “huge amounts of data will not be a barrier to entry in a marketplace” — or present a barrier to innovation for newcomers. The latter being key given how tech giants’ massive data pools are translating into a meaty advantage in AI R&D.

In another interesting exchange, Vestager suggested the convenience of voice interfaces presents an acute competition challenge — given how the tech could naturally concentrate market power via preferring quick-fire Q&A style interactions which don’t support offering lots of choice options.

“One of the things that is really mindboggling for us is how to have choice if you have voice,” she said, arguing that the voice assistant dynamic doesn’t lend itself to multiple suggestions being offered every time a user asks a question. “So how to have competition when you have voice search? How would this change the marketplace and how would we deal with such a market? So this is what we’re trying to figure out.”

Again she suggested regulators are thinking about how data flows behind the scenes as a potential route to remedying interfaces that work against choice.

“We’re trying to figure out how access to data will change the marketplace,” she added. “Can you give a different access to data because the one who holds the data, also holds the resources for innovation. And we cannot rely on the big guys to be the innovative ones.”

Asked for her worst case scenario for tech 10 years hence, she said it would be to have “all of the technology but none of the societal positive oversight and direction”.

On the flip side, the best case would be for legislators to be “willing to take sufficient steps in taxation and in regulating access to data and fairness in the marketplace”.

“We would also need to see technology develop to have new players,” she emphasized. “Because we still need to see what will happen with quantum computing, what will happen with blockchain, what other uses are there for all of that new technology. Because I still think that it holds a lot of promise. But only if our democracy will give it direction. Then you will have a positive outcome.”

Taxing your privacy

Data collection through mobile tracking is big business and the potential for companies helping governments monetize this data is huge. For consumers, protecting yourself against the who, what and where of data flow is just the beginning. The question now is: How do you ensure your data isn’t costing you money in the form of new taxes, fees and bills, particularly when the entity that stands to benefit from this data, the government, is also tasked with protecting it?

The advances in personal data collection are a source of growing concern for privacy advocates, but while most fears tend to focus on what type of data is being collected, who’s watching and to whom your data is being sold, the potential for this same data to be monetized via auditing and compliance fees is even more problematic.

The fact is, you no longer need massive infrastructure to track, and tax, businesses and consumers. State governments and municipalities have taken notice.

The result is a potential multi-billion-dollar-per-year business that, with mobile tracking technology, will only grow year over year.

Yet, while the revenue upside for companies helping smart cities (and states) with taxing and tolling is significant, it is also rife with contradictions and complications that could, ultimately, pose serious problems to those companies’ underlying business models and for the investors that bet heavily on them.


The most common argument when privacy advocates bring up concerns around mobile data collection is that consumers almost always have the control to opt out. When governments utilize this data, however, that option is not always available. And the direct result is the monetization of a consumer’s privacy in the form of taxes and tolls. In an era where states like California and others are stepping up as self-proclaimed defenders of citizen privacy and consent, this puts everyone involved in an awkward position — to say the least.

The marriage of smart cities and next-gen location tracking apps is becoming more commonplace.  AI, always-on data flows, sensor networks and connected devices are all being employed by governments in the name of sustainable and equitable cities as well as new revenue.

New York, LA and Seattle are all implementing (or considering implementing) congestion pricing that would ultimately rely on harvesting personal data in some form or another. Oregon, which passed the first gas tax in 1919, began its OReGO program two years ago, using data on miles driven to levy fees on drivers and address infrastructure issues with its roads and highways.


As more state and local governments look to emulate these kinds of policies, the revenue opportunity for companies and investors harvesting this data is obvious. Populus (a portfolio company), a data platform that helps cities manage mobility, captures data from fleets like Uber and Lyft to help cities set policy and collect fees.

Similarly, ClearRoad is a “road pricing transaction processor” that leverages data from vehicles to help governments determine road usage for new revenue streams. Safegraph, on the other hand, is a company that collects millions of trackers from smartphones daily via apps, APIs and other delivery methods, often leaving the business of disclosure up to third parties. Data like this has begun to make its way into smart city applications that could impact industries from the real estate market to the Gig Economy.

“There are lots of companies that are using location technology, 3D scanning, sensor tracking and more. So, there are lots of opportunities to improve the effectiveness of services and for governments to find new revenue streams,” says Paul Salama, COO of ClearRoad. “If you trust the computer to regulate, as opposed to the written code, then you can allow for a lot more dynamic types of regulation, and that extends beyond vehicles to noise pollution, particulate emissions, temporary signage, etc.”

While most of these platforms and technologies endeavor to do some public good by creating the baseline for good policy and sustainable cities, they also raise concerns about individual privacy and the potential for discrimination. And there is an inherent contradiction in states ostensibly tasked with curbing the excesses of data collection turning around and utilizing that same data to line their coffers, sometimes without consent or consumer choice.


“People care about their privacy and there are aspects that need to be hashed out,” says Salama. “But we’re talking about a lot of unknowns on that data governance side. There’s definitely going to be some sort of reckoning at some point, but it’s still so early on.”

As policy makers and people become more aware of mobile phone tracking and the largely unregulated data collection associated with it, the question facing companies in this space is how to extract all this societally beneficial data while balancing that against some pretty significant privacy concerns.

“There will be options,” says Salama. “An example is Utah, which, starting next year, will offer electric cars the option to pay a flat fee (for avoiding gas taxes) or pay by the mile. The pay-by-the-mile option is GPS enabled, but it also has additional services, so you pay by your actual usage.”
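The flat-fee-versus-metered choice Salama describes is easy to sketch. The numbers below are invented for illustration (the piece doesn’t give Utah’s actual rates), as is the assumption that the metered option is capped at the flat fee so no driver pays more than the flat rate:

```python
# Hypothetical sketch of a flat-fee vs. pay-by-the-mile road-usage charge.
# Both rates are invented for illustration, not Utah's actual schedule.

FLAT_ANNUAL_FEE = 120.0   # assumed flat fee, in dollars
PER_MILE_CENTS = 1.5      # assumed per-mile rate, in cents

def road_usage_fee(miles_driven, pay_by_mile):
    """Annual road-usage charge under each option.

    The metered option is assumed to be capped at the flat fee, so
    low-mileage drivers save money and high-mileage drivers pay no
    more than they would under the flat rate.
    """
    if pay_by_mile:
        metered = miles_driven * PER_MILE_CENTS / 100  # cents -> dollars
        return min(metered, FLAT_ANNUAL_FEE)
    return FLAT_ANNUAL_FEE

print(road_usage_fee(4000, pay_by_mile=True))    # 60.0: cheaper than the flat fee
print(road_usage_fee(20000, pay_by_mile=True))   # 120.0: capped at the flat fee
```

The trade-off for the cheaper metered option, of course, is the GPS data collection the article is about.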

Ultimately, for governments, regulation plus transparency seems the likeliest way forward.


In most instances, the path to the consumer or taxpayer is either through their shared economy vehicle (car, scooter, bike, etc.) or through their mobile device. While taxing fleets is indirect and provides some measure of political cover for the governments generating revenue off of them, there is no such cover for directly taxing citizens via data gathered through mobile apps.

The best-case scenario for governments looking to short-circuit these inherent contradictions is to actually offer choice: their own opt-in for some value exchange or preferred billing method, such as Utah’s opt-in alternative to the gas tax for paying for road use. It may not satisfy all privacy concerns, particularly when it is the government sifting through your data, but it at least offers a measure of choice and a tangible value.

If data collection and sharing were still mainly the purview of B2B businesses and global enterprises, perhaps the rising outcry over the methods and usage of data collection would remain relatively muted. But as data usage seeps into more aspects of everyday life and is adopted by smart cities and governments across the nation, questions around privacy will invariably get more heated, particularly when citizen consumers start feeling the pinch in their wallets.

As awareness rises and inherent contradictions are laid bare, regulation will surely follow, and businesses that are not prepared may face fundamental threats to their business models and, ultimately, their bottom lines.

LinkedIn forced to ‘pause’ Mentioned in the News feature in Europe after complaints about ID mix-ups

LinkedIn has been forced to ‘pause’ a feature in Europe in which the platform emails members’ connections when they’ve been ‘mentioned in the news’.

The regulatory action follows a number of data protection complaints after LinkedIn’s algorithms incorrectly matched members to news articles — triggering a review of the feature and a subsequent suspension order.

The feature appears as a case study in the ‘Technology Multinationals Supervision’ section of an annual report published today by the Irish Data Protection Commission (DPC). The report does not explicitly name LinkedIn, but we’ve confirmed it is the professional social network in question.

The data watchdog’s report cites “two complaints about a feature on a professional networking platform” after LinkedIn incorrectly associated the members with media articles that were not actually about them.

“In one of the complaints, a media article that set out details of the private life and unsuccessful career of a person of the same name as the complainant was circulated to the complainant’s connections and followers by the data controller,” the DPC writes, noting the complainant initially complained to the company itself but did not receive a satisfactory response — hence taking up the matter with the regulator.

“The complainant stated that the article had been detrimental to their professional standing and had resulted in the loss of contracts for their business,” it adds.

“The second complaint involved the circulation of an article that the complainant believed could be detrimental to future career prospects, which the data controller had not vetted correctly.”

LinkedIn appears to have been matching members to news articles by simple name matching — with obvious potential for identity mix-ups between people with shared names.

“It was clear from the complaints that matching by name only was insufficient, giving rise to data protection concerns, primarily the lawfulness, fairness and accuracy of the personal data processing utilised by the ‘Mentions in the news’ feature,” the DPC writes.

“As a result of these complaints and the intervention of the DPC, the data controller undertook a review of the feature. The result of this review was to suspend the feature for EU-based members, pending improvements to safeguard its members’ data.”
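The failure mode the DPC describes, matching on name alone, is simple to illustrate. The sketch below is hypothetical, not LinkedIn’s actual code: the member fields and helper functions are invented, and the point is only that requiring a corroborating signal (employer, location) rejects same-name false positives.

```python
# Illustrative only: why name-only matching misidentifies people.
# All data structures and functions here are hypothetical.

def name_only_match(member, article):
    """Naive approach: flag the article if the member's name appears in it."""
    return member["name"].lower() in article["text"].lower()

def corroborated_match(member, article):
    """Stricter approach: require at least one extra signal
    (employer or location) to corroborate the name match."""
    if not name_only_match(member, article):
        return False
    extra_signals = [member["employer"], member["location"]]
    return any(s.lower() in article["text"].lower() for s in extra_signals)

member = {"name": "Jane Doe", "employer": "Acme Corp", "location": "Dublin"}
article = {"text": "Jane Doe of Widget Ltd in Boston was dismissed after..."}

print(name_only_match(member, article))     # True: a false positive
print(corroborated_match(member, article))  # False: correctly rejected
```

Even the stricter version is only a sketch; the complaints show how costly a false positive can be when the mismatch is broadcast to a member’s professional network.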

We reached out to LinkedIn with questions and it pointed us to this blog post where it confirms: “We are pausing our Mentioned in the News feature for our EU members while we reevaluate its effectiveness.”

LinkedIn adds that it is reviewing the accuracy of the feature, writing:

As referenced in the Irish Data Protection Commission’s report, we received useful feedback from our members about the feature and as a result are evaluating the accuracy and functionality of Mentioned in the News for all members.

The company’s blog post also points users to a page where they can find out more about the ‘mentioned in the news’ feature and get information on how to manage their LinkedIn email notification settings.

The Irish DPC’s action is not the first privacy strike against LinkedIn in Europe.

Late last year, in an earlier annual report covering the pre-GDPR portion of 2018, the watchdog revealed it had investigated complaints about LinkedIn related to its targeting of non-users with adverts for its service.

The DPC found the company had obtained email addresses for 18 million people for whom it did not have consent to process their data. In that case LinkedIn agreed to cease processing the data entirely.

That complaint also led the DPC to audit LinkedIn. It then found a further privacy problem, discovering the company had been using its social graph algorithms to try to build suggested networks of compatible professional connections for non-members.

The regulator ordered LinkedIn to cease this “pre-compute processing” of non-members’ data and delete all personal data associated with it prior to GDPR coming into force.

LinkedIn said it had “voluntarily changed our practices as a result”.

Even years later, Twitter doesn’t delete your direct messages

When does “delete” really mean delete? Not always, or even at all, if you’re Twitter.

Twitter retains direct messages for years, including messages you and others have deleted, but also data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.

Saini found years-old messages in a file from an archive of his data obtained through the website, including messages from accounts that were no longer on Twitter. He also reported a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted by both the sender and the recipient — though the bug wasn’t able to retrieve messages from suspended accounts.

Saini told TechCrunch that he had “concerns” that the data was retained by Twitter for so long.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting them from their own. Twitter changed this years ago, and now only allows a user to delete messages from their own account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account disappears, along with its data.
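One way to picture this retention behavior is a per-user “soft delete”, where deleting a message only hides it from your own view while the underlying record survives in storage. This is purely an illustrative sketch of that pattern, not Twitter’s actual implementation:

```python
# Hypothetical sketch of per-user "soft delete" for direct messages.
# Deleting hides the message from one participant; nothing is erased.

class DirectMessage:
    def __init__(self, sender, recipient, text):
        self.sender = sender
        self.recipient = recipient
        self.text = text
        self.hidden_for = set()  # users who have "deleted" the message

    def delete_for(self, user):
        # Marks the message deleted for one user only; the record remains.
        self.hidden_for.add(user)

    def visible_to(self, user):
        return user not in self.hidden_for

dm = DirectMessage("alice", "bob", "hello")
dm.delete_for("alice")

print(dm.visible_to("alice"))  # False: hidden from the deleter
print(dm.visible_to("bob"))    # True: the other side still sees it
print(dm.text)                 # "hello": the data itself is retained
```

Under a scheme like this, a data archive or a deprecated API that reads the stored records directly would surface “deleted” messages, which is consistent with what Saini found.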

But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. By requesting your account’s archive, it’s possible to download all of the data Twitter stores on you.

A conversation, dated March 2016, with a suspended Twitter account was still retrievable today. (Image: TechCrunch)

Saini says this is a “functional bug” rather than a security flaw, but argued that it allows anyone a “clear bypass” of Twitter mechanisms to prevent access to suspended or deactivated accounts.

But it’s also a privacy matter, and a reminder that “delete” doesn’t mean delete — especially with your direct messages. That can open up users, particularly high-risk accounts like journalists and activists, to government data demands that call for data from years earlier.

That’s despite Twitter telling law enforcement that once an account has been deactivated, there is only “a very brief period in which we may be able to access account information, including tweets.”

A Twitter spokesperson said the company was “looking into this further to ensure we have considered the entire scope of the issue.”

Retaining direct messages for years may put the company in a legal grey area amid Europe’s new data protection laws, which allow users to demand that a company delete their data.

Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, said there’s “no formality at all” to how a user can ask for their data to be deleted. Any request from a user to delete their data that’s directly communicated to the company “is a valid exercise” of a user’s rights, he said.

Companies can be fined up to four percent of their annual turnover for violating GDPR rules.

“A delete button is perhaps a different matter, as it is not obvious that ‘delete’ means the same as ‘exercise my right of erasure’,” said Brown. Given that there’s no case law yet under the new General Data Protection Regulation regime, it will be up to the courts to decide, he said.

When asked if Twitter thinks that consent to retain direct messages is withdrawn when a message or account is deleted, Twitter’s spokesperson had “nothing further” to add.

UK workplace rights reform doesn’t look disruptive to gig economy giants

The UK government has set out a labor market reform package it bills as a major upgrade to workplace rights in the era of disruptive gig economy platforms.

The reforms, which include new legislation, are intended to take account of changes in working practices including those flowing from tech platforms.

But despite some gig economy platforms standing accused of exploiting workers, the government’s package does not look set to require a radical reworking of existing business models — and unions have attacked the reforms as weak and lacking substance, pointing out that, for example, a right to request a more stable contract doesn’t add up to much of a rights advance.

Among the measures being announced today (some of which have been trailed before) are:

  • a day one statement of rights for all workers setting out leave entitlements and pay, and also including detail on rights such as eligibility for sick leave and pay; and details of other types of paid leave, such as maternity and paternity leave
  • introducing a right for all workers, not just zero-hour and agency, to request a more predictable and stable contract, providing more financial security for those on flexible contracts
  • plans to bring forward proposals for a new single labour market enforcement body to ensure workers’ rights are properly enforced; and more resources for the Employment Agency Standards (EAS) Inspectorate
  • an end to the legal loophole which enables some firms to pay agency workers less than permanent staff (aka the ‘Swedish derogation’)
  •  an extension to the the holiday pay reference period from 12 to 52 weeks, to “ensure workers in seasonal or atypical roles get the paid time off they are entitled to”
  • enforcing vulnerable workers’ holiday pay for the first time
  • ensuring tips left for workers go to them in full
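The holiday pay change is, at its core, a move from a 12-week to a 52-week pay-averaging window. As a deliberately simplified sketch (not a statement of the actual legal calculation, which includes details such as skipping weeks with no pay that this ignores), here is how the two reference periods can produce very different figures for a hypothetical seasonal worker:

```python
def average_weekly_pay(weekly_pay, reference_weeks):
    """Average pay over the most recent `reference_weeks` weeks of earnings."""
    window = weekly_pay[-reference_weeks:]
    return sum(window) / reference_weeks

# Hypothetical seasonal worker: 26 high-season weeks at £800,
# then 26 off-season weeks at £200, with holiday taken at year end.
year = [800] * 26 + [200] * 26

print(average_weekly_pay(year, 12))  # 200.0 — a 12-week window sees only the off-season
print(average_weekly_pay(year, 52))  # 500.0 — a 52-week window reflects the whole year
```

When holiday happens to fall right after the quiet season, the old 12-week window averages over only the low-paid weeks; the 52-week period smooths that out, which is the point of the reform.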

The government also says it is committed to legislating to improve the clarity of the employment status tests to “reflect the reality of the modern working relationships” — though it does not provide any detail on how exactly it intends to reform such tests.

The labor market package comes ten months after the government unveiled a plan slated to expand workers’ rights. It also kicked off a number of consultations at that time.

With today’s package, the government is drawing heavily on an independent review of modern working practices it commissioned, which was carried out by Matthew Taylor and published last summer.

It says it’s taking forward 51 of the 53 recommendations in the Taylor review — agreeing with him that banning zero hours contracts in their totality would “negatively impact more people than it helped”.

It has also accepted Taylor’s view that the flexibility of ‘gig working’ — where platforms distribute paid tasks via apps, and use digital technology to remotely manage what can be tens of thousands of individuals providing a service for the business — is “not incompatible with ensuring atypical workers have access to employment and social security protections”; and that platform-based working offers opportunities for “genuine two way flexibility”, as well as opportunities for those who may not be able to work in “more conventional ways”.

That’s likely music to the ears of gig economy giants that have built massive businesses by claiming to offer flexible work opportunities for the ‘self-employed’, using algorithms to distribute jobs and remote-manage a distributed workforce, thereby enabling them to massively shift employment risk onto the individuals who actually provide the core service.

Though the government also claims to be going further than Taylor in some instances.

Business secretary Greg Clark said the government’s intention is to build “an economy that works for everyone”, while also lauding what he dubbed “an effective balance between flexibility and worker protections” — which he credited for helping the UK have “the highest employment rate on record”.

Yet, at the start of this year, the government also committed itself to being “accountable for good quality work as well as quantity of jobs”.

And today’s package reiterates that the secretary of state for Business, Energy and Industrial Strategy will take on a new responsibility to ensure the “quality of work”.

So expect a lot of hot air to be expended in the future over what does — and does not — constitute ‘quality’ work. (Albeit, measuring working time is hard enough, from a legal point of view, let alone determining work “quality”… )

“The UK has a labour market of which we can be proud. We have the highest employment rate on record, increased participation amongst historically under-represented groups and wages growing at their fastest pace in almost a decade,” Clark said in a statement.

“This success has been underpinned by policies and employment law which strikes an effective balance between flexibility and worker protections but the world of work is changing, bringing new opportunities for innovative businesses and new business models to flourish, creating jobs across the country and boosting our economy.

“With new opportunity also comes new challenges and that is why the government asked Matthew Taylor to carry out this first of a kind review, to ensure the UK continues to lead the world, through our modern Industrial Strategy, in supporting innovative businesses whilst ensuring workers have the rights they deserve.”

“Today’s largest upgrade in workers’ rights in over a generation is a key part of building a labour market that continues to reward people for hard work, that celebrates good employers and is boosting productivity and earning potential across the UK,” he added.

Last year two parliamentary committees urged the government to close gig-economy employment law loopholes — saying they had enabled “dubious business practices” by letting digital work platforms use flexibility as a tool to circumvent workers’ rights and entitlements.

The committees went on to call for companies with a self-employed workforce above a certain size to be required to treat individuals as workers by default.

Today’s reform plan certainly does not look to be going so far.

Much will rest on how exactly the government changes the law around employment status tests — and that’s still tbc.

Rachel Farr, a senior professional support lawyer in the Employment, Pensions & Mobility group at law firm TaylorWessing, told us it’s also rather easier said than done — suggesting it’s difficult to see how the government will “truly improve clarity”.

This element of the reform is a key consideration where gig economy businesses are concerned, as they typically allow for so-called ‘multi-apping’ — meaning those providing a service on one platform can be logged into multiple (rival) platforms simultaneously, available to work. So the issue — for employment law purposes — is how to determine what constitutes working time in a platform context. (Which you need to be able to measure in order to determine employment status.)

“Simply codifying the existing case law tests will still mean each case is dependent on its specific facts, so is the government proposing to change the boundaries with some ‘check box’ style tests as they have in other EU jurisdictions?” wondered Farr. “This means greater clarity through simplifying the law but would probably mean we lose the nuances of existing U.K. tests and that some people who are currently genuinely self-employed will find that they may become workers (or vice versa).”

Uber was back in court in the UK two months ago for its latest appeal against a 2016 employment tribunal ruling which found that a group of Uber drivers were workers, not self-employed contractors as it contends.

The company has previously suggested it would cost its UK business “tens of millions” of pounds if it reclassified the circa 50,000 ‘self-employed’ drivers operating on its platform as workers.

So the devil will be in the detail of the commitment to clarify employment status tests — and where those tests end up drawing the line.

But the government’s embrace of the overarching notion of a “balance” between flexibility and worker protections looks very friendly to current-gen gig economy business models.

A decision on Uber’s latest tribunal appeal is rumored to be due this week, though it’s not clear that it will provide much clarity on the multi-apping/working time issue. TaylorWessing litigator Sean Nesbitt also told us the tribunal has not been able to spend much time debating those elements.

Responding to the government’s reform package today, the ride-hailing giant sounded pleased. An Uber spokesperson said: “We welcome more clarity from the Government and look forward to working closely with them to make sure drivers can keep all the benefits that come from being your own boss.”

“The majority of drivers choose to partner with Uber because of the freedom and flexibility on offer. A recent Oxford University study found that most drivers want to choose if, when and where they drive,” it added.

At the same time UK unions offered a downbeat assessment of the reform package.

Reuters reports the Unite Union making critical comments. “People on zero hour contracts and workers in the insecure economy need much more than a weak right to request a contract and more predictable hours,” it quoted Unite general secretary Len McCluskey responding to the reform package.

While the Independent Workers Union of Great Britain general secretary, Jason Moyer-Lee, tweeted that “exploited workers are sick of press releases, rhetoric and self-congratulatory announcements”. He added that a “real update in rights and a serious enforcement regime” does not seem to be on offer with the government’s package.

The IWGB union has supported a number of legal and protest actions by gig economy workers in recent years, including a (so far unsuccessful) human rights challenge to Deliveroo’s opposition to collective bargaining for riders.

This summer an inquiry into pay and conditions for Deliveroo riders, carried out by UK MP Frank Field, likened the model to casual labor practices at British dockyards until the middle of the 20th century — finding a dual labour market that he said works very well for some but very poorly for others.

In its response to Field’s report, as well as claiming Deliveroo riders choose flexibility, the company emphasized it had been pushing the government to update employment rules to end what it dubbed “the trade-off between flexibility and security and enable platforms to offer riders even more benefits without putting their employment status at risk”.

Responding to the government’s reform package today, a Deliveroo spokesperson reiterated this line, saying: “Court judgements have consistently found Deliveroo riders to be self-employed. However, Deliveroo has consistently said that we would like to see the rules change to end the trade-off between flexibility and security that currently exists in employment law, to allow companies such as ours to offer more benefits to riders.”

“Independent research has shown riders are consciously choosing to opt out of traditional employment in favour of new ways of working where they have more control,” the spokesperson added. “On-demand working is set to grow as people want to fit work around their lives, not vice versa, and we will work with the Government to ensure the interests of Deliveroo riders can be advanced.”

Oath agrees to pay $5M to settle charges it violated children’s privacy

TechCrunch’s Verizon-owned parent, Oath — an ad tech division formed from the merger of AOL and Yahoo — has agreed to pay around $5 million to settle charges that it violated a federal children’s privacy law.

The penalty is said to be the largest ever issued under COPPA.

The New York Times reported the story yesterday, saying the settlement will be announced by the New York attorney general’s office today.

At the time of writing the AG’s office could not be reached for comment.

We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: “We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online.”

The spokesman also did not confirm nor dispute the contents of the NYT report.

According to the newspaper, which cites the as-yet unpublished settlement documents, AOL, via its ad exchange, helped place adverts on hundreds of websites that it knew were targeted at children under 13.

The ads were placed using children’s personal data, including cookies and geolocation, which the attorney general’s office said violated the Children’s Online Privacy Protection Act (COPPA) of 1998.

The NYT quotes attorney general, Barbara D. Underwood, describing AOL’s actions as “flagrantly” in violation of COPPA.

The $5M fine for Oath comes at a time when scrutiny is being dialled up on online privacy and ad tech generally, and around kids’ data specifically — with rising concern about how children are being tracked and ‘datafied’ online.

Earlier this year, a coalition of child advocacy, consumer and privacy groups in the US filed a complaint with the FTC asking it to investigate Google-owned YouTube over COPPA violations — arguing that while the site’s terms claim it’s aimed at children older than 13 content on YouTube is clearly targeting younger children, including by hosting cartoon videos, nursery rhymes, and toy ads.

COPPA requires that companies provide direct notice to parents and obtain verifiable parental consent before collecting information online from children under 13.

Consent must also be sought for using or disclosing personal data from children. Or indeed for targeting kids with adverts linked to what they do online.

Personal data under COPPA includes persistent identifiers (such as cookies) and geolocation information, as well as data such as real names or screen names.

In the case of Oath, the NYT reports that even though AOL’s policies technically prohibited the use of its display ad exchange to auction ad space on kids’ websites, the company did so anyway — citing settlement documents covering the ad tech firm’s practices between October 2015 and February 2017.

According to these documents, an account manager for AOL in New York repeatedly — and erroneously — told a client, Playwire Media, which represents children’s websites, that AOL’s ad exchange could be used to sell ad space while complying with COPPA.

Playwire then used the exchange to place more than a billion ads on space that should have been covered by COPPA, the newspaper adds.

The paper also reports that AOL bought ad space on websites flagged as COPPA-covered from other ad exchanges.

It says Oath has since introduced technology to identify when ad space is deemed to be covered by COPPA and ‘adjust its practices’ accordingly — again citing the settlement documents.

As part of the settlement the ad tech division of Verizon has agreed to create a COPPA compliance program, to be overseen by a dedicated executive or officer; and to provide annual training on COPPA compliance to account managers and other employees who work with ads on kids’ websites.

Oath also agreed to destroy personal information it has collected from children.

It’s not clear whether the censured practices ended in February 2017 or continued until more recently. We asked Oath for clarification but it did not respond to the question.

It’s also not clear whether AOL was tracking and targeting adverts at children in the EU. If Oath was doing so but stopped before May 25 this year, it should avoid the possibility of any penalty under Europe’s tough new privacy framework, GDPR, which came into force in May — beefing up protection around children’s data by letting Member States set the age at which children can consent to their own data being processed at between 13 and 16 years old.

GDPR also steeply hikes penalties for privacy violations (up to a maximum of 4% of global annual turnover).

Prior to the regulation a European data protection directive was in force across the bloc but it’s GDPR that has strengthened protections in this area with the new provision on children’s data.

A long and winding road to new copyright legislation

Back in May, as part of a settlement, Spotify agreed to pay more than $112 million to clean up some copyright problems. Even for a service with millions of users, that had to leave a mark. No one wants to be dragged into court all the time, not even bold, disruptive technology start-ups.

On October 11th, the President signed the Hatch-Goodlatte Music Modernization Act (the “Act”, or “MMA”). The MMA goes back, legislatively, to at least 2013, when Chairman Goodlatte (R-VA) announced that, as Chairman of the House Judiciary Committee, he planned to conduct a “comprehensive” review of issues in US copyright law. Ranking Member Jerry Nadler (D-NY) was also deeply involved in this process, as were Senators Hatch (R-UT), Leahy (D-VT), and Wyden (D-OR). But this legislation didn’t fall from the sky; far from it.

After many hearings, several “roadshow” panels around the country, and a couple of elections, in early 2018 Goodlatte announced his intent to move forward on addressing several looming issues in music copyright before his planned retirement from Congress at the end of his current term (January 2019). With that deadline in place, the push was on, and through the spring and summer, the House Judiciary Committee and their colleagues in the Senate worked to complete the text of the legislation and move it through the process. By late September, the House and Senate versions had been reconciled and the bill moved to the President’s desk.

What’s all this about streaming?

As enacted, the Act instantiates several changes to music copyright in the US, especially as regards streaming music services. What does “streaming” refer to in this context? Basically, it occurs when a provider makes music available to listeners, over the internet, without creating a downloadable or storable copy: “Streaming differs from downloads in that no copy of the music is saved to your hard drive.”

“It’s all about the Benjamins.”

One part, by far the largest change in terms of money, provides that a new royalty regime be created for digital streaming of musical works, e.g. by services like Spotify and Apple Music. Pre-1972 recordings — and the creators involved in making them (including, for the first time, audio engineers, studio mixers and record producers) — are also brought under this royalty umbrella.

These are significant, generally beneficial results for a piece of legislation. But to make this revenue bounty fully effective, a to-be-created licensing entity will have to be set up with the ability to first collect, and then distribute, the money. Think “ASCAP/BMI for streaming.” This new non-profit will be the first such “collective licensing” copyright organization set up in the US in quite some time.

Collective Licensing: It’s not “Money for Nothing”, right?

What do we mean by “collective licensing” in this context, and how will this new organization be created and organized to engage in it? Collective licensing is primarily an economically efficient mechanism for (A) gathering up monies due for certain uses of works under copyright — in this case, digital streaming of musical recordings — and (B) distributing the royalty checks back to the rights-holding parties (e.g. recording artists, their estates in some cases, and record labels). Generally speaking, in collective licensing:

 “…rights holders collect money that would otherwise be in tiny little bits that they could not afford to collect, and in that way they are able to protect their copyright rights. On the flip side, substantial users of lots of other people’s copyrighted materials are prepared to pay for it, as long as the transaction costs are not extreme.”

—Fred Haber, VP and Corporate Counsel, Copyright Clearance Center

The Act envisions the new organization setting up and implementing a new, extensive — and publicly accessible — database of musical works and the rights attached to them. Nothing quite like this is currently available, although resources like Gracenote suggest a good start along those lines. After it is set up and the initial database has a sufficient number of records, the new collective licensing agency will then get down to the business of offering licenses:

“…a blanket statutory license administered by a nonprofit mechanical licensing collective. This collective will collect and distribute royalties, work to identify songs and their owners for payment, and maintain a comprehensive, publicly accessible database for music ownership information.”

— Regan A. Smith, General Counsel and Associate Register of Copyrights

(AP Photo) The Beatles — John Lennon, Paul McCartney, George Harrison and Ringo Starr — take it easy resting their feet on a table during a break in rehearsals for the Royal Variety show at the Prince of Wales Theatre, London, England, November 4, 1963.

You “Can’t Buy Me Love”, so who is all this going to benefit?

In theory, the listening public should be the primary beneficiary. More music available through digital streaming services means more exposure — and potentially more money — for recording artists. For students of music, the new database of recorded works and licenses will serve to clarify who is (or was) responsible for what. Another public benefit will be fewer actions on digital streaming issues clogging up the courts.

There’s an interesting wrinkle in the Act providing for the otherwise authorized use of “orphaned” musical works such that these can now be played in library or archival (i.e. non-profit) contexts. “Orphan works” are those which may still be protected under copyright, but for which the legitimate rights holders are unknown, and, sometimes, undiscoverable. This is the first implementation of orphan works authorization in US copyright law. Cultural services – like Open Culture – can look forward to being able to stream more musical works without incurring risk or hindrance (provided that the proper forms are filled out), and this implies that some great music is now more likely to find new audiences and thereby be preserved for posterity. Even the Electronic Frontier Foundation (EFF), generally no great fan of new copyright legislation, finds something to like in the Act.

In the land of copyright wonks — and for another line of infringement suits — the Act’s resolution of the copyright status of musical recordings released before 1972 seems, in my opinion, fair and workable. To accomplish that, the Act also had to address the duration of these new copyright protections, which is always (post-1998) a touchy subject:

  • For recordings first published before 1923, the additional time period ends on December 31, 2021.
  • For recordings created between 1923-1946, the additional time period is 5 years after the general 95-year term.
  • For recordings created between 1947-1956, the additional time period is 15 years after the general 95-year term.
  • For works first published between 1957 and February 15, 1972, the additional time period ends on February 15, 2067.

(Source: US Copyright Office)
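The US Copyright Office’s transition schedule above is mechanical enough to express as a small rule table. A minimal sketch follows; note that rounding term ends to December 31 of the relevant year is my assumption for illustration, not statutory text:

```python
from datetime import date

def protection_end(pub_year: int) -> date:
    """End of the additional protection period for a pre-1972 sound
    recording first published in `pub_year`, per the schedule above."""
    if pub_year < 1923:
        return date(2021, 12, 31)
    if pub_year <= 1946:
        return date(pub_year + 95 + 5, 12, 31)   # general 95-year term + 5 years
    if pub_year <= 1956:
        return date(pub_year + 95 + 15, 12, 31)  # general 95-year term + 15 years
    return date(2067, 2, 15)                     # 1957 through Feb 15, 1972

print(protection_end(1920))  # 2021-12-31
print(protection_end(1965))  # 2067-02-15
```

So a 1930 recording would fall out of this additional protection at the end of 2030 (95 + 5 years), while a 1950 recording runs to the end of 2060 (95 + 15 years).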


Money (That’s What I Want – and lots and lots of listeners, too.)

For the digital music services themselves, this statutory or ‘blanket’ license arrangement should mean fewer infringement actions being brought; this might even help their prospects for investment and encourage new and more innovative services to come into the mix.

“And, in The End…”

This new legislation, now the law of the land, extends the history of American copyright law in new and substantial ways. Its actual implementation is only now beginning. Although five years might seem like a lifetime in popular culture, in politics it amounts to several eons. And let’s not lose sight of the fact that the industry got over its perceived short-term self-interests enough, this time, to agree to support something that Congress could pass. That’s rare enough to take note of and applaud.

This law lacks perfection, as all laws do. The licensing regime it envisions will not satisfy everyone, but every constituent, every stakeholder, got something. From the perspective of right now, chances seem good that, a few years from now, the achievement of the Hatch-Goodlatte Music Modernization Act will be viewed as a net positive for creators of music, for the distributors of music, for scholars, fans of ‘open culture’, and for the listening public. In copyright, you can’t do better than that.

Apple’s Tim Cook makes blistering attack on the “data industrial complex”

Apple’s CEO Tim Cook has joined the chorus of voices warning that data itself is being weaponized against people and societies — arguing that the trade in digital data has exploded into a “data industrial complex”.

Cook did not namecheck the adtech elephants in the room: Google, Facebook and other background data brokers that profit from privacy-hostile business models. But his target was clear.

“Our own information — from the everyday to the deeply personal — is being weaponized against us with military efficiency,” warned Cook. “These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded and sold.

“Taken to the extreme this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm.”

“We shouldn’t sugarcoat the consequences. This is surveillance,” he added.

Cook was giving the keynote speech at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), which is being held in Brussels this year, right inside the European Parliament’s Hemicycle.

“Artificial intelligence is one area I think a lot about,” he told an audience of international data protection experts and policy wonks, which included the inventor of the World Wide Web itself, Sir Tim Berners-Lee, another keynote speaker at the event.

“At its core this technology promises to learn from people individually to benefit us all. But advancing AI by collecting huge personal profiles is laziness, not efficiency,” Cook continued.

“For artificial intelligence to be truly smart it must respect human values — including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

That sense of responsibility is why Apple puts human values at the heart of its engineering, Cook said.

In the speech, which we previewed yesterday, he also laid out a positive vision for technology’s “potential for good” — when combined with “good policy and political will”.

“We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe but around the world — in Singapore, Japan, Brazil, New Zealand. In many more nations regulators are asking tough questions — and crafting effective reform.

“It is time for the rest of the world, including my home country, to follow your lead.”

Cook said Apple is “in full support of a comprehensive, federal privacy law in the United States” — making the company’s clearest statement yet of support for robust domestic privacy laws, and earning himself a burst of applause from assembled delegates in the process.

Cook argued for a US privacy law to prioritize four things:

  1. data minimization — “the right to have personal data minimized”, saying companies should “challenge themselves” to de-identify customer data or not collect it in the first place
  2. transparency — “the right to knowledge”, saying users should “always know what data is being collected and what it is being collected for”, calling this the only way to “empower users to decide what collection is legitimate and what isn’t”. “Anything less is a sham,” he added
  3. the right to access — saying companies should recognize that “data belongs to users”, and it should be made easy for users to get a copy of, correct and delete their personal data
  4. the right to security — saying “security is foundational to trust and all other privacy rights”

“We see vividly, painfully how technology can harm, rather than help,” he continued, arguing that platforms can “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true or false”.

“This crisis is real. Those of us who believe in technology’s potential for good must not shrink from this moment”, he added, saying the company hopes “to work with you as partners”, and that: “Our missions are closely aligned.”

He also made a sideswipe at tech industry efforts to defang privacy laws — saying that some companies will “endorse reform in public and then resist and undermine it behind closed doors”.

“They may say to you our companies can never achieve technology’s true potential if there were strengthened privacy regulations. But this notion isn’t just wrong, it is destructive — technology’s potential is and always must be rooted in the faith people have in it. In the optimism and the creativity that stirs the hearts of individuals. In its promise and capacity to make the world a better place.”

“It’s time to face facts,” Cook added. “We will never achieve technology’s true potential without the full faith and confidence of the people who use it.”

Opening the conference before the Apple CEO took to the stage, Europe’s data protection supervisor Giovanni Buttarelli argued that digitization is driving a new generational shift in the respect for privacy — saying there is an urgent need for regulators and indeed societies to agree on and establish “a sustainable ethics for a digitised society”.

“The so-called ‘privacy paradox’ is not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization,” Buttarelli argued.

“To cultivate a sustainable digital ethics, we need to look, objectively, at how those technologies have affected people in good ways and bad; We need a critical understanding of the ethics informing decisions by companies, governments and regulators whenever they develop and deploy new technologies.”

The EU’s data protection supervisor told an audience largely made up of data protection regulators and policy wonks that laws that merely set a minimum standard are not enough, including the EU’s freshly minted GDPR.

“We need to ask whether our moral compass has been suspended in the drive for scale and innovation,” he said. “At this tipping point for our digital society, it is time to develop a clear and sustainable moral code.”

“We do not have a[n ethical] consensus in Europe, and we certainly do not have one at a global level. But we urgently need one,” he added.

“Not everything that is legally compliant and technically feasible is morally sustainable,” Buttarelli continued, pointing out that “privacy has too easily been reduced to a marketing slogan. But ethics cannot be reduced to a slogan.”

“For us as data protection authorities, I believe that ethics is among our most pressing strategic challenges,” he added.

“We have to be able to understand technology, and to articulate a coherent ethical framework. Otherwise how can we perform our mission to safeguard human rights in the digital age?”

ePrivacy: An overview of Europe’s other big privacy rule change

Gather round. The EU has a plan for a big update to privacy laws that could have a major impact on current Internet business models.

Um, I thought Europe just got some new privacy rules?

They did. You’re thinking of the General Data Protection Regulation (GDPR), which replaced the European Union’s 1995 Data Protection Directive — most notably by making the penalties for compliance violations much larger.

But there’s another piece of the puzzle — intended to ‘complete’ GDPR but which is still in train.

Or, well, sitting in the sidings being mobbed by lobbyists, as seems to currently be the case.

It’s called the ePrivacy Regulation.

ePrivacy Regulation, eh? So I guess that means there’s already an ePrivacy Directive then…

Indeed. Clever cookie. That’s the 2002 ePrivacy Directive to be precise, which was amended in 2009 (but is still just a directive).

Remind me what’s the difference between an EU Directive and a Regulation again… 

A regulation is a more powerful legislative instrument for EU lawmakers as it’s binding across all Member States and immediately comes into legal force on a set date, without needing to be transposed into national laws. In a word, it’s self-executing.

Whereas, with a directive, Member States get a bit more flexibility because it’s up to them how they implement the substance of the thing. They could adapt an existing law or create a new one, for example.

With a regulation the deliberation happens among EU institutions and, once that discussion and negotiation process has concluded, the agreed text becomes law across the bloc — at the set time, and without necessarily requiring further steps from Member States.

So regulations are powerful.

So there’s more legal consistency with a regulation? 

In theory. Greater harmonization of data protection rules is certainly an impetus for updating the EU’s legal framework around privacy.

Although, in the case of GDPR, Member States did in fact need to update their national data protection laws to make certain choices allowed for in the framework, and to identify competent national data enforcement agencies. So there’s still some variation.

Strengthening the rules around privacy and making enforcement more effective are other general aims for the ePrivacy Regulation.

Europe has had robust privacy rules for many years but enforcement has been lacking.

Another point of note: Where data protection law is concerned, national agencies need to be properly resourced to be able to enforce rules, or that could undermine the impact of regulation.

It’s up to Member States to do this, though GDPR essentially requires it (and the Commission is watching).

Europe’s data protection supervisor, Giovanni Buttarelli, sums up the current resourcing situation for national data protection agencies, as: “Not bad, not enough. But much better than before.”

But why does Europe need another digital privacy law? Why isn’t GDPR enough? 

There is some debate about that, and not everyone agrees with the current approach. But the general idea is that GDPR deals with general (personal) data.

Whereas the proposed update to ePrivacy rules is intended to supplement GDPR — addressing in detail the confidentiality of electronic communications, and the tracking of Internet users more broadly.

So the (draft) ePrivacy Regulation covers marketing, and a whole raft of tracking technologies (including but not just cookies); and is intended to combat problems like spam, as well as respond to rampant profiling and behavioral advertising by requiring transparency and affirmative consent.

One major impulse behind the reform of the rules is to expand the scope to not just cover telcos but reflect how many communications now travel ‘over the top’ of cellular networks, via Internet services.

This means ePrivacy could apply to all sorts of tech firms in future, be it Skype, Facebook, Google, and quite possibly plenty more — given how many apps and services include some ability for users to communicate with each other.

But scope remains one of the contested areas, with critics arguing the regulation could have a disproportionate impact if, for example, every app with a chat function ends up being regulated.

On the communications front, the updated rules would not just cover message content but metadata too (to respond to how that gets tracked). Aka pieces of data that might not be personal data per se yet certainly pertain to privacy once they are wrapped up in and/or associated with people’s communications.

Although metadata tracking is also used for analytics, for wider business purposes than just profiling users, so you can see the challenge of trying to fashion rules to fit around all this granular background activity.

Simplifying problematic existing EU cookie consent rules — which have also been widely mocked for generating pretty pointless web page clutter — has also been a core part of the Commission’s intention for the update.

EU lawmakers also want the regulation to cover machine-to-machine comms — to regulate privacy around the still emergent IoT (Internet of Things), to keep pace with the rise of smart home technologies.

Those are some of the high level aims but there have been multiple proposed texts and revisions at this point so goalposts have been shifting around.

So whereabouts in the process are we?

The Commission’s original reform proposal came out in January 2017. More than a year and a half later EU institutions are still stuck trying to reach a consensus. It’s not even 100% certain whether ePrivacy will pass or founder in the attempt at this point.

The underlying problem is really the scale of exploitation of consumers’ online activity in the areas ePrivacy seeks to regulate — which is now firmly baked into dominant digital business models — so trying to regulate all that after it has become standard operating practice is a recipe for co-ordinated industry objection and frenzied lobbying. Of which there has been an awful lot.

At the same time, consumer protection groups in Europe are more clear than ever that ePrivacy should be a vehicle for further strengthening the data protection framework put in place by GDPR — pointing out, for example, that data misuse scandals like the Facebook-Cambridge Analytica debacle show that data-driven business models need closer checks to protect consumers and ensure people’s rights are respected.

Safe to say, the two sides couldn’t be further apart.

Like GDPR, the proposed ePrivacy Regulation would also apply to companies offering services in Europe, not just those based in Europe. And it also includes major penalties for violations (of up to 2% or 4% of a company’s global annual turnover) — similarly intended to bolster enforcement and support more consistently applied EU privacy rules.

But given the complexity of the proposals, and disagreements over scope and approach, having big fines baked in further complicates the negotiations — because lobbyists can argue that substantial financial penalties should not be attached to ‘ambiguous’ laws and disputed regulatory mechanisms.

The high cost of getting the update wrong is not so much concentrating minds as causing alarms to be yanked and brakes applied. With the risk of no progress at all looking like an increasing possibility.

One thing is clear: The existing ePrivacy rules are outdated and it’s not helpful to have old rules undermining a state-of-the-art data protection framework.

Telcos have also rightly complained it’s not fair for tech giants to be able to operate messaging empires without the same compliance burdens they have.

Just don’t assume telcos love the proposed update either. It’s complicated.

Sounds very messy. 


EU lawmakers could probably have dealt with updating both privacy-related directives together, or even in one ‘super regulation’, but they decided to separate the work to try to simplify the process. In retrospect that looks like a mistake.

On the plus side, it means GDPR is now locked in place — with Buttarelli saying the new framework is intended to stand for as long as its predecessor.

Less good: One shiny world-class data protection framework is having to work alongside a set of rules long past their sell-by date.

So, so much for consistency.

Buttarelli tells us he thinks it was a mistake not to do both updates together, describing the blocks being thrown up to try to derail ePrivacy reform as “unacceptable”.

“I would like to say very clearly that the EU made a mistake in not updating earlier the rules for confidentiality for electronic communications at the same time as general data protection,” he told us during an interview this week, about GDPR enforcement, data ethics and the future of EU privacy regulation.

He argues the patchwork of new and old rules “doesn’t work for data controllers” either, as they’re the ones saddled with dealing with the legal inconsistency.

As Europe’s data protection supervisor, Buttarelli is of course trying to apply pressure on key parties — to “get to the table and start immediately trilogue negotiations to identify a sustainable outcome”.

But the nature of lawmaking across a bloc of 28 Member States is often slow and painful. Certainly no one entity can force progress; it must be achieved via negotiated consensus and compromise across the various institutions and entities.

And when interest groups are so far apart, well, it’s sweating toil to put it mildly.

Entities that don’t want to play ball with a particular legal reform issue can sometimes also throw a delaying spanner in the works by impeding negotiations. Which is what looks to be going on with ePrivacy right now.

The EU parliament confirmed its negotiating mandate on the reform almost a year ago now. But MEPs were then stuck waiting for Member States to take a position and get around the discussion table.

Except Member States seemingly weren’t so keen. Some were probably a bit preoccupied with Brexit.

Currently implicated as an ePrivacy blocker: Austria, which holds the six-month rotating presidency of the EU Council — meaning it gets to set priorities, and can thus kick issues into the long grass (as its right-wing government appears to be doing with ePrivacy). And so the wait goes on.

It now looks like a bit of a divide and conquer situation for anti-privacy lobbyists, who — having failed to derail GDPR — are throwing all their energies at delaying, diluting or even derailing the ePrivacy reform.

Some Member States appear to be trying to attack ePrivacy to weaken the overarching framework of GDPR too. So yes, it’s got very messy indeed.

There’s an added complication around timing because the EU parliament is up for re-election next spring, and a few months after that the executive Commission will itself turn over, as the current president does not intend to seek reappointment. So it will be all change for the EU, politically speaking, in 2019.

A reconfigured political landscape could then change the entire conversation around ePrivacy. So the current delay could prove fatal unless agreement can be reached in early 2019.

Some EU lawmakers had hoped the reform could be done and dusted in time to come into force at the same time as GDPR, this May.

That was certainly a major miscalculation.

But what’s all the disagreement about?

That depends on who you ask. There are many contested issues, depending on the interests of the group you’re talking to.

Media and publishing industry associations are terrified about what they say ePrivacy could do to their ad-supported business models, given their reliance on cookies and tracking technologies to try to monetize free content via targeted ads — and so claim it could destroy journalism as we know it if consumers need to opt-in to being tracked.

The ad industry is also of course screaming about ePrivacy as if its hair’s on fire. Big tech included, though it has generally preferred to lobby via proxies on this issue.

Anything that could impede adtech’s ability to track and thus behaviourally target ads at web users is clearly enemy number one, given the current modus operandi. So ePrivacy is a major lobbying target for the likes of the IAB who don’t want it to upend their existing business models.

Even telcos aren’t happy, despite the potential of the regulation to even the playing field somewhat with tech giants — suggesting they will end up with double the regulatory burden, as well as moaning it will make it harder for them to make the necessary investments to roll out 5G networks.

Plus, as I say, there also seems to be some efforts to try to use ePrivacy as a vector to attack and weaken GDPR itself.

Buttarelli had comments to make on this front too, describing some data controllers as being in post-GDPR “revenge mode”.

“They want to move in sort of a vendetta, vendetta — and get back what they lose with the GDPR. But while I respect honest lobbying about which pieces of ePrivacy are not necessary I think ePrivacy will help first small businesses, and not necessarily the big tech startups. And where done properly ePrivacy may give more power to individuals. It may make harder for big tech to snoop on private conversations without meaningful consent,” he told us, appealing to Europe’s publishing industry to get behind the reform process, rather than applying pressure at the Member State level to try to derail it — given the media hardly feels well served by big tech.

He even makes this appeal to local adtech players — which aren’t exactly enamoured with the dominance of big tech either.

“I see space for market incentives,” he added. “For advertisers and publishers to, let’s say, re-establish direct relations with their readers and customers. And not have to accept the terms dictated by the major platform intermediaries. So I don’t see any other argument to discourage that we have a deal before the elections in May next year of the European legislators.”

There’s no doubt this is a challenging sell though, given how embedded all these players are with the big platforms. So it remains to be seen whether ePrivacy can be talked back on track.

Major progress is certainly very unlikely before 2019.

I’m still not sure why it’s so important though.  

The privacy of personal communications is a fundamental right in Europe. So there’s a need for the legal framework to defend against technological erosion of citizens’ rights.

Add to that, a big part of the problem with the modern adtech industry — aside from the core lack of genuine consent — is its opacity. Who’s doing what; for what specific purposes; and with what exact outcomes.

Existing European privacy rules like GDPR mean there’s more transparency than there’s ever been about what’s going on — if you know and/or can be bothered to dig down into privacy policies and purposes.

If you do, you might, for example, discover a very long list of companies that your data is being shared with (and even be able to switch off that sharing) — entities with weird sounding names like Outbrain and OpenX.

A privacy policy might even state a per company purpose like ‘Advertising exchange’ and ‘Advertising’. Or ‘Customer interaction’, whatever that means.

Thing is, it’s often still very difficult for a consumer to understand what a lot of these companies are really doing with their data.

Thanks to current EU laws, we now have the greatest level of transparency there has ever been about the mechanisms underpinning Internet business models. But yet so much remains murky.

The average Internet user is very likely none the wiser. Can profiling them without proper consent really be fair?

GDPR sets out an expectation of privacy by design and default. So, following that principle, you could argue that cookie consent, for example, should be default opt-out — and that any website must be required to gain affirmative opt in from a visitor for any tracking cookies. The adtech industry would certainly disagree though.
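To make that “privacy by design and default” principle concrete, here’s a minimal sketch of consent-gated tracking — the default state is opt-out, and a tracking cookie is only ever set after an affirmative opt-in. The names (`ConsentState`, `build_set_cookie_headers`) are purely illustrative, not taken from any real consent framework or from the regulation’s text:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentState:
    # Privacy by default: every purpose starts opted OUT.
    purposes: dict = field(default_factory=dict)

    def opt_in(self, purpose: str) -> None:
        # Only an affirmative user action flips a purpose to True.
        self.purposes[purpose] = True

    def allows(self, purpose: str) -> bool:
        # No record means no consent -- never assume opt-in.
        return self.purposes.get(purpose, False)

def build_set_cookie_headers(consent: ConsentState) -> list:
    """Return Set-Cookie headers; tracking cookies only with consent."""
    # A strictly necessary cookie (e.g. session) needs no consent.
    headers = ["Set-Cookie: session_id=abc123; HttpOnly"]
    if consent.allows("behavioural_advertising"):
        headers.append("Set-Cookie: ad_tracker=xyz789")
    return headers

# Fresh visitor: no tracking cookie is emitted by default.
fresh = ConsentState()
assert len(build_set_cookie_headers(fresh)) == 1

# Only after explicit opt-in does the tracking cookie appear.
fresh.opt_in("behavioural_advertising")
assert len(build_set_cookie_headers(fresh)) == 2
```

The design choice being illustrated is simply that consent is an explicit allow-list: silence, pre-ticked boxes or continued browsing never count as opt-in.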

The original ePrivacy proposal even had a bit of a mixed approach to consent which was accused of being too overbearing for some technologies and not strong enough for others.

It’s not just creepy tech giants implicated here either. Publishers and the media (TechCrunch included) are very much caught up in the unpleasant tracking mess, complicit in darting users with cookies and trackers to try to increase what remain fantastically low conversion rates for digital ads.

Most of the time, most Internet users ignore most ads. So — with horribly wonky logic — the behavioral advertising industry, which has been able to grow like a weed because EU privacy rights have not previously been actively enforced, has made it its mission to suck up (and indeed buy up) more and more user data to try to move the ad conversion needle a fraction.

The media is especially desperate because the web has also decimated traditional business models. And European lawmakers can be very sensitive to publishing industry concerns (e.g., see their backing of controversial copyright reforms which publishers have been pushing for).

Meanwhile Google and Facebook are gobbling up the majority of online ad spending, leaving publishers fighting for crumbs and stuck having to do businesses with the platforms that have so sorely disrupted them.

Platforms they can’t at all control but which are now so popular and powerful they can (and do) algorithmically control the visibility of publishers’ content.

It’s not a happy combination. Well, unless you’re Facebook or Google.

Meanwhile, for web users just wanting to go about their business and do all the stuff people can (and sometimes need to do) online, things have got very bad indeed.

Unless you ignore the fact you’re being creeped on almost all the time, by snoopy entities that double as intelligence traders, selling info on what you like or don’t, so that an unseen adtech collective can create highly detailed profiles of you to try and manipulate your online transactions and purchasing decisions. With what can sometimes be discriminatory impacts.

The rise in popularity of ad blockers illustrates quite how little consumers enjoy being ad-stalked around the Internet.

More recently tracker blockers have been springing up to try to beat back the adtech vampire octopus which also lards the average webpage with myriad data-sucking tentacles, impeding page load times and gobbling bandwidth in the process, in addition to abusing people’s privacy.

There’s also out-and-out malicious stuff here too, as the increasing complexity, opacity and sprawl of the adtech industry’s surveillance apparatus (combined with its general lack of interest in and/or focus on security) offers rich and varied vectors for cyber attack.

And so ads and gnarly page elements sometimes come bundled or injected with actual malware as hackers exploit all this stuff for their own ends and launch man in the middle attacks to grab user data as it’s being routinely siphoned off for tracking purposes.

It’s truly a layer cake of suck.


The ePrivacy Regulation could, in theory, change this by supporting alternative business models that don’t use people-tracking as their fuel — putting the emphasis back where it should be: respect for privacy.

The (seemingly) radical idea underlying all these updates to European privacy legislation is that if you increase consumers’ trust in online services by respecting people’s privacy, you can actually grease the wheels of ecommerce and innovation — web users will be more comfortable doing stuff online if they don’t feel they’re under creepy surveillance.

More than that — you can lay down a solid foundation of trust for the next generation of disruptive technologies to build on.

Technologies like IoT and driverless cars.

Because, well, if consumers hate to feel like websites are spying on them, imagine how disgusted they’ll be to realize their fridge, toaster, kettle and TV are all complicit in snitching. Ditto their connected car.

‘I see you’re driving past McDonald’s. Great news! They have a special on those chocolate donuts you scoffed a whole box of last week…’



So what are ePrivacy’s chances at this point? 

It’s hard to say but things aren’t looking great right now.

Buttarelli describes himself as “relatively optimistic” about getting an agreement by May, i.e. before the EU parliament elections, but that may well be wishful thinking.

Even if he’s right there would likely still need to be an implementation period before it comes into force — so new rules aren’t likely up and running before 2020.

Yet he also describes the ePrivacy Regulation as “an essential missing piece of the jigsaw”.

Getting that piece in place is not going to be easy though.