Adblock Plus’s Till Faida on the shifting shape of ad blocking

Publishers hate ad blockers, but millions of internet users embrace them — and many browsers, including Google’s own Chrome, even bake ad blocking in as a feature. At the same time, growing numbers of publishers are walling off free content from visitors who hard-block ads, or asking users directly to whitelist their sites.

It’s a fight for attention from two very different sides.

Some form of ad blocking is here to stay, so long as advertisements are irritating and the adtech industry remains deaf to genuine privacy reform. Although the nature of the ad-blocking business is generally closer to filtering than blocking, where is it headed?

We chatted with Till Faida, co-founder and CEO of eyeo, maker of Adblock Plus (ABP), to take the temperature of an evolving space that’s never been a stranger to controversy — including fresh calls for his company to face antitrust scrutiny.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated, while simultaneously encouraging lawmakers towards a dilute, enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue it expects from supercharging the expansion of AI-powered services into all sorts of industries and sectors (from health to transportation and everywhere in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there are no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Some far-sighted regulators have called for laws containing at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for platform giants to simply bend to their will.

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euroactiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on the GDPR to regulate higher risk uses — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict the use of such highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such a voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.

Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.

A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.

At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.

Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.

The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.

Instead the recommendation is for “limited and discriminate retention” — with also “limited access to that data”.

“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.

The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.

Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).

The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.

There’s a slight carve-out: national legislation that’s incompatible with EU law may, in Sánchez-Bordona’s view, be permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.

If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.

Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.

Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.

In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”

The CJEU will issue its ruling at a later date — typically three to six months after an AG opinion.

The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.

“It is worth noting that, under Article 4(2) of the Treaty on the European Union, “national security remains the sole responsibility of each Member State”. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out. 

“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes.  This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”

At CES, companies slowly start to realize that privacy matters

Every year, Consumer Electronics Show attendees receive a branded backpack, but this year’s edition was special: made out of transparent plastic, it left its contents visible without the wearer needing to unzip it. That wasn’t just a fashion decision. Over the years, security at the show has become more intense and cumbersome, but attendees with transparent backpacks didn’t have to open their bags when entering.

That cheap backpack is a metaphor for an ongoing debate — how many of us are willing to exchange privacy for convenience?

Privacy was on everyone’s mind at this year’s CES in Las Vegas, from CEOs to policymakers, PR agencies and people in charge of programming the panels. For the first time in decades, Apple had a formal presence at the event; Senior Director of Global Privacy Jane Horvath spoke on a panel focused on privacy with other privacy leaders.

Cookie consent tools are being used to undermine EU privacy rules, study suggests

Most cookie consent pop-ups served to Internet users in the European Union — ostensibly seeking permission to track people’s web activity — are likely to be flouting regional privacy laws, a new study by researchers at MIT, UCL and Aarhus University suggests.

“The results of our empirical survey of CMPs [consent management platforms] today illustrates the extent to which illegal practices prevail, with vendors of CMPs turning a blind eye to — or worse, incentivising — clearly illegal configurations of their systems,” the researchers argue, adding that: “Enforcement in this area is sorely lacking.”

Their findings, published in a paper entitled Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence, chime with another piece of research we covered back in August — which also concluded a majority of the current implementations of cookie notices offer no meaningful choice to Europe’s Internet users — even though EU law requires one.

When consent is being relied upon as the legal basis for processing web users’ personal data, the bar for valid (i.e. legal) consent that’s set by the EU’s General Data Protection Regulation (GDPR) is clear: It must be informed, specific and freely given.

Recent jurisprudence by the Court of Justice of the European Union has further crystallized the law around cookies, making it clear that consent must be actively signalled — meaning a digital service cannot infer consent to tracking from indirect actions (such as the pop-up being closed by the user without a response, or ignored in favor of interacting with the service).

Many websites use a so-called CMP to solicit consent to tracking cookies. But if it’s configured to contain pre-ticked boxes that opt users into sharing data by default — requiring an affirmative user action to opt out — any gathered ‘consent’ also isn’t legal.

Consent to tracking must also be obtained prior to a digital service dropping or accessing a cookie; only service-essential cookies can be deployed without asking first.

All of which means — per EU law — it should be equally easy for website visitors to choose not to be tracked as to agree to their personal data being processed.

However the Dark Patterns after the GDPR study found that’s very far from the case right now.

“We found that dark patterns and implied consent are ubiquitous,” the researchers write in summary, saying that only slightly more than one in ten (11.8%) of the CMPs they looked at “meet the minimal requirements that we set based on European law” — which they define as being “if it has no optional boxes pre-ticked, if rejection is as easy as acceptance, and if consent is explicit”.
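
To make those three criteria concrete, here is a minimal TypeScript sketch of how such a check could be encoded; the interface and field names are hypothetical illustrations, not the schema the researchers actually used.

```typescript
// Hypothetical record for one observed consent pop-up; the field names are
// illustrative, not the dataset schema used in the paper.
interface CmpObservation {
  hasPreTickedOptions: boolean; // any optional purpose/vendor pre-selected
  clicksToAcceptAll: number;    // clicks needed to accept all tracking
  clicksToRejectAll: number;    // use Infinity if no "reject all" control exists
  consentIsExplicit: boolean;   // false if consent is inferred from scrolling, closing, etc.
}

// The study's minimal bar: no pre-ticked boxes, rejection as easy as
// acceptance, and explicit (affirmative) consent.
function meetsMinimalRequirements(o: CmpObservation): boolean {
  return (
    !o.hasPreTickedOptions &&
    o.clicksToRejectAll <= o.clicksToAcceptAll &&
    o.consentIsExplicit
  );
}
```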

For the study, the researchers scraped the top 10,000 UK websites, as ranked by Alexa, to gather data on the most prevalent CMPs in the market — which are made by five companies: QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak — and analyzed how the design and configurations of these tools affected Internet users’ choices. (They obtained a data set of 680 CMP instances via their method — a sample they calculate is representative of at least 57% of the total population of the top 10k sites that run a CMP, given prior research found only around a fifth do so.)
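
As a rough illustration of how a crawl like this can identify which CMP a site runs — a sketch only, not the authors’ actual pipeline — a headless browser can watch network traffic for script hosts associated with each vendor. The marker strings below are assumptions made for the example.

```typescript
import puppeteer from "puppeteer";

// Illustrative URL fragments associated with the five CMP vendors named in the
// study; treat these markers as assumptions, not the paper's detection method.
const CMP_MARKERS: Record<string, string[]> = {
  QuantCast: ["quantcast.mgr.consensu.org"],
  OneTrust: ["cdn.cookielaw.org"],
  TrustArc: ["consent.trustarc.com"],
  Cookiebot: ["consent.cookiebot.com"],
  Crownpeak: ["evidon.com"],
};

// Load a page headlessly, record every request URL, and report the first
// vendor whose marker shows up in the traffic (or null if none do).
async function detectCmp(url: string): Promise<string | null> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    const requested: string[] = [];
    page.on("request", (req) => requested.push(req.url()));
    await page.goto(url, { waitUntil: "networkidle2", timeout: 30_000 });
    for (const [vendor, markers] of Object.entries(CMP_MARKERS)) {
      if (requested.some((u) => markers.some((m) => u.includes(m)))) {
        return vendor;
      }
    }
    return null;
  } finally {
    await browser.close();
  }
}
```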

Implicit consent — aka (illegally) inferring consent via non-affirmative user actions (such as the user visiting or scrolling on the website or a failure to respond to a consent pop-up or closing it without a response) — was found to be common (32.5%) among the studied sites.

“Popular CMP implementation wizards still allow their clients to choose implied consent, even when they have already indicated the CMP should check whether the visitor’s IP is within the geographical scope of the EU, which should be mutually exclusive,” they note, arguing that: “This raises significant questions over adherence with the concept of data protection by design in the GDPR.”

They also found that the vast majority of CMPs make rejecting all tracking “substantially more difficult than accepting it” — with a majority (50.1%) of studied sites not having a ‘reject all’ button, and only a tiny minority (12.6%) of sites offering a ‘reject all’ button accessible in the same or fewer number of clicks as an ‘accept all’ button.

Or, to put it another way, ‘Ohhai dark pattern design‘…

“An ‘accept all’ button was never buried in a second layer,” the researchers go on to point out, also finding that “74.3% of reject all buttons were one layer deep, requiring two clicks to press; 0.9% of them were two layers away, requiring at minimum three.”

Pre-ticked boxes were found to be widely deployed in the studied CMPs as well — despite such a setting not being legally valid. (On this they found: “56.2% of sites pre-ticked optional vendors or purposes/categories, with 54.1% of sites pre-ticking optional purposes, 32.3% pre-ticking optional categories, and 30.3% pre-ticking both”.)

They also point out that the high number of third-party trackers routinely being used by sites poses a major problem for the EU consent model — given it requires a “prohibitively long time” for users to become clearly informed enough to be able to legally consent.

The exact number of third party trackers they found being packed like sardines into CMPs varied — from tens to several hundred, depending on the site.

Fifty-eight was the lowest number they encountered, while the highest instance was 542 vendors — on an implementation of QuantCast’s CMP. (And, well, just imagine the ‘friction’ involved in manually unticking all those, assuming that was one of the sites that also lacked a ‘reject all’ button… )

As the paper puts it: “Sites relied on a large number of third party trackers, which would take a prohibitively long time for users to inform themselves about clearly. Out of the 85.4% of sites that did list vendors (e.g. third party trackers) within the CMP, there was a median number of 315 vendors (low. quartile 58, upp. quartile 542). Different CMP vendors have different average numbers of vendors, with the highest being QuantCast at 542… 75% of sites had over 58 vendors. 76.47% of sites provide some descriptions of their vendors. The mean total length of these descriptions per site is 7,985 words: roughly 31.9 minutes of reading for the average 250 words-per-minute reader, not counting interaction time to e.g. unfold collapsed boxes or navigating to and reading specific privacy policies of a vendor.”

A second part of the research was a field experiment with 40 participants to investigate how the eight most common CMP designs affect Internet users’ consent choices.

“We found that notification style (banner or barrier) has no effect [on consent choice]; removing the opt-out button from the first page increases consent by 22–23 percentage points; and providing more granular controls on the first page decreases consent by 8–20 percentage points,” they write in summary on that.

They argue this portion of the study supports the notion that two of the most common consent interface designs – “not showing a ‘reject all’ button on the first page; and showing bulk options before showing granular control” – make it more likely for users to provide consent, thereby “violating the [GDPR] principle of “freely given””.

They also make reference to “qualitative reflections” of the participants in the paper — which were obtained via a survey after individuals’ consent choices had been registered during the field study — suggesting these responses “put into question the entire notice-and-consent model not because of specific design decisions but merely because an action is required before the user can accomplish their main task and because they appear too frequently if they are shown on a website-by-website basis”.

So, in other words, just the fact of interrupting a web user to ask them to make a choice may itself apply substantial enough pressure that it might render any resulting ‘consent’ invalid.

The study’s finding of the prevalence of manipulative designs and configurations intended to nudge or even force consent suggests Internet users in Europe are not actually benefiting from a legal framework that’s supposed to protect their digital data from unwanted exploitation — and are rather being subject to a lot of noisy, distracting and disingenuous ‘consent theatre’.

Cookie notices not only generate friction and frustration for the average Internet user, as they try to go about their daily business online, but the current situation is creating a faux veneer of compliance — atop what is actually a massive trampling of rights via what amounts to digital daylight robbery of people’s data at scale.

The problem here is that EU regulators have for years looked the other way where online tracking is concerned, failing entirely to enforce the on-paper standard.

Enforcement is indeed sorely lacking, as the researchers note. (Industry lobbying/political pressure, limited resources, risk aversion and regulatory capture, and a legacy of inaction around digital rights are all likely to blame.)

And while the GDPR only started being applied in May 2018, Europe has had regulations on data-gathering mechanisms like cookies for approaching two decades — with the paper pointing out that an amendment to the ePrivacy Directive all the way back in 2002 made it a requirement that “storing or accessing information on a user’s device not ‘strictly necessary’ for providing an explicitly requested service requires both clear and comprehensive information and opt-in consent”.

Asked about the research findings, lead author Midas Nouwens questioned why CMP vendors are selling so-called ‘compliance’ tools that allow for non-compliant configurations in the first place.

“It’s sad, but I don’t think anyone is surprised anymore by how few pop-ups comply with the GDPR,” he told TechCrunch. “What is shocking is how non-compliant interface designs are allowed by the companies that provide consent pop-ups. Why do they let their clients count scrolling as consent or bury the decline button somewhere on the third page?”

“Enforcement is really the next big challenge if we don’t want the GDPR to go down the same path as the ePrivacy directive,” he added. “Since enforcement agencies have limited resources, focusing on the popular consent pop-up providers could be a much more effective strategy than targeting individual websites.

“Unfortunately, while we wait for enforcement, the dark patterns in these pop-ups are still manipulating people into being tracked.”

Another of the researchers behind the paper, Michael Veale, a lecturer in digital rights and regulation at UCL, also expressed shock that CMP vendors are allowing their tools to be configured in ways which are clearly intended to manipulate Internet users — thereby flouting the law.

In the paper the researchers urge regulators to take a smarter approach to tackling such widespread violation, such as by making use of automated tools “to expedite discovery and enforcement” of non-compliant cookie notices, and suggest they work “further upstream” — such as by placing requirements on the vendors of CMPs “to only allow compliant designs to be placed on the market”.

“It’s shocking to see how many of the large providers of consent pop-ups allow their systems to be misconfigured, such as through implicit consent, in ways that clearly infringe data protection law,” Veale told us, adding: “I suspect data protection authorities see this widespread illegality and are not sure exactly where to start. Yet if they do not start enforcing these guidelines, it’s unclear when this widespread illegality will start to stop.”

“This study even overestimates compliance, as we don’t focus on what actually happens to the tracking when you click on these buttons, which other recent studies have emphasised in many cases mislead individuals and do nothing at all,” he also pointed out.

We reached out to the UK’s data protection watchdog, the ICO, for a response to the research — and a spokeswoman pointed us to this cookie advice blog post it published last year, saying the advice it contains “still stands”.

In the blog Ali Shah, the ICO’s head of technology policy, suggests there could be some (albeit limited) action from the regulator this year to clean up cookie consent, with Shah writing that: “Cookie compliance will be an increasing regulatory priority for the ICO in the future. However, as is the case with all our powers, any future action would be proportionate and risk-based.”

While European citizens wait for data protection regulators to take meaningful action over systematic breaches of the GDPR — including those attached to consent-less tracking of web users — there is one step European web users can take to shrink the pain of cookie consent pop-ups: The researchers behind the study have built an open source browser extension that can automatically answer pop-ups based on user-customizable preferences.

It’s called Consent-o-Matic — and there are versions available for Firefox and Chrome.

At release the tool can automatically respond to cookie banners built by the five big CMP suppliers (QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak).

But since it’s open source, the hope is that others will build on it to expand the types of pop-ups it’s able to auto-respond to. In the absence of a legally enforced ‘Do Not Track’ browser standard, this is about as good as it gets for Internet users desperately seeking easier agency over the online tracking industry.

In a Twitter thread last month announcing the tool, Nouwens described the project as making use of “adversarial interoperability” as a pro-privacy tactic.

“Automating consent and privacy preferences is not new (DNT and P3P), but this project uses adversarial interoperability, rather than rely on industry self-regulation or buy-in from fundamentally opposed stakeholders (browsers, advertisers, publishers),” he observed.
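
To make the ‘adversarial interoperability’ idea concrete, here’s a minimal, hypothetical content-script sketch that hunts for a refusal control and clicks it. It is far cruder than Consent-o-Matic’s per-CMP rule sets and is not its actual code; the button texts are assumptions.

```typescript
// Content-script sketch: find a button whose label looks like a refusal and
// click it. The label list is an assumption; real tools ship precise per-CMP rules.
const DECLINE_TEXTS = ["reject all", "decline", "disagree", "only necessary"];

function tryDecline(): boolean {
  const candidates = document.querySelectorAll<HTMLElement>("button, [role='button']");
  for (const el of Array.from(candidates)) {
    const label = (el.textContent ?? "").trim().toLowerCase();
    if (DECLINE_TEXTS.some((t) => label.includes(t))) {
      el.click();
      return true;
    }
  }
  return false;
}

// Consent pop-ups are often injected after page load, so keep watching the DOM
// until a decline control appears, then stop.
const observer = new MutationObserver(() => {
  if (tryDecline()) observer.disconnect();
});
observer.observe(document.documentElement, { childList: true, subtree: true });
tryDecline();
```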

However he added one caveat, reminding users to be on their guard for further non-compliance from the data suckers — pointing to the earlier research paper also flagged by Veale which found a small portion of sites (~7%) entirely ignore responses to cookie pop-ups and track users regardless of response.

So sometimes even a seamlessly automated ‘no’ to tracking might still sum to being tracked…

DuckDuckGo still critical of Google’s EU Android choice screen auction, after winning a universal slot

Google has announced which search engines have won an auction process it has devised for an Android ‘choice screen’ — as its response to an antitrust intervention by the region’s competition regulator.

The prompt is shown to users of Android smartphones in the European Union as they set up a device, asking them to choose a search engine from a list of four which always includes Google’s own search engine.

In mid-2018 the European Commission fined Google $5BN for antitrust violations attached to how it operates the Android platform, including related to how it bundles its own services with the dominant smartphone OS, and ordered it to remedy the infringements — while leaving it up to the tech giant to devise a fix.

Google responded by creating a choice screen for Android users to pick a search engine from a short list — with the initial choices seemingly based on local market share. But last summer it announced it would move to auctioning slots on the screen via a fixed, sealed-bid auction process.
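
Mechanically, the publicly described process is easy to picture: Google always appears, and the remaining three slots per country go to the highest sealed bids. Here’s a toy TypeScript sketch under that assumption — the bids are invented and this is in no way Google’s implementation.

```typescript
// Toy model of a per-market sealed-bid allocation: Google always appears, and
// the remaining slots go to the highest sealed bids for that market.
type MarketBids = Record<string, number>; // search engine -> sealed bid (hypothetical units)

function allocateChoiceScreen(bids: MarketBids, paidSlots = 3): string[] {
  const winners = Object.entries(bids)
    .sort(([, a], [, b]) => b - a) // highest bid first
    .slice(0, paidSlots)
    .map(([engine]) => engine);
  return ["Google", ...winners];
}

// Entirely hypothetical bids for one market:
console.log(allocateChoiceScreen({ DuckDuckGo: 0.42, "Info.com": 0.4, Qwant: 0.31, Bing: 0.12 }));
// -> [ "Google", "DuckDuckGo", "Info.com", "Qwant" ]
```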

The big winners of the initial auction, for the period March 1, 2020 to June 30, 2020, are pro-privacy search engine DuckDuckGo — which gets one of the three slots in all 31 European markets — and a product called Info.com, which will also be shown as an option in all those markets. (Per Wikipedia, the latter is a veteran metasearch engine that provides results from multiple search engines and directories, including Google.)

French pro-privacy search engine Qwant will be shown as an option to Android users in eight European markets, while Russia’s Yandex will appear as an option in five markets in the east of the region.

Other search engines that will appear as choices in a minority of the European markets are GMX, Seznam, Givero and PrivacyWall.

At a glance the big loser looks to be Microsoft’s Bing search engine — which will only appear as an option on the choice screen shown in the UK.

Tree-planting search engine Ecosia does not appear anywhere on the list at all, despite appearing on some initial Android choice screens — having taken the decision to boycott the auction in objection to Google’s ‘pay-to-play’ approach.

Ecosia CEO Christian Kroll told the BBC: “We believe this auction is at odds with the spirit of the July 2018 EU Commission ruling. Internet users deserve a free choice over which search engine they use and the response of Google with this auction is an affront to our right to a free, open and federated internet. Why is Google able to pick and choose who gets default status on Android?”

It’s not the only search engine critical of Google’s move, with Qwant and DuckDuckGo both raising concerns as soon as the shift to a paid auction was announced last year.

Despite participating in the process — and winning a universal slot — DuckDuckGo told us it still does not agree with Google’s pay-to-play auction.

“We believe a search preference menu is an excellent way to meaningfully increase consumer choice if designed properly. Our own research has reinforced this point and we look forward to the day when Android users in Europe will have the opportunity to easily make DuckDuckGo their default search engine while setting up their phones. However, we still believe a pay-to-play auction with only 4 slots isn’t right because it means consumers won’t get all the choices they deserve and Google will profit at the expense of the competition,” a spokesperson said in a statement.

How Ring is rethinking privacy and security

Ring is now a major player when it comes to consumer video doorbells, security cameras — and privacy protection.

Amazon acquired the company and promotes its devices heavily on its e-commerce websites. Ring has even become a cultural phenomenon with viral videos being shared on social networks and the RingTV section on the company’s website.

But that massive success has come with a few growing pains; as Motherboard found out, customers don’t have to use two-factor authentication, which means that anybody could connect to their security camera if they re-use the same password everywhere.

When it comes to privacy, Ring’s Neighbors app has attracted a ton of controversy. Some see it as a libertarian take on neighborhood watch that empowers citizens to monitor their communities using surveillance devices.

Others have questioned partnerships between Ring and local police to help law enforcement authorities request videos from Ring users.

In a wide-ranging interview, Ring founder Jamie Siminoff looked back at the past six months, expressed some regrets and defended his company’s vision. The interview was edited for clarity and brevity.


TechCrunch: Let’s talk about news first. You started mostly focused on security cameras, but you’ve expanded way beyond security cameras. And in particular, I think the light bulb that you introduced is pretty interesting. Do you want to go deeper in this area and go head to head against Phillips Hue for instance?

Jamie Siminoff: We try not to ever look at competition — like the company is going head to head with… we’ve always been a company that has invented around a mission of making neighborhoods safer.

Sometimes, that puts us into a place that would be competing with another company. But we try to look at the problem and then come up with a solution and not look at the market and try to come up with a competitive product.

No one was making — and I still don’t think there’s anyone making — a smart outdoor light bulb. We started doing the floodlight camera and we saw how important light was. We literally saw it through our camera. With motion detection, someone will come over a fence, see the light and jump back over. We literally could see the impact of light.

So you don’t think you would have done it if it wasn’t a light bulb that works outside as well as inside?

For sure. We’ve seen the advantage of linking all the lights around your home. When you walk up on a step light and that goes off, then everything goes off at the same time. It’s helpful for your own security and safety and convenience.

The light bulbs are just an extension of the floodlight. Now again, it can be used indoor because there’s no reason why it can’t be used indoor.

Following Amazon’s acquisition, do you think you have more budget, you can hire more people and you can go faster and release all these products?

It’s not a budget issue. Money was never a constraint. If you had good ideas, you could raise money — I think that’s Silicon Valley. So it’s not money. It’s knowledge and being able to reach a critical mass.

As a consumer electronics company, you need to have specialists in different areas. You can’t just get them with money, you kind of need to have a big enough thing. For example, wireless antennas. We had good wireless antennas. We did the best we thought we could do. But we get into Amazon and they have a group that’s super highly focused on each individual area of that. And we make much better antennas today.

Our reviews are up across the board, our products are more liked by our customers than they were before. To me, that’s a good measure — after Amazon, we have made more products and they’re more beloved by our customers. And I think part of that is that we can tap into resources more efficiently.

And would you say the teams are still very separate?

Amazon is kind of cool. I think it’s why a lot of companies that have been bought by Amazon stay for a long time. Amazon itself is almost an amalgamation of a lot of little startups. Internally, almost everyone is a startup CEO — there’s a lot of autonomy there.

Zuckerberg ditches annual challenges, but needs cynics to fix 2030

Mark Zuckerberg won’t be spending 2020 focused on wearing ties, learning Mandarin, or just fixing Facebook. “Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look in 2030,” he wrote today on Facebook. As you might have guessed, though, Zuckerberg’s vision for an improved planet involves a lot more of Facebook’s family of apps.

His biggest proclamations in today’s post include that:

  • AR – Phones will remain the primary computing platform for most of the decade, but augmented reality could get devices out from between us so we can be present together — Facebook is building AR glasses
  • VR – Better virtual reality technology could address the housing crisis by letting people work from anywhere — Facebook is building Oculus
  • Privacy – The internet has created a global community where people find it hard to establish themselves as unique, so smaller online groups could make people feel special again – Facebook is building more private groups and messaging options
  • Regulation – That the big questions facing technology are too thorny for private companies to address by themselves, and governments must step in around elections, content moderation, data portability, and privacy — Facebook is trying to self-regulate on these and everywhere else to deter overly onerous lawmaking

These are all reasonable predictions and suggestions. However, Zuckerberg’s post does little to address how the broadening of Facebook’s services in the 2010s also contributed to a lot of the problems he presents.

  • Isolation – Constant passive feed scrolling on Facebook and Instagram has created a way to seem like you’re being social without having true back-and-forth interaction with friends
  • Gentrification – Facebook’s shuttled employees have driven up rents in cities around the world, especially the Bay Area
  • Envy – Facebook’s algorithms can make anyone without a glamorous, Instagram-worthy life look less important, while hackers can steal accounts and its moderation systems can accidentally suspend profiles with little recourse for most users
  • Negligence – The growth-first mentality led Facebook’s policies and safety to lag behind its impact, creating the kind of democracy, content, anti-competition, and privacy questions it’s now asking the government to answer for it

Noticeably absent from Zuckerberg’s post are explicit mentions of some of Facebook’s more controversial products and initiatives. He writes about “decentralizing opportunity” by giving small businesses commerce tools, but never mentions cryptocurrency, blockchain, or Libra directly. Instead he seems to suggest that Instagram storefronts, Messenger customer support, and WhatsApp remittances might be sufficient. He also largely leaves out Portal, Facebook’s smart screen that could help distant families stay closer, but that some see as a surveillance and data collection tool.

I’m glad Zuckerberg is taking his role as a public figure and the steward of one of humanity’s fundamental utilities more seriously. His willingness to even think about some of these long-term issues instead of just quarterly profits is important. Optimism is necessary to create what doesn’t exist.

Still, if Zuckerberg wants 2030 to look better for the world, and for the world to look more kindly on Facebook, he may need to hire more skeptics and cynics who see a dystopic future instead. Their foresight on where societal problems could arise from Facebook’s products could help temper Zuckerberg’s team of idealists to create a company that balances the potential of the future with the risks to the present.
