Reddit links UK-US trade talk leak to Russian influence campaign

Reddit has linked account activity involving the leak and amplification of sensitive UK-US trade talks on its platform during the ongoing UK election campaign to a suspected Russian political influence operation.

Or, to put it more plainly, the social network suspects that Russian operatives are behind the leak of sensitive trade data — likely with the intention of impacting the UK’s General Election campaign.

The country goes to the polls next week, on December 12.

The UK has been politically deadlocked since mid-2016 over how to implement the result of the referendum to leave the European Union. The minority Conservative government has struggled to negotiate a Brexit deal that parliament backs. Another hung parliament or minority government would likely result in continued uncertainty.

In a post discussing the “Suspected campaign from Russia”, Reddit writes:

We were recently made aware of a post on Reddit that included leaked documents from the UK. We investigated this account and the accounts connected to it, and today we believe this was part of a campaign that has been reported as originating from Russia.

Earlier this year Facebook discovered a Russian campaign on its platform, which was further analyzed by the Atlantic Council and dubbed “Secondary Infektion.” Suspect accounts on Reddit were recently reported to us, along with indicators from law enforcement, and we were able to confirm that they did indeed show a pattern of coordination. We were then able to use these accounts to identify additional suspect accounts that were part of the campaign on Reddit. This group provides us with important attribution for the recent posting of the leaked UK documents, as well as insights into how adversaries are adapting their tactics.

Reddit says that an account, called gregoratior, originally posted the leaked trade talks document. Later a second account, ostermaxnn, reposted it. The platform also found a “pocket of accounts” that worked together to manipulate votes on the original post in an attempt to amplify it. Fairly fruitlessly, as it turned out: the leak gained little attention on Reddit, per the company.

As a result of the investigation Reddit says it has banned 1 subreddit and 61 accounts — under policies against vote manipulation and misuse of its platform.

The story doesn’t end there, though, because whoever was behind the trade talk leak appears to have resorted to additional tactics to draw attention to it — including emailing campaign groups and political activists directly.

This activity did bear fruit this month when the opposition Labour party got hold of the leak and made it into a major campaign issue, claiming the 451-page document shows the Conservative party, led by Boris Johnson, is plotting to sell off the country’s free-at-the-point-of-use National Health Service (NHS) to US private health insurance firms and drug companies.

Labour party leader, Jeremy Corbyn, showed a heavily redacted version of the document during a TV leaders debate earlier this month, later calling a press conference to reveal a fully un-redacted version of the data — arguing the document proves the NHS is in grave danger if the Conservatives are re-elected.

Johnson has denied Labour’s accusation that the NHS will be carved up as the price of a Trump trade deal. But the leaked document itself is genuine.

It details preliminary meetings between UK and US trade negotiators, which took place between July 2017 and July 2019, in which discussion of the NHS does take place, in addition to other issues such as food standards.

The document does not, however, confirm what position the UK might seek to adopt in any future trade talks with the US.

The source of the heavily redacted version of the document appears to be a Freedom of Information (FOI) request by campaigning organisation, Global Justice Now — which told Vice it made an FOI request to the UK’s Department for International Trade around 18 months ago.

The group said it was subsequently emailed a fully unredacted version of the document by an unknown source which also appears to have sent the data directly to the Labour party. So while the influence operation looks to have originated on Reddit, the agents behind it seem to have resorted to more direct means of data dissemination in order for the leak to gain the required attention to become an election-influencing issue.

Experts in online influence operations had already suggested similarities between the trade talks leak and an earlier Russian operation, dubbed Secondary Infektion, which involved the leak of fake documents on multiple online platforms. Facebook identified and took down that operation in May.

In a report analysing the most recent leak, social network mapping and analysis firm Graphika says the key question is how the trade document came to be disseminated online a few weeks before the election.

“The mysterious [Reddit] user seemingly originated the leak of a diplomatic document by posting it around online, just six weeks before the UK elections. This raises the question of how the user got hold of the document in the first place,” it writes. “This is the single most pressing question that arises from this report.”

Graphika’s analysis concludes that the manner of leaking and amplifying the trade talks data “closely resembles” the known Russian information operation, Secondary Infektion.

“The similarities to Secondary Infektion are not enough to provide conclusive attribution but are too close to be simply a coincidence. They could indicate a return of the actors behind Secondary Infektion or a sophisticated attempt by unknown actors to mimic it,” it adds.

Internet-enabled Russian influence operations that feature hacking and strategically timed data dumps of confidential/sensitive information, as well as the seeding and amplification of political disinformation which is intended to polarize, confuse and/or disengage voters, have become a regular feature of Western elections in recent years.

The most high-profile example of Russian election interference remains the 2016 hack of documents and emails from Hillary Clinton’s presidential campaign and Democratic National Committee — which went on to be confirmed by US investigators as an operation by Russia’s GRU intelligence agency.

In 2017 emails were also leaked from French president Emmanuel Macron’s campaign shortly before his election — although with apparently minimal impact in that case. (Attribution is also less clear-cut.)

Russian activity targeting UK elections and referendums remains a matter of intense interest and investigation — and had been raised publicly as a concern by former prime minister, Theresa May, in 2017.

However, her government failed to act on recommendations to strengthen UK election and data laws to respond to the risks posed by Internet-enabled interference. She also did nothing to investigate questions over the extent of foreign interference in the 2016 Brexit referendum.

May was finally unseated by the ongoing political turmoil around Brexit this summer, when Johnson took over as prime minister. But he has also turned a wilfully blind eye to the risks around foreign election interference — while fully availing himself of data-fuelled digital campaign methods whose ethics have been questioned by multiple UK oversight bodies.

A report into Russian interference in UK politics which was compiled by the UK’s intelligence and security parliamentary committee — and had been due to be published ahead of the general election — was also personally blocked from publication by the prime minister.

Voters won’t now get to see that information until after the election. Or, well, barring another strategic leak…

No Libra style digital currencies without rules, say EU finance ministers

European Union finance ministers have agreed a de facto ban on the launch in the region of so-called global ‘stablecoins’ such as Facebook’s planned Libra digital currency until the bloc has a common approach to regulation that can mitigate the risks posed by the technology.

In a joint statement the European Council and Commission write that “no global ‘stablecoin’ arrangement should begin operation in the European Union until the legal, regulatory and oversight challenges and risks have been adequately identified and addressed”.

The statement includes recognition of potential benefits of the crypto technology, such as cheaper and faster payments across borders, but says they pose “multifaceted challenges and risks related for example to consumer protection, privacy, taxation, cyber security and operational resilience, money laundering, terrorism financing, market integrity, governance and legal certainty”.

“When a ‘stablecoin’ initiative has the potential to reach a global scale, these concerns are likely to be amplified and new potential risks to monetary sovereignty, monetary policy, the safety and efficiency of payment systems, financial stability, and fair competition can arise,” they add.

All options are being left open to ensure effective regulation, per the statement, with ministers and commissioners stating this should include “any measures to prevent the creation of unmanageable risks by certain global ‘stablecoins’”.

The new European Commission is already working on a regulation for global stablecoins, per Reuters.

In a speech at a press conference, Commission VP Valdis Dombrovskis, said: “Today the Ecofin endorsed a joint statement with the Commission on stablecoins. These are part of a much broader universe of crypto assets. If we properly address the risks, innovation around crypto assets has the potential to play a positive role for investors, consumers and the efficiency of our financial system.

“A number of Member States like France, Germany or Malta introduced national crypto asset laws, but most people agree with the advice of the European Supervisory Authorities that these markets go beyond borders and so we need a common European framework.

“We will now move to implement this advice. We will launch a public consultation very shortly, before the end of the year.”

The joint statement also hits out at the lack of legal clarity around some major global projects in this area — which looks like a tacit reference to Facebook’s Libra project (though the text does not include any named entities).

“Some recent projects of global dimension have provided insufficient information on how precisely they intend to manage risks and operate their business. This lack of adequate information makes it very difficult to reach definitive conclusions on whether and how the existing EU regulatory framework applies. Entities that intend to issue ‘stablecoins’, or carry out other activities involving ‘stablecoins’ in the EU should provide full and adequate information urgently to allow for a proper assessment against the applicable existing rules,” they warn.

Facebook’s Libra project was only announced this summer — with a slated launch of the first half of 2020 — but was quickly dealt major blows by the speedy departure of key founder members from the vehicle set up to steer the initiative, as giants including Visa, Stripe and eBay apparently took fright at the regulatory backlash. Though you’d never know it from reading the Libra Association PR.

One perhaps unintended effect of Facebook’s grand design on disrupting global financial systems is to amp up pressure on traditional payment providers to innovate and improve their offerings for consumers.

EU ministers write that the emergence of stablecoin initiatives “highlight the importance of continuous improvements to payment arrangements in order to meet market and consumer expectations for convenient, fast, efficient and inexpensive payments – especially cross-border”.

“While European payment systems have already made significant progress, European payment actors, including payment services providers, also have a key role to play in this respect,” they continue. “We note that the ECB and other central banks and national competent authorities will explore further the ongoing digital transformation of the payment system and, in particular, the consequences of initiatives such as ‘stablecoins’. We welcome that central banks in cooperation with other relevant authorities continue to assess the costs and benefits of central bank digital currencies as well as engage with European payment actors regarding the role of the private sector in meeting expectations for efficient, fast and inexpensive cross-border payments.”

Facebook launches a photo portability tool, starting in Ireland

It’s not friend portability, but Facebook has announced the launch today of a photo transfer tool to enable users of its social network to port their photos directly to Google’s photo storage service, via encrypted transfer.

The photo portability feature is initially being offered to Facebook users in Ireland, where the company’s international HQ is based. Facebook says it is still testing and tweaking the feature based on feedback but slates “worldwide availability” as coming in the first half of 2020.

It also suggests porting to other photo storage services will be supported in the future, in addition to Google Photos — without specifying which services it may seek to add.

Facebook says the tool is based on code developed via its participation in the Data Transfer Project — a collaborative effort started last year that’s currently backed by five tech giants (Apple, Facebook, Google, Microsoft and Twitter) who have committed to build “a common framework with open-source code that can connect any two online service providers, enabling a seamless, direct, user initiated portability of data between the two platforms”.
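The Data Transfer Project’s actual codebase is Java, but the architecture it describes — a common interface that any two services implement so data can flow directly between them — can be illustrated with a short, hypothetical Python sketch. All class names here are invented for illustration; none come from the DTP itself:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Photo:
    title: str
    data: bytes


class Exporter(ABC):
    """Source-side adapter: pulls a user's photos out of one service."""
    @abstractmethod
    def export_photos(self) -> list[Photo]: ...


class Importer(ABC):
    """Destination-side adapter: writes photos into another service."""
    @abstractmethod
    def import_photos(self, photos: list[Photo]) -> int: ...


def transfer(source: Exporter, destination: Importer) -> int:
    """Direct, user-initiated transfer: any exporter can feed any importer,
    so writing one adapter connects a service to every other participant."""
    return destination.import_photos(source.export_photos())


# In-memory stand-ins for two services (illustrative only).
class FakeSource(Exporter):
    def export_photos(self) -> list[Photo]:
        return [Photo("holiday", b"..."), Photo("cat", b"...")]


class FakeDestination(Importer):
    def __init__(self) -> None:
        self.library: list[Photo] = []

    def import_photos(self, photos: list[Photo]) -> int:
        self.library.extend(photos)
        return len(photos)
```

The appeal of the design is that the number of adapters grows linearly with the number of services, rather than requiring a bespoke pipeline for every pair of platforms.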

Facebook also points to a white paper it published in September — where it advocates for “clear rules” to govern the types of data that should be portable and “who is responsible for protecting that data as it moves to different providers”.

Behind all these moves is of course the looming threat of antitrust regulation, with legislators and agencies on both sides of the Atlantic now closely eyeing platforms’ grip on markets, eyeballs and data.

Hence Facebook’s white paper couching portability tools as “helping keep competition vibrant among online services”. (Albeit, if the ‘choice’ being offered is to pick another tech giant to get your data that’s not exactly going to reboot the competitive landscape.)

It’s certainly true that portability of user uploaded data can be helpful in encouraging people to feel they can move from a dominant service.

However it is also something of a smokescreen — especially when A) the platform in question is a social network like Facebook (because it’s people who keep other people stuck to these types of services); and B) the value derived from the data is retained by the platform regardless of whether the photos themselves travel elsewhere.

Facebook processes user uploaded data such as photos to gain personal insights to profile users for ad targeting purposes. So even if you send your photos elsewhere that doesn’t diminish what Facebook has already learned about you, having processed your selfies, groupies, baby photos, pet shots and so on. (It has also designed the portability tool to send a copy of the data; ergo, Facebook still retains your photos unless you take additional action — such as deleting your account.)

The company does not offer users any controls (portability tools or access rights) over the inferences it makes based on personal data such as photos.

Or indeed control over insights it derives from its analysis of usage of its platform or wider browsing of the Internet (Facebook tracks both users and non-users across the web via tools like social plug-ins and tracking pixels).

Given its targeted ads business is powered by a vast outgrowth of tracking (aka personal data processing), there’s little risk to Facebook to offer a portability feature buried in a sub-menu somewhere that lets a few in-the-know users click to send a copy of their photos to another tech giant.

Indeed, it may hope to benefit from similar incoming ports from other platforms in future.

“We hope this product can help advance conversations on the privacy questions we identified in our white paper,” Facebook writes. “We know we can’t do this alone, so we encourage other companies to join the Data Transfer Project to expand options for people and continue to push data portability innovation forward.”

Competition regulators looking to reboot digital markets will need to dig beneath the surface of such self-serving initiatives if they are to alight on a meaningful method of reining in platform power.

European parliament’s NationBuilder contract under investigation by data regulator

Europe’s lead data regulator has issued its first ever sanction of an EU institution — taking enforcement action against the European parliament over its use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections.

NationBuilder is a veteran of the digital campaign space — indeed, we first covered the company back in 2011 — which has become nearly ubiquitous for digital campaigns in some markets.

But in recent years European privacy regulators have raised questions over whether all its data processing activities comply with regional data protection rules, responding to growing concern around election integrity and data-fuelled online manipulation of voters.

The European parliament had used NationBuilder as a data processor for a public engagement campaign to promote voting in the spring election, which was run via a website called thistimeimvoting.eu.

The website collected personal data from more than 329,000 people interested in the EU election campaign — data that was processed on behalf of the parliament by NationBuilder.

The European Data Protection Supervisor (EDPS), which started an investigation in February 2019, acting on its own initiative — and “taking into account previous controversy surrounding this company” as its press release puts it — found the parliament had contravened regulations governing how EU institutions can use personal data related to the selection and approval of sub-processors used by NationBuilder.

The sub-processors in question are not named. (We’ve asked for more details.)

The parliament received a second reprimand from the EDPS after it failed to publish a compliant Privacy Policy for the thistimeimvoting website within the deadline set by the EDPS. The regulator says the parliament subsequently acted in line with its recommendations in the case of both sanctions.

The EDPS also has an ongoing investigation into whether the Parliament’s use of the voter mobilization website, and related processing operations of personal data, were in accordance with rules applicable to EU institutions (as set out in Regulation (EU) 2018/1725).

The enforcement actions had not been made public until a hearing earlier this week — when assistant data protection supervisor, Wojciech Wiewiórowski, mentioned the matter during a Q&A session in front of MEPs.

He referred to the investigation as “one of the most important cases we did this year”, without naming the data processor. “Parliament was not able to create the real auditing actions at the processor,” he told MEPs. “Neither control the way the contract has been done.”

“Fortunately nothing bad happened with the data but we had to make this contract terminated the data being erased,” he added.

When TechCrunch asked the EDPS for more details about this case on Tuesday a spokesperson told us the matter is “still ongoing” and “being finalized” and that it would communicate about it soon.

Today’s press release looks to be the upshot.

Providing canned commentary in the release, Wiewiórowski writes:

The EU parliamentary elections came in the wake of a series of electoral controversies, both within the EU Member States and abroad, which centred on the threat posed by online manipulation. Strong data protection rules are essential for democracy, especially in the digital age. They help to foster trust in our institutions and the democratic process, through promoting the responsible use of personal data and respect for individual rights. With this in mind, starting in February 2019, the EDPS acted proactively and decisively in the interest of all individuals in the EU to ensure that the European Parliament upholds the highest of standards when collecting and using personal data. It has been encouraging to see a good level of cooperation developing between the EDPS and the European Parliament over the course of this investigation.

One question that arises is why no firmer sanction has been issued to the European parliament — beyond a (now public) reprimand, some nine months after the investigation began.

Another question is why the matter was not more transparently communicated to EU citizens.

The EDPS’ PR emphasizes that its actions “are not limited to reprimands”, without explaining why the two enforcements thus far didn’t merit tougher action. (At the time of writing the EDPS had not responded to questions about why no fines have so far been issued.)

There may be more to come, though.

The regulator says it will “continue to check the parliament’s data protection processes” — revealing that the European Parliament has finished informing individuals of a revised intention to retain personal data collected by the thistimeimvoting website until 2024.

“The outcome of these checks could lead to additional findings,” it warns, adding that it intends to finalise the investigation by the end of this year.

Asked about the case, a spokeswoman for the European parliament told us that the thistimeimvoting campaign had been intended to motivate EU citizens to participate in the democratic process, and that it used a mix of digital tools and traditional campaigning techniques in order to try to reach as many potential voters as possible. 

She said NationBuilder had been used as a customer relations management platform to support staying in touch with potential voters — via an offer to interested citizens to sign up to receive information from the parliament about the elections (including events and general info).

Subscribers were also asked about their interests — which allowed the parliament to send personalized information to people who had signed up.

Some of the regulatory concerns around NationBuilder have centered on how it allows campaigns to match data held in their databases (from people who have signed up) with social media data that’s publicly available, such as an unlocked Twitter account or public Facebook profile.

TechCrunch understands the European parliament was not using this feature.

In 2017 in France, after an intervention by the national data watchdog, NationBuilder suspended the data matching tool in the market.

The same feature has attracted attention from the UK’s Information Commissioner — which warned last year that political parties should be providing a privacy notice to individuals whose data is collected from public sources such as social media and matched. Yet aren’t.

“The ICO is concerned about political parties using this functionality without adequate information being provided to the people affected,” the ICO said in the report, while stopping short of ordering a ban on the use of the matching feature.

Its investigation confirmed that up to 200 political parties or campaign groups used NationBuilder during the 2017 UK general election.

Brexit ad blitz data firm paid by Vote Leave broke privacy laws, watchdogs find

A joint investigation by watchdogs in Canada and British Columbia has found that Cambridge Analytica-linked data firm, Aggregate IQ, broke privacy laws in Facebook ad-targeting work it undertook for the official Vote Leave Brexit campaign in the UK’s 2016 EU referendum.

A quick reminder: Vote Leave was the official leave campaign in the referendum on the UK’s membership of the European Union. While Cambridge Analytica is the (now defunct) firm at the center of a massive Facebook data misuse scandal which has dented the company’s fortunes and continues to tarnish its reputation.

Vote Leave’s campaign director, Dominic Cummings — now a special advisor to the UK prime minister — wrote in 2017 that the winning recipe for the leave campaign was data science. And, more specifically, spending 98% of its marketing budget on “nearly a billion targeted digital adverts”.

Targeted at Facebook users.

The problem is, per the Canadian watchdogs’ conclusions, AIQ did not have proper legal consents from UK voters for disclosing their personal information to Facebook for the Brexit ad blitz which Cummings ordered.

Either for “the purpose of advertising to those individuals (via ‘custom audiences’) or for the purpose of analyzing their traits and characteristics in order to locate and target others like them (via ‘lookalike audiences’)”.

Oops.
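For context on the mechanics at issue: Facebook’s custom audiences are built from contact lists a campaign already holds, with identifiers such as email addresses normalized and SHA-256 hashed before being uploaded for matching against Facebook accounts (lookalike audiences are then derived from a matched custom audience). A minimal sketch of that hashing step — the data here is invented for illustration:

```python
import hashlib


def normalize_and_hash(email: str) -> str:
    """Custom-audience uploads expect identifiers to be normalized
    (whitespace trimmed, lowercased) and SHA-256 hashed before transfer,
    so the raw contact list is not sent as plaintext."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


# A hypothetical contact list of the kind a campaign might hold.
emails = ["Jane.Doe@example.com ", "john.smith@example.com"]
hashed_audience = [normalize_and_hash(e) for e in emails]
```

The watchdogs’ point is that none of this machinery matters if the underlying consent is missing: hashing protects the data in transit, not the voters’ right to decide how their information is used.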

Last year the UK’s Electoral Commission also concluded that Vote Leave breached election campaign spending limits by channeling money to AIQ to run the targeted political ads on Facebook’s platform, via undeclared joint working with another Brexit campaign, BeLeave. So there’s a full sandwich of legal wrongdoing stuck to the Brexit mess that UK society remains mired in, more than three years later.

Meanwhile, the current UK General Election is now a digital petri dish for data scientists and democracy hackers to run wild experiments in microtargeted manipulation — given election laws haven’t been updated to take account of the outgrowth of the adtech industry’s tracking and targeting infrastructure, despite multiple warnings from watchdogs and parliamentarians.

Data really is a helluva drug.

The Canadian investigation cleared AIQ of any wrongdoing in its use of phone numbers to send SMS messages for another pro-Brexit campaign, BeLeave; a purpose the watchdogs found had been authorized by the consent provided by individuals who gave their information to that youth-focused campaign.

But they did find consent problems with work AIQ undertook for various US campaigns on behalf of Cambridge Analytica affiliate, SCL Elections — including for a political action committee, a presidential primary campaign and various campaigns in the 2014 midterm elections.

And, again — as we know — Facebook is squarely in the frame here too.

“The investigation finds that the personal information provided to and used by AIQ comes from disparate sources. This includes psychographic profiles derived from personal information Facebook disclosed to Dr. Aleksandr Kogan, and onward to Cambridge Analytica,” the watchdogs write.

“In the case of their work for US campaigns… AIQ did not attempt to determine whether there was consent it could rely on for its use and disclosure of personal information.”

The investigation also looked at AIQ’s work for multiple Canadian campaigns — finding fewer issues related to consent. Though the report states that in “certain cases, the purposes for which individuals are informed, or could reasonably assume their personal information is being collected, do not extend to social media advertising and analytics”.

AIQ also gets told off for failing to properly secure the data it misused.

This element of the probe resulted from a data breach reported by UpGuard after it found AIQ running an unsecured GitLab repository — holding what the report dubs “substantial personal information”, as well as encryption keys and login credentials which it says put the personal information of 35 million+ people at risk.

Double oops.

“The investigation determined that AIQ failed to take reasonable security measures to ensure that personal information under its control was secure from unauthorized access or disclosure,” is the inexorable conclusion.

Turns out if an entity doesn’t have a proper legal right to people’s information in the first place it may not be majorly concerned about where else the data might end up.

The report flows from an investigation into allegations of unauthorized access and use of Facebook user profiles which was started by the Office of the Information and Privacy Commissioner for BC in late 2017. A separate probe was opened by the Office of the Privacy Commissioner of Canada last year. The two watchdogs subsequently combined their efforts.

The upshot for AIQ from the joint investigation’s finding of multiple privacy and security violations is a series of, er, “recommendations”.

On the data use front it is suggested the company take “reasonable measures” to ensure any third-party consent it relies on for collection, use or disclosure of personal information on behalf of clients is “adequate” under the relevant Canadian and BC privacy laws.

“These measures should include both contractual measures and other measures, such as reviewing the consent language used by the client,” the watchdogs suggest. “Where the information is sensitive, as with political opinions, AIQ should ensure there is express consent, rather than implied.”

On security, the recommendations are similarly for it to “adopt and maintain reasonable security measures to protect personal information, and that it delete personal information that is no longer necessary for business or legal purposes”.

“During the investigation, AIQ took steps to remedy its security breach. AIQ has agreed to implement the Offices’ recommendations,” the report adds.

The upshot of political ‘data science’ for Western democracies? That’s still tbc. Buckle up.

Gift Guide: STEM toys for your builders-in-training

Welcome to TechCrunch’s 2019 Holiday Gift Guide! Need help with gift ideas? We’re here to help! We’ll be rolling out gift guides from now through the end of December, so check back regularly.

We’ve refreshed our annual STEM toy gift guide with the latest wares clamoring to entice and inspire kids with coding tricks and electronic wizardry. Yes folks! Another year, another clutch of shiny gizmos making grand claims of computing smarts in child-friendly packaging.

But lean in to this market and you’ll find a number of STEM toy makers have winked out of existence since this time last year, or else been folded into others’ empires. Such as littleBits selling to Sphero this fall, or Root Robotics being picked up by robot vac giant iRobot in June.

Some of the remaining indie players are leaning heavily on IP licensing deals from big brands (e.g. Kano’s co-branded Disney kit) as a tactic to grab attention. Others are concentrating their effort on selling direct to schools (e.g. Sphero, after a pivot last year — now with an expanded educational toolbox having picked up littleBits). Ozobot is another that’s been dialing up its focus on classrooms. Though, as we’ve reported, selling complex STEM learning devices to schools isn’t always easy. More consolidation and exits seem highly likely.

It’s perhaps also a sign of tricky times in the kid-tech/edtech category that Kano, one of the earliest of the alternative STEM computer makers, has jumped into bed with tech giant Microsoft too — selling its first Windows-powered PC this year.

It’s clear that some of the experimental energy which fired up the category a few years ago has faded, as sales and outcomes haven’t gone the distance or lived up to the hype. Kids are fickle customers, as parents know. The market has responded by shaking out a bit. It also means some of what’s offered is starting to feel a bit formulaic and same-y. (And, well, Disney.)

Still, kids of all ages remain roundly spoilt for techie stuff to interact with. Not least because it’s never been easier for toymakers to bolt-on a bit of drag-and-drop in-app coding to give their plaything a STEM dimension. Mainstream giants like LEGO are also staying the course to try and grab a bigger chunk of the action. Generally you’ll find products with more polish than in years past, if not always as original and ambitious.

It’s also fair to say that promises of clever gadgets to power a kids’ coding revolution are looking rather less pristine than they used to after all the unboxing and, er, abandoning. Reality bytes, you could say.

Affordable smartphones and tablets maintain their competitive squeeze at the top of the category. They can be a more versatile option than most STEM gizmos, though rising concern about children’s screen time may push parents to seek out physical and tactile alternatives. Meanwhile, a mobile device is typically required to bring a STEM toy to life — as most (though not all) are essentially Bluetooth add-ons.

All that said there are still original and inspiring gifts to be had — and it’s good to see more focus on teaching creative skills, not only tech and engineering. Of course it’s always a case of horses for courses in this category. If your child won’t touch anything unless it’s wearing a Frozen princess dress/Star Wars cloak then you’ll be resigned to shelling out for the usual merch. May the tech force be with you as you search!

 

Adafruit


Product: Python for Kids
Price: $35
Age: 10+

Description: Maker-focused and electronics hobbyist brand Adafruit sells all sorts of electronics goodies. It also has a dedicated sub-section for Young Engineers where it offers a range of own-brand kits and third-party wares for kids of all ages with the aim of sparking an interest in computing and electronics. Such as this Python for Kids book, which takes a child-friendly approach to seriously learning the Python programming language — so instead of a dull grey textbook you get text interspersed with cartoony illustrations, fun examples, puzzles and plenty of color. The book is intended for kids aged 10+.
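To give a flavor of the playful style such beginner Python books favor, here’s the sort of first exercise they typically open with — a short function that turns a name into a silly banner. (This snippet is our own illustration, not taken from the book.)

```python
# A playful first program in the spirit of a kids' Python book:
# build a banner that repeats a greeting, framed by a border of stars.
# (Illustrative example only — not from the book itself.)

def silly_banner(name, repeat=3):
    """Return a banner string repeating a greeting for `name`."""
    greeting = f"* Hello, {name}! *"
    border = "*" * len(greeting)   # border matches the greeting's width
    lines = [border] + [greeting] * repeat + [border]
    return "\n".join(lines)

print(silly_banner("Ada"))
```

Exercises like this sneak in real concepts — functions, string multiplication, lists — while keeping the output fun enough for a 10-year-old to want to tweak.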

For even younger children Adafruit is ranging this Snap Circuits Jr kit: A tool-free box for kids aged 8+ which gives them more than 100 projects to build from snap together modules.

For older children comfortable with a little soldering, there’s this Solar Powered SKULL Blinky LED Pendant, devised by Lumen Electronic Jewellery — for a little creative, battery-less maker bling.

Adafruit also ranges kits from UK startup, Tech Will Save Us — such as this DIY Gamer Kit for budding techies. The first challenge is to put all its pieces together (soldering required). If done right your child will have an Arduino-based handheld games console with a matrix screen perfect for playing classics like Snake and Tetris.

That’s just a taster. Adafruit’s marketplace site offers plenty more ideas and kits for little makers.

Brilliant


 

Product: Gift subscription courses
Price: From $25 for one month
Age: 13+

Description: If you don’t want to gift a learning toy, Brilliant.org has you covered with gift subscriptions for its STEM-focused digital courses (options include one month, or a full year). The philosophy behind its courses is to teach core concepts in math, science, and engineering through fun but challenging puzzles and problem solving — with the visual sweetener of surreal cartoony illustrations to keep you inspired.

The courses aren’t exclusively designed for children so may not suit every teen. But for children already firmly engaged with math and science there’s plenty of mind-tickling stuff here to push logic and curiosity further.

GoldieBlox

goldieblox cloud light

Product: DIY Floating Cloud Light
Price: $30
Age: 8+

Description: Slime-wrangling, glitter-bespeckled tween YouTuber (stuff) hackers, GoldieBlox, have put a crafty twist on STEM maker kits this year. The children’s multimedia company has built up a maker following online for its DIY project videos. You can see them assemble this DIY cloud light in this video — and gift it to your own budding hardware hacker in handy kit form. The box includes all the necessary parts to put the lamp together, plus a couple of cards offering STEM facts. It’s pretty light touch learning, though. The main focus is clearly on fun and practical making. Glue and scissors at the ready!

Kano

Kano Disney Frozen II Coding Kit

Product: Disney Frozen II Coding Kit
Price: $80
Age: 6+

Description: UK startup Kano was one of the first in the modern wave of STEM device builders. It began with the idea of offering kids a build-it-yourself computer to learn coding, before expanding into brightly colored DIY IoT gizmos. More recently it’s got into co-branded e-products. First a Harry Potter Coding Kit — offering a motion-sensitive wand as the interface between real-world gestures and on-screen code. Now it’s thrown its lot in with Disney, inking a two-year IP licensing deal. So enter the Disney Frozen II Coding Kit, new for 2019, which packages a build-it-yourself gesture-sensor with a Disney-flavored block-based coding bundle accessed via the companion app. Kids use hand gestures to manipulate cartoon versions of their favorite characters and Disney landscapes on screen. So the e-product requires a compatible tablet or computer to function.

For parents of youngsters who prefer Disney’s other mega franchise, Star Wars, to Frozen‘s singing princesses and snowmen you only need point your peepers at Kano’s The Force Coding Kit instead, which offers much the same experience — but in a sci-fi wrapper. 

Product: Kano PC

kano pc

Price: $300
Age: K-12 (from 4+ to 19)

Description: Kano has more learn-to-code machinery to sell you this year. Its latest DIY computer — the Kano PC — is a fully fledged Windows 10 computer. This is a radical departure from its alternative origins building atop the single-board Raspberry Pi. Now your Kano dollars get you an Intel Atom quad-core chipset running at 1.44 GHz powering a plug-and-play hardware bundle comprising a touchscreen unit plus keyboard case. If the Microsoft Surface had a kid this would basically be it.

At this point, and at this price-point, you might be wondering why not just buy an actual Windows PC? To try to answer that Kano is touting “exclusive apps” of its own design that come pre-loaded on the device — offering guided learning in the areas of coding skills, programmable graphics and for understanding the inner workings of computers. The company’s approach to teaching coding runs the gamut from block-based drag-and-drop interfaces through to typed code, with projects offered in Python, JavaScript and Terminal commands. Hence the Kano PC is targeted at a very broad age range. Though, as it’s also a Windows PC, you might find your kids just using it to play Minecraft instead…

KinderLab

KinderLab Kibo

Product: KIBO robot kits
Price: From $200
Age: 4-7

Description: KinderLab has been making screen-free programmable STEAM (that ‘A’ is for arts) robotics kits since 2014 but the company is now making a wider push to get individual parents on board by selling its kits on Amazon. How does Kibo work? Kids play and learn by plugging a variety of proprietary sensors and outputs into ports on the wheeled bot. Such as motion and light sensors. Another add-on, which the company calls an “art platform”, lets kids embellish and customize the robot by designing paper hats to stick on it to dress it in a new context or character. The coding element comes in via a built-in barcode scanner that’s used to read instructions off of physical wooden code blocks. This means kids can ‘program’ the robot without using any screens at all.

KinderLab’s approach to teaching foundational engineering design concepts began life as a publicly funded research project. The company says Kibo draws on 20 years of learning science (as well as several years of active prototype testing in classrooms) to firm up its educational value. The academic backstory means there’s a wealth of curriculum-aligned content accompanying Kibo. This definitely feels like one of the more substantial and thoughtful STEM products on the market. It’s also great to see a product that leaves room for kids to introduce their own ideas.

Learning Resources

Learning Resources Coding Critters

Product: Coding Critters
Price: $40
Age: 4-10

Description: Learning Resources has been teaching young kids to grok the basics of sequential coding since late 2017, with its Botley programmable robot. New in its range of STEM toys for 2019 are Coding Critters: Remote control programmable pets targeted at young preschoolers. The entirely screen-free approach to teaching basic STEM concepts combines button-based controls on the battery-powered animal characters, themed code cards for reference and a storybook for parents to play a role in the narrative.

LEGO

lego boost star wars

Product: Star Wars Boost Droid Commander set
Price: $200
Age: 8+

Description: Another one for Star Wars fans: This Lego Boost kit gives kids a bunch of Lego bricks and robot parts to put together three classic droids from the Disney-owned movie franchise. Boost is Lego’s more elementary robotics kit offering (vs the veteran Mindstorms platform). Once the Bluetooth-controlled droids have been assembled, the companion app lets kids control and program them to carry out a series of missions, using a simple, block-based drag-and-drop coding interface.

Star Wars sound effects and music are included. But you’ll need to supply your own tablet to run the software. Or, well, you could just buy your kids a box of basic Lego bricks and let their imagination go wild.

Makeblock

Makeblock mTiny

Product: mTiny

Price: $180
Age: 4+
Description:
Shenzhen-based STEM kit maker, Makeblock, has unboxed a new cutesy learning robot for toddlers this year. Given the preschooler target there are no screens on mTiny (unless you count the bot’s emotive, shape-shifting eyes). Instead the package includes coding cards, themed map pieces and a storybook for controlling and interacting with the sensor-laden bot. The company says the product is designed to foster logical thinking via interactive play. And its marketing materials make grand claims about exposing kids to a range of cross-curricular concepts, from math to art, as well as coding logic.

To control mTiny, kids use the companion tap pen. Either as a joystick, or to execute code-based programming — by tapping it on the code cards. The sequence of these cards determines its movements and actions. The bot can also read and respond to scenery markings on the themed floor tiles.
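Under the hood, this card-driven model is essentially a tiny interpreter: the robot reads an ordered sequence of instruction cards and executes each one in turn. A minimal sketch of that idea in Python (the card names and grid movements here are invented for illustration — they are not Makeblock’s actual instruction set):

```python
# Toy interpreter for card-based sequential coding, in the spirit of
# screen-free robots like mTiny. Card names and movement rules are
# hypothetical illustrations, not the real product's instruction set.

MOVES = {
    "forward": (0, 1),   # (dx, dy) steps on a grid of map tiles
    "back": (0, -1),
    "left": (-1, 0),
    "right": (1, 0),
}

def run_cards(cards, start=(0, 0)):
    """Execute a sequence of movement cards, returning the visited path."""
    x, y = start
    path = [start]
    for card in cards:        # cards are executed strictly in order
        dx, dy = MOVES[card]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# Tapping the pen on: forward, forward, right
print(run_cards(["forward", "forward", "right"]))
# → [(0, 0), (0, 1), (0, 2), (1, 2)]
```

The point of the physical-card approach is exactly this mapping: the order of tangible cards *is* the program, which is how such toys teach sequencing without a screen.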

Mand Labs

Mand Labs

Product: KIT-1
Price: $150
Age: 8+

Description: Budding engineers won’t be short of experiments if gifted this electronics breadboarding project kit from Mand Labs. KIT-1 contains 165 electronics components — the real deal, not adapted for child’s play — plus tools and reference books for carrying out 54+ projects and experiments. Step-by-step projects like building an automatic night lamp, a security alarm or temperature sensor. The kit is intended as an entry into electronics so kids build circuits on a breadboard, rather than messing around with soldering. The kit is housed in a toolbox-style carry-case so it’s portable enough to take to a friend’s house. The product also comes with nine hours’ worth of HD learning videos for extended learning support.

Pai Technology

Botzees

Product: Botzees Robotics Kit
Price: $100
Age: 4+
Description: 
Pai Technology‘s range of STEM toys has an augmented reality twist. So as well as physical stuff to play with — block-based robots in this case — there’s a ‘code your own’ virtual adventure element adding a digital dimension. New in its range for this year is the Botzees Robotics Kit. In the box are six sensor-laden programmable robots in press-together block form. They’re controlled via a companion app with a basic, block-based coding interface. The app also offers 30 interactive AR puzzles for blended real-and-virtual world play, which the company says help teach foundational coding concepts like sequencing, looping, and conditional coding. Though for parents wanting to reduce kids’ screen time the focus on AR probably won’t be welcome, as kids will need to be stuck in front of a tablet to get the most out of Botzees.

Raspberry Pi

Raspberry Pi 4

Product: Raspberry Pi 4 Model B
Price: $35
Age: It depends

Description: The latest Raspberry Pi single-board computer, the Pi 4, dials up memory, speed and power, packs plenty of ports and boasts onboard wireless networking and Bluetooth. For seasoned makers the possibilities really are endless. But for parents wanting to inspire kids to learn coding the Pi Foundation‘s philosophy may look daunting. It’s not one of lots of hand-holding out of the box; the theory is that hard challenge is required to really learn. That means if you buy Pi as-is you’re getting the raw board, an OS to grapple with and an engaged community for learning support. It certainly won’t suit every child — but if you want to challenge a capable young mind that’s already showing a talent for digging into detail and figuring things out, the Pi 4 is a low-budget, high-potential option vs the many more basic (but pricey) plug-and-play devices which have piled into the market since Pi arrived to shake up the microprocessor scene.

Sphero


Product: RVR
Price: $250
Age: 5+
Description: 
Sphero is best known for its spherical remote-controlled robots but the company has this year branched out with a crowdsourced rover robot design called RVR. The more traditional four-wheeled design is sensor-packed and touted as an all terrain beast. The RVR can be driven (via app) right out of the box but has been designed for customization, with ports to accommodate third-party hardware — such as Raspberry Pi, Arduino, BBC micro:bit, or Sphero’s own littleBits. So it’s an extensible, hardware hackable, programmable robotics platform. On the software side, the Sphero Edu app offers a choice of coding styles to expand its educational potential — namely: Draw & Drive, Scratch Blocks and JavaScript.

Product: Specdrums 

Specdrums

Price: From $65
Age: 5+

Description: Musical edtech startup Specdrums is another Sphero acquisition. The premise behind its learning product is simple: Tap a color to make a sound. It achieves this with a wearable — a Bluetooth-connected, light-sensing ring (or pair of rings) — linked to its app. So it’s learning to jam, rather than learning to code but with plenty of techie smarts. The Specdrums’ Mix app offers musical loops and curated sound packs; playback and sound production tools; plus the ability to record your own samples. Aka everything a budding musician needs to tap out and mix impromptu beats, while looping in the real world as their musical playground.

Note the standard kit only contains one ring plus a colored playpad; for two rings the price steps up to $100. For the kit to work your child needs access to a smartphone or tablet to run the app and playback the music.

Ubtech

Ubtech

Product: JIMU Robot Mythical Series: FireBot Kit
Price: $130
Age: 8+

Description: Shenzhen-based Ubtech has been in the STEM robotics kit game for a number of years. New for 2019 is this motorized, LED-light-breathing dragon. As with previous kits in its brick-based JIMU series, the first step for the budding techie is to follow instructions and assemble their robot from all the constituent parts. Then the companion app offers a drag-and-drop code-block interface for programming FireBot and bringing its sensing powers to life.

Uber has again been denied licence renewal in London over safety risks

Two months after being given a two-month reprieve on its licence to operate in London, Uber has once again been denied a full renewal by the city’s transport regulator — which said today that it had found a “pattern of failures” which put “passenger safety and security at risk”.

Uber has confirmed it will appeal the decision.

The UK capital is a major European market for Uber, which claims to have 3.5 million users and 45,000 registered drivers in the city.

The ride-hailing giant’s troubles in London began in 2017 when Transport for London (TfL) made the shock decision to deny its licence renewal, citing a range of concerns including how Uber reported criminal offences; how it carried out background checks on drivers; and its use of proprietary software that could be used to block regulatory oversight.

In the latest decision against Uber, TfL concludes the company is not “fit and proper” to hold a private hire vehicle licence, saying it identified thousands of regulatory breaches — with a key issue being a change to Uber’s systems that allowed unauthorised drivers to upload their photos to other Uber driver accounts.

“This allowed [unauthorised drivers] to pick up passengers as though they were the booked driver, which occurred in at least 14,000 trips — putting passenger safety and security at risk,” TfL writes.

“This means all the journeys were uninsured and some passenger journeys took place with unlicensed drivers, one of which had previously had their licence revoked by TfL.”

It also identified another safety and security failure that allowed dismissed or suspended drivers to create an Uber account and carry passengers.

“TfL recognises the steps that Uber has put in place to prevent this type of activity. However, it is a concern that Uber’s systems seem to have been comparatively easily manipulated,” it adds.

The regulator says it identified further serious breaches, including several insurance-related issues. Some of these led it to prosecute Uber, earlier this year, for causing and permitting the use of vehicles without the correct hire or reward insurance in place.

While TfL highlights “a number of positive changes and improvements to [Uber’s] culture, leadership and systems”, since the company was granted a 15-month provisional licence by a magistrate in June 2018 — including noting that it has interacted with TfL in “a transparent and productive manner” — it concludes it cannot ignore the risks posed by “a pattern of failures” from “weak systems and processes”.

“This pattern of regulatory breaches led TfL to commission an independent assessment of Uber’s ability to prevent incidents of this nature happening again. This work has led TfL to conclude that it currently does not have confidence that Uber has a robust system for protecting passenger safety, while managing changes to its app,” it says.

Uber can continue to operate in London during the appeals process. So passengers will likely see no change in the short term. TfL says Uber has 21 days to file an appeal.

During the appeals process the company may also seek to implement changes to demonstrate to a magistrate that it is fit and proper by the time of the appeal hearing. So, again, it’s possible Uber could win another provisional licence in future, depending on the steps it takes to improve its safety systems. But there’s no doubt the regulator is in the driving seat at this point.

TfL says it will continue to “closely scrutinise” Uber during any continued operation, including checking it meets the 20 conditions it set out in September 2019.

“Particular attention will be paid to ensuring that the management have robust controls in place to manage changes to the Uber app so that passenger safety is not put at risk,” it adds.

Commenting in a statement, Helen Chapman, director of licensing, regulation and charging at TfL, said: “Safety is our absolute top priority. While we recognise Uber has made improvements, it is unacceptable that Uber has allowed passengers to get into minicabs with drivers who are potentially unlicensed and uninsured.

“It is clearly concerning that these issues arose, but it is also concerning that we cannot be confident that similar issues won’t happen again in future. If they choose to appeal, Uber will have the opportunity to publicly demonstrate to a magistrate whether it has put in place sufficient measures to ensure potential safety risks to passengers are eliminated. If they do appeal, Uber can continue to operate and we will closely scrutinise the company to ensure the management has robust controls in place to ensure safety is not compromised during any changes to the app.”

Responding to TfL’s decision in a statement, Uber’s regional general manager for Northern & Eastern Europe, Jamie Heywood, dubbed it “extraordinary and wrong”.

“We have fundamentally changed our business over the last two years and are setting the standard on safety. TfL found us to be a fit and proper operator just two months ago, and we continue to go above and beyond,” he said. “On behalf of the 3.5 million riders and 45,000 licensed drivers who depend on Uber in London, we will continue to operate as normal and will do everything we can to work with TfL to resolve this situation.”

On driver ID specifically, Heywood added: “Over the last two months we have audited every driver in London and further strengthened our processes. We have robust systems and checks in place to confirm the identity of drivers and will soon be introducing a new facial matching process, which we believe is a first in London taxi and private hire.”

Amnesty International latest to slam surveillance giants Facebook and Google as “incompatible” with human rights

Human rights charity Amnesty International is the latest to call for reform of surveillance capitalism — blasting the business models of “surveillance giants” Facebook and Google in a new report which warns the pair’s market dominating platforms are “enabling human rights harm at a population scale”.

“[D]espite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost,” Amnesty warns. “The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”

“This isn’t the internet people signed up for,” it adds.

What’s most striking about the report is the familiarity of the arguments. There is now a huge weight of consensus criticism around surveillance-based decision-making — from Apple’s own Tim Cook through scholars such as Shoshana Zuboff and Zeynep Tufekci to the United Nations — that’s itself been fed by a steady stream of reportage of the individual and societal harms flowing from platforms’ pervasive and consentless capturing and hijacking of people’s information for ad-based manipulation and profit.

This core power asymmetry is maintained and topped off by self-serving policy positions which at best fiddle around the edges of an inherently anti-humanitarian system. While platforms have become practiced in dark arts PR — offering, at best, a pantomime ear to the latest data-enabled outrage that’s making headlines, without ever actually changing the underlying system. That surveillance capitalism’s abusive modus operandi is now inspiring governments to follow suit — aping the approach by developing their own data-driven control systems to straitjacket citizens — is exceptionally chilling.

But while the arguments against digital surveillance are now very familiar what’s still sorely lacking is an effective regulatory response to force reform of what is at base a moral failure — and one that’s been allowed to scale so big it’s attacking the democratic underpinnings of Western society.

“Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression – but evidently, given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights,” Amnesty writes.

“The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than aberrations.”

Needless to say Facebook and Google do not agree with Amnesty’s assessment. But, well, they would say that wouldn’t they?

Amnesty’s report notes there is now a whole surveillance industry feeding this beast — from adtech players to data brokers — while pointing out that the dominance of Facebook and Google, aka the adtech duopoly, over “the primary channels that most of the world relies on to engage with the internet” is itself another harm, as it lends the pair of surveillance giants “unparalleled power over people’s lives online”.

“The power of Google and Facebook over the core platforms of the internet poses unique risks for human rights,” it warns. “For most people it is simply not feasible to use the internet while avoiding all Google and Facebook services. The dominant internet platforms are no longer ‘optional’ in many societies, and using them is a necessary part of participating in modern life.”

Amnesty concludes that it is “now evident that the era of self-regulation in the tech sector is coming to an end” — saying further state-based regulation will be necessary. Its call there is for legislators to follow a human rights-based approach to rein in surveillance giants.

You can read the report in full here (PDF).

A 10-point plan to reboot the data industrial complex for the common good

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.

In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.

“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”

“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”

One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.

“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.

Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.

In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”

There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.

“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.

There are no fewer than six afterwords attached to the manifesto — a testament to the regard in which Buttarelli’s ideas are held among privacy, digital and human rights campaigners.

The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”

At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.


A 10-point plan to reboot the data industrial complex for the common good

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.

In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.

“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”

“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”

One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.

“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.

Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.

In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”

There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.

“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.

There are no fewer than six afterwords attached to the manifesto — a testament to the esteem in which Buttarelli’s ideas are held among privacy, digital and human rights campaigners.

The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”

At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.

Buttarelli also suggests creating a forum for civil liberties advocates, environmental scientists and machine learning experts who can advise on EU funding for R&D to put the focus on technology that “empowers individuals and safeguards the environment”.

Another call is to build a “European digital commons” to support “open-source tools and interoperability between platforms, a right to one’s own identity or identities, unlimited use of digital infrastructure in the EU, encrypted communications, and prohibition of behaviour tracking and censorship by dominant platforms”.

“Digital technology and privacy regulation must become part of a coherent solution for both combating and adapting to climate change,” he suggests in a section dedicated to a digital Green New Deal — even while warning that current applications of powerful AI technologies appear to be contributing to the problem.

“AI’s carbon footprint is growing,” he points out, underlining the environmental wastage of surveillance capitalism. “Industry is investing based on the (flawed) assumption that AI models must be based on mass computation.

“Carbon released into the atmosphere by the accelerating increase in data processing and fossil fuel burning makes climatic events more likely. This will lead to further displacement of peoples and intensification of calls for ‘technological solutions’ of surveillance and border controls, through biometrics and AI systems, thus generating yet more data. Instead, we need to ‘greenjacket’ digital technologies and integrate them into the circular economy.”

Another key call — and one Buttarelli had been making presciently in recent years — is for more joint working between EU regulators towards common sustainable goals.

“All regulators will need to converge in their policy goals — for instance, collusion in safeguarding the environment should be viewed more as an ethical necessity than as a technical breach of cartel rules. In a crisis, we need to double down on our values, not compromise on them,” he argues, going on to voice support for antitrust and privacy regulators to co-operate to effectively tackle data-based power asymmetries.

“Antitrust, democracies’ tool for restraining excessive market power, therefore is becoming again critical. Competition and data protection authorities are realising the need to share information about their investigations and even cooperate in anticipating harmful behaviour and addressing ‘imbalances of power rather than efficiency and consent’.”

On the General Data Protection Regulation (GDPR) specifically — Europe’s current framework for data protection — Buttarelli gives a measured assessment, saying “first impressions indicate big investments in legal compliance but little visible change to data practices”.

He says Europe’s data protection authorities will need to use all the tools at their disposal — and find the necessary courage — to take on the dominant tracking and targeting digital business models fuelling so much exploitation and inequality.

He also warns that GDPR alone “will not change the structure of concentrated markets or in itself provide market incentives that will disrupt or overhaul the standard business model”.

“True privacy by design will not happen spontaneously without incentives in the market,” he adds. “The EU still has the chance to entrench the right to confidentiality of communications in the ePrivacy Regulation under negotiation, but more action will be necessary to prevent further concentration of control of the infrastructure of manipulation.”

Looking ahead, the manifesto paints a bleak picture of where market forces could be headed without regulatory intervention focused on defending human rights. “The next frontier is biometric data, DNA and brainwaves — our thoughts,” he suggests. “Data is routinely gathered in excess of what is needed to provide the service; standard tropes, like ‘improving our service’ and ‘enhancing your user experience’ serve as decoys for the extraction of monopoly rents.”

There is optimism too, though — that technology in service of society can be part of the solution to existential crises like climate change; and that data, lawfully collected, can support public good and individual self-realization.

“Interference with the right to privacy and personal data can be lawful if it serves ‘pressing social needs’,” he suggests. “These objectives should have a clear basis in law, not in the marketing literature of large companies. There is no more pressing social need than combating environmental degradation” — adding that: “The EU should promote existing and future trusted institutions, professional bodies and ethical codes to govern this exercise.”

In instances where platforms are found to have systematically gathered personal data unlawfully, Buttarelli trails the interesting idea of an amnesty for those responsible “to hand over their optimisation assets” — as a means not only of resetting power asymmetries and rebalancing the competitive playing field but of enabling societies to reclaim these stolen assets and reapply them for a common good.

His hope for Europe’s Data Protection Board — the body that offers guidance and coordinates interactions between EU Member States’ data watchdogs — is for it to be “the driving force supporting the Global Privacy Assembly in developing a common vision and agenda for sustainable privacy”.

The manifesto also calls for European regulators to better reflect the diversity of people whose rights they’re being tasked with safeguarding.

The document, which is entitled Privacy 2030: A vision for Europe, has been published on the website of the International Association of Privacy Professionals ahead of its annual conference this week.

Buttarelli had intended — but was ultimately unable — to publish his thoughts on the future of privacy this year, hoping to inspire discussion in Europe and beyond. In the event, the manifesto has been compiled posthumously by Christian D’Cunha, head of his private office, who writes that he has drawn on discussions with the data protection supervisor in his final months — with the aim of plotting “a plausible trajectory of his most passionate convictions”.