No technical reason to exclude Huawei as 5G supplier, says UK committee

A UK parliamentary committee has concluded there are no technical grounds for excluding Chinese network kit vendor Huawei from the country’s 5G networks.

In a letter from the chair of the Science & Technology Committee to the UK’s digital minister Jeremy Wright, the committee says: “We have found no evidence from our work to suggest that the complete exclusion of Huawei from the UK’s telecommunications networks would, from a technical point of view, constitute a proportionate response to the potential security threat posed by foreign suppliers.”

Though the committee does go on to recommend the government mandate the exclusion of Huawei from the core of 5G networks, noting that UK mobile network operators have “mostly” done so already — but on a voluntary basis.

If it places a formal requirement on operators not to use Huawei for core supply, the committee urges the government to provide “clear criteria” for the exclusion so that it could be applied to other suppliers in future.

Reached for a response to the recommendations, a government spokesperson told us: “The security and resilience of the UK’s telecoms networks is of paramount importance. We have robust procedures in place to manage risks to national security and are committed to the highest possible security standards.”

The spokesperson for the Department for Digital, Culture, Media and Sport added: “The Telecoms Supply Chain Review will be announced in due course. We have been clear throughout the process that all network operators will need to comply with the Government’s decision.”

In recent years the US administration has been putting pressure on allies around the world to entirely exclude Huawei from 5G networks — claiming the Chinese company poses a national security risk.

Australia announced it was banning Huawei and another Chinese vendor ZTE from providing kit for its 5G networks last year. Though in Europe there has not been a rush to follow the US lead and slam the door on Chinese tech giants.

In April leaked information from a UK Cabinet meeting suggested the government had settled on a policy of granting Huawei access as a supplier for some non-core parts of domestic 5G networks, while requiring they be excluded from supplying components for use in network cores.

On this somewhat fuzzy issue of delineating core vs non-core elements of 5G networks, the committee writes that it “heard unanimously and clearly” from witnesses that there will still be a distinction between the two in the next-gen networks.

It also cites testimony by the technical director of the UK’s National Cyber Security Centre (NCSC), Dr Ian Levy, who told it “geography matters in 5G”, and pointed out Australia and the UK have very different “laydowns” — meaning “we may have exactly the same technical understanding, but come to very different conclusions”.

In a response statement to the committee’s letter, Huawei SVP Victor Zhang welcomed the committee’s “key conclusion” before going on to take a thinly veiled swipe at the US — writing: “We are reassured that the UK, unlike others, is taking an evidence based approach to network security. Huawei complies with the laws and regulations in all the markets where we operate.”

The committee’s assessment is not all comfortable reading for Huawei, though, with the letter also flagging the damning conclusions of the most recent Huawei Oversight Board report which found “serious and systematic defects” in its software engineering and cyber security competence — and urging the government to monitor Huawei’s response to the raised security concerns, and to “be prepared to act to restrict the use of Huawei equipment if progress is unsatisfactory”.

Huawei has previously pledged to spend $2BN addressing security shortcomings related to its UK business — a figure it was forced to qualify as an “initial budget” after that same Oversight Board report.

“It is clear that Huawei must improve the standard of its cybersecurity,” the committee warns.

It also suggests the government consult on whether telecoms regulator Ofcom needs stronger powers to be able to force network suppliers to clean up their security act, writing that: “While it is reassuring to hear that network operators share this point of view and are ready to use commercial pressure to encourage this, there is currently limited regulatory power to enforce this.”

Another committee recommendation is for the NCSC to be consulted on whether similar security evaluation mechanisms should be established for other 5G vendors — such as Ericsson and Nokia, two Europe-based kit vendors which, unlike Huawei, are expected to be supplying core 5G.

“It is worth noting that an assurance system comparable to the Huawei Cyber Security Evaluation Centre does not exist for other vendors. The shortcomings in Huawei’s cyber security reported by the Centre cannot therefore be directly compared to the cyber security of other vendors,” it notes.

On the issue of 5G security generally, the committee dubs this “critical”, adding that “all steps must be taken to ensure that the risks are as low as reasonably possible”.

Where “essential services” that make use of 5G networks are concerned, the committee says witnesses were clear such services must be able to continue to operate safely even if the network connection is disrupted. Government must ensure measures are put in place to safeguard operation in the event of cyber attacks, floods, power cuts and other comparable events, it adds. 

While the committee concludes there is no technical reason to limit Huawei’s access to UK 5G, the letter does make a point of highlighting other considerations, most notably human rights abuses, emphasizing its conclusion does not factor them in at all — and pointing out: “There may well be geopolitical or ethical grounds… to enact a ban on Huawei’s equipment”.

It adds that Huawei’s global cyber security and privacy officer, John Suffolk, confirmed that a third party had supplied Huawei services to Xinjiang’s Public Security Bureau, despite Huawei forbidding its own employees from misusing IT and comms tech to carry out surveillance of users.

The committee suggests Huawei technology may therefore be being used to “permit the appalling treatment of Muslims in Western China”.

Cambridge Uni graphene spin-out bags $16M to get its first product to market

Cambridge, UK-based graphene startup Paragraf has closed a £12.8 million (~$16M) Series A round of funding led by early-stage VC Parkwalk. Also investing this round: IQ Capital Partners, Amadeus Capital Partners and Cambridge Enterprise, the commercialisation arm of the University of Cambridge, plus several unnamed angel investors.

The funding will be used to bring the 2015-founded Cambridge University spin-out’s first graphene-based electronics products to market — transitioning the startup into a commercial, revenue-generating phase.

When we covered Paragraf’s $3.9M seed raise just over a year ago, CEO and co-founder Dr Simon Thomas told us it was looking to raise a Series A ahead of Q3 2019 — so the business looks to be right on track at this stage.

During the seed phase Paragraf says it was able to deliver a manufacturing facility, graphene layer production and first device prototypes “significantly” ahead of plan.

It’s now switching focus to products — with strategic volume device production partners, and commercialisation of its first device: a super-high sensitivity magnetic field detector which it says operates over temperature, field and power ranges “that no other device can currently achieve”.

Commenting in a statement, Thomas added: “I am extremely proud of the young team at Paragraf who have collectively delivered the early strategy milestones with great skill. This next phase will allow Paragraf to make these truly game-changing technologies a reality. Paragraf is continually seeking like-minded collaborative development, production and commercial partners to accelerate the delivery of the many exciting electronics technology opportunities graphene has to offer.”

In terms of the touted benefits of graphene, the atom-layer-thick 2D material has long been exciting scientists as a potential replacement for silicon in computer chips — thanks to a raft of key properties including high conductivity, strength, flexibility and thermal integrity. Researchers suggest it could deliver a performance speed increase of up to 1000x, while cutting energy use by a factor of up to 50.

But while excitement about how graphene could transform electronics has been plentiful in the more than a decade since it was discovered, those seeking to commercialize the wonder material have found it challenging to manufacture at commercial grade and scale.

This is where Paragraf aims to come in — claiming to be the first company to deliver IP-protected graphene technology using what it bills as “standard, mass production scale manufacturing approaches”.

It also says its first sensor products have demonstrated “order of magnitude operational improvements over today’s incumbents”.

Such claims of course remain to be tested in the wild but Paragraf isn’t dialling down the hype vis-a-vis the transformative potential of baking graphene into next-gen electronics.

“Achieving large-scale, graphene-based production technology will enable next generation electronics, including vastly increased computing speeds, significantly improved medical diagnostics and higher efficiency renewable energy generation as well as currently unachievable products such as instant charging batteries and very low power, flexible electronics,” it writes.

A year ago Thomas told us Paragraf expected high-tech applications of graphene in consumer technologies to appear in the general market within the next 2-3 years — a timeline that should now have shrunk to just a year or two out.

Italy stings Facebook with $1.1M fine for Cambridge Analytica data misuse

Italy’s data protection watchdog has issued Facebook with a €1 million (~$1.1M) fine for violations of local privacy law attached to the Cambridge Analytica data misuse scandal.

Last year it emerged that up to 87 million Facebook users had had their data siphoned out of the social media giant’s platform by an app developer working for the controversial (and now defunct) political data company, Cambridge Analytica.

The offences in question occurred prior to Europe’s tough new data protection framework, GDPR, coming into force — hence the relatively small size of the fine in this case, which has been calculated under Italy’s prior data protection regime. (Whereas fines under GDPR can scale as high as 4% of a company’s annual global turnover.)
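For a sense of what that timing difference means: Facebook reported global revenue of roughly $55.8BN for 2018, so a theoretical maximum GDPR penalty calculated against that figure could have exceeded $2.2BN — around two thousand times the fine actually issued here.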

Reached for comment a Facebook spokesperson said: “We have said before that we wish we had done more to investigate claims about Cambridge Analytica in 2015. However, evidence indicates that no Italian user data was shared with Cambridge Analytica. Dr Kogan only shared data with Cambridge Analytica in relation to US users. We made major changes to our platform back then and have also significantly restricted the information which app developers can access. We’re focused on protecting people’s privacy and have invested in people, technology and partnerships, including hiring more than 20,000 people focused on safety and security over the last year. We will review the Garante’s decision and will continue to engage constructively with their concerns.”

Last year the UK’s DPA similarly issued Facebook with a £500k penalty for the Cambridge Analytica breach, although Facebook is appealing — in that case it has also highlighted the regulator not having found evidence UK users’ data was shared with Cambridge Analytica, though it clearly was processed by Kogan.

The Italian regulator says 57 Italian Facebook users downloaded Dr Aleksandr Kogan’s Thisisyourdigitallife quiz app, which was the app vehicle used to scoop up Facebook user data en masse — with a further 214,077 Italian users also having their personal information processed without their consent as a result of how the app could access data on each user’s Facebook friends.

In an earlier intervention in March, the Italian regulator challenged Facebook over the misuse of the data — and the company opted to pay a reduced amount of €52,000 in the hopes of settling the matter.

However, the Italian DPA has decided that the scale of the violation of personal data and consent disqualifies the case for a reduced payment — so it has now issued Facebook with a €1M fine.

“The sum takes into account, in addition to the size of the database, also the economic conditions of Facebook and the number of global and Italian users of the company,” it writes in a press release on its website [translated by Google Translate].

At the time of writing its full decision on the case was not available.

Late last year the Italian regulator fined Facebook €10M for misleading users over its sign-in practices.

While, in 2017, it also slapped the company with a €3M penalty for a controversial decision to begin helping itself to WhatsApp users’ data — despite the latter’s prior claims that user data would never be shared with Facebook.

Going forward, where Facebook’s use (and potential misuse) of Europeans’ data is concerned, all eyes are on the Irish Data Protection Commission; aka its lead regulator in the region on account of the location of Facebook’s international HQ.

The Irish DPC has a full suite of open investigations into Facebook and Facebook-owned companies — covering major issues such as security breaches and questions over the legal basis it claims for processing people’s data, among a number of other big tech related probes.

The watchdog has suggested decisions on some of this tech giant-related case-load could land this summer.

This report was updated with comment from Facebook.

Facebook’s content oversight board plan is raising more questions than it answers

Facebook has produced a report summarizing feedback it’s taken in on its idea of establishing a content oversight board to help arbitrate on moderation decisions.

Aka the ‘supreme court of Facebook’ concept first discussed by founder Mark Zuckerberg last year, when he told Vox:

[O]ver the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

Facebook has since suggested the oversight board will be up and running later this year. And has just wheeled out its global head of policy and spin for a European PR push to convince regional governments to give it room for self-regulation 2.0, rather than slapping it with broadcast-style regulations.

The latest report, which follows a draft charter unveiled in January, rounds up input fed to Facebook via six “in-depth” workshops and 22 roundtables convened by Facebook and held in locations of its choosing around the world.

In all, Facebook says the events were attended by 650+ people from 88 different countries — though it further qualifies that by saying it had “personal discussions” with more than 250 people and received more than 1,200 public consultation submissions.

“In each of these engagements, the questions outlined in the draft charter led to thoughtful discussions with global perspectives, pushing us to consider multiple angles for how this board could function and be designed,” Facebook writes.

It goes without saying that this input represents a minuscule fraction of the actual ‘population’ of Facebook’s eponymous platform, which now exceeds 2.2BN accounts (an unknown portion of which will be fake/duplicates), while its operations stretch to more than double the number of markets represented by individuals at the events.

The feedback exercise — as indeed the concept of the board itself — is inevitably an exercise in opinion abstraction. Which gives Facebook leeway to shape the output as it prefers. (And, indeed, the full report notes that “some found this public consultation ‘not nearly iterative enough, nor transparent enough, to provide any legitimacy’ to the process of creating the Board”.)

In a blog post providing its spin on the “global feedback and input”, Facebook culls three “general themes” it claims emerged from the various discussions and submissions — namely that: 

  • People want a board that exercises independent judgment — not judgment influenced by Facebook management, governments or third parties, writing: “The board will need a strong foundation for its decision-making, a set of higher-order principles — informed by free expression and international human rights law — that it can refer to when prioritizing values like safety and voice, privacy and equality”. Though the full report flags up the challenge of ensuring the sought-for independence, and it’s not clear Facebook will be able to create a structure that can stand apart from its own company or indeed other lobbyists
  • How the board will select and hear cases, deliberate together, come to a decision and communicate its recommendations both to Facebook and the public are key considerations — though those vital details remain tbc. “In making its decisions, the board may need to consult experts with specific cultural knowledge, technical expertise and an understanding of content moderation,” Facebook suggests, implying the boundaries of the board are unlikely to be firmly fixed
  • People also want a board that’s “as diverse as the many people on Facebook and Instagram” — the problem being that’s clearly impossible, given the planet-spanning size of Facebook platforms. Another desire Facebook highlights is for the board to be able to encourage it to make “better, more transparent decisions”. The need for board decisions (and indeed decisions Facebook takes when setting up the board) to be transparent emerges as a major theme in the report. In terms of the board’s make-up, Facebook says it should comprise experts with different backgrounds, different disciplines, and different viewpoints — “who can all represent the interests of a global community”. Though there’s clearly going to be differing views on how or even whether that’s possible to achieve; and therefore questions over how a 40-odd member body, that will likely rarely sit in plenary, can plausibly act as a prism for Facebook’s user-base

The report is worth reading in full to get a sense of the broad spectrum of governance questions and conundrums Facebook is here wading into.

If, as very much appears to be the case, this is a Facebook-configured exercise in blame spreading for the problems its platform hosts, the surface area for disagreement and dispute will clearly be massive — and from the company’s point of view that already looks like a win. Given how, since 2016, Facebook (and Zuckerberg) have been the conduit for so much public and political anger linked to the spreading and accelerating of harmful online content.

Differing opinions will also provide cover for Facebook to justify starting “narrow”. Which it has said it will do with the board, aiming to have something up and running by the end of this year. But that just means it’ll be managing expectations of how little actual oversight will flow right from the very start.

The report also shows that Facebook’s claimed ‘listening ear’ for a “global perspective” has some very hard limits.

So while those involved in the consultation are reported to have repeatedly suggested the oversight board should not just be limited to content judgement — but should also be able to make binding decisions related to things like Facebook’s newsfeed algorithm or wider use of AI by the company — Facebook works to shut those suggestions down, underscoring that the scope of the oversight will be limited to content.

“The subtitle of the Draft Charter — “An Oversight Board for Content Decisions” — made clear that this body would focus specifically on content. In this regard, Facebook has been relatively clear about the Board’s scope and remit,” it writes. “However, throughout the consultation period, interlocutors often proposed that the Board hear a wide range of controversial and emerging issues: newsfeed ranking, data privacy, issues of local law, artificial intelligence, advertising policies, and so on.”

It goes on to admit that “the question persisted: should the Board be restricted to content decisions only, without much real influence over policy?” — before picking a selection of responses that appear intended to fuzz the issue, allowing it to position itself as seeking a reasoned middle ground.

“In the end, balance will be needed; Facebook will need to resolve tensions between minimalist and maximalist visions of the Board,” it concludes. “Above all, it will have to demonstrate that the Oversight Board — as an enterprise worth doing — adds value, is relevant, and represents a step forward from content governance as it stands today.”

Sample cases the report suggests the board could review — as suggested by participants in Facebook’s consultation — include:

  • A user shared a list of men working in academia, who were accused of engaging in inappropriate behavior and/or abuse, including unwanted sexual advances;
  • A Page that commonly uses memes and other forms of satire shared posts that used discriminatory remarks to describe a particular demographic group in India;
  • A candidate for office made strong, disparaging remarks to an unknown passerby regarding their gender identity and livestreamed the interaction. Other users reported this due to safety concerns for the latter person;
  • A government official suggested that a local minority group needed to be cautious, comparing that group’s behavior to that of other groups that have faced genocide

So, again, it’s easy to see the kinds of controversies and indeed criticisms that individuals sitting on Facebook’s board will be opening themselves up to — whichever way their decisions fall.

A content review board that will inevitably remain linked to (if not also reimbursed via) the company that establishes it, and will not be granted powers to set wider Facebook policy — but will instead be tasked with the impossible job of trying to please all of the Facebook users (and critics) all of the time — does certainly risk looking like Facebook’s stooge; a conduit for channeling dirty and political content problems that have the potential to go viral and threaten its continued ability to monetize the stuff that’s uploaded to its platforms.

Facebook’s preferred choice of phrase to describe its users — “global community” — is a tellingly flat one in this regard.

The company conspicuously avoids talk of communities, plural. Instead the closest we get here is a claim that its selective consultation exercise is “ensuring a global perspective”, as if a singular essence can somehow be distilled from a non-representative sample of human opinion — when in fact the stuff that flows across its platforms is quite the opposite; multitudes of perspectives from individuals and communities whose shared use of Facebook does not an emergent ‘global community’ make.

This is why Facebook has struggled to impose a single set of ‘community standards’ across a platform that spans so many contexts; a one-size-fits all approach very clearly doesn’t fit.

Yet it’s not at all clear how Facebook creating yet another layer of content review changes anything much for that challenge — unless the oversight body is mostly intended to act as a human shield for the company itself, putting a firewall between it and certain highly controversial content; aka Facebook’s supreme court of taking the blame on its behalf.

Just one of the difficult content moderation issues embedded in the businesses of sociotechnical, planet-spanning social media platform giants like Facebook — hate speech — defies a top-down ‘global’ fix.

As Evelyn Douek wrote last year vis-a-vis hate speech on the Lawfare blog, after Zuckerberg had floated the idea of a governance structure for online speech: “Even if it were possible to draw clear jurisdictional lines and create robust rules for what constitutes hate speech in countries across the globe, this is only the beginning of the problem: within each jurisdiction, hate speech is deeply context-dependent… This context dependence presents a practically insuperable problem for a platform with over 2 billion users uploading vast amounts of material every second.”

A cynic would say Facebook knows it can’t fix planet-scale content moderation and still turn a profit. So it needs a way to distract attention and shift blame.

If it can get enough outsiders to buy into its oversight board — allowing it to pass off the oxymoron of “global governance”, via whatever self-styled structure it allows to emerge from these self-regulatory seeds — the company’s hope must be that the device also works as a bolster against political pressure.

Both over particular problem/controversial content, and also as a vehicle to shrink the space for governments to regulate Facebook.

In a video discussion also embedded in Facebook’s blog post — in which Zuckerberg couches the oversight board project as “a big experiment that we hope can pioneer a new model for the governance of speech on the Internet” — the Facebook founder also makes reference to calls he’s made for more regulation of the Internet. As he does so he immediately qualifies the statement by blending state regulation with industry self-regulation — saying the kind of regulation he’s asking for is “in some cases by democratic process, in other cases through independent industry process”.

So Zuckerberg is making a clear pitch to position Facebook as above the rule of nation state law — and setting up a “global governance” layer is the self-serving vehicle of choice for the company to try and overtake democracy.

Even if Facebook’s oversight board’s structure is so cunningly fashioned as to present to a rationally minded individual as, in some senses, ‘independent’ from Facebook, its entire being and function will remain dependent on Facebook’s continued existence.

Whereas if individual markets impose their own statutory regulations on Internet platforms, based on democratic and societal principles, Facebook will have no control over the rules they impose, direct or otherwise — with uncontrolled compliance costs falling on its business.

It’s easy to see which model sits most easily with Zuckerberg the businessman — a man who has also demonstrated he will not be held personally accountable for what happens on his platform.

Not when he’s asked by one (non-US) parliament, nor even by representatives from nine parliaments — all keen to discuss the societal fallouts of political disinformation and hate speech spread and accelerated on Facebook.

Turns out that’s not the kind of ‘global perspective’ Facebook wants to sell you.

Europe should ban AI for mass surveillance and social credit scoring, says advisory group

An independent expert group tasked with advising the European Commission to inform its regulatory response to artificial intelligence — to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.

This follows earlier ethics guidelines for “trustworthy AI”, put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules.

The AI HLEG’s full policy recommendations comprise a highly detailed 50-page document — which can be downloaded from this web page. The group, which was set up in June 2018, is made up of a mix of industry AI experts, civic society representatives, political advisers and policy wonks, academics and legal experts.

The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit to not engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.) 

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).

Last week the UK’s data protection watchdog fired an even more specific shot across the bows of the online behavioral ad industry — warning that adtech’s mass-scale processing of web users’ personal data for targeting ads does not comply with EU privacy standards. The industry was told its rights-infringing practices must change, even if the Information Commissioner’s Office isn’t about to bring down the hammer just yet. But the reform warning was clear.

As EU policymakers work on fashioning a rights-respecting regulatory framework for AI, seeking to steer the next decade or more of cutting-edge tech developments in the region, the wider attention and scrutiny that will draw to digital practices and business models looks set to drive a clean-up of problematic digital practices that have hitherto been able to proliferate under little or no regulation.

The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”.

“Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write.

“Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control. Promote the education of the general public in personal data management, including individuals’ awareness of and empowerment in AI personal data-based decision-making processes. Create technology solutions to provide individuals with information and control over how their data is being used, for example for research, on consent management and transparency across European borders, as well as any improvements and outcomes that have come from this, and develop standards for secure data exchange based on personal data control.”
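Federated machine learning, which the group name-checks as one such privacy-preserving method, works by keeping training where the personal data lives: each participant trains a model locally and only model parameters — never raw data — travel to a central aggregator to be averaged. Below is a minimal, illustrative Python sketch of that federated averaging idea; all function and variable names are ours for illustration, not drawn from the report.

```python
# Minimal sketch of federated averaging: each "client" trains on its own
# data locally; only model weights are shared and averaged centrally.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain logistic-regression SGD."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)    # gradient step
    return w

def federated_round(global_w, clients):
    """Average locally trained weights, weighted by each dataset's size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

# Toy demo: three "devices", each holding its own private dataset.
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    clients.append((X, y))

w = np.zeros(4)
for _ in range(20):
    w = federated_round(w, clients)  # raw data never leaves a client
print("learned weights:", w)
```

Real deployments layer extras on top of this core loop (secure aggregation, differential privacy), but the data-stays-local principle is what connects the technique to the GDPR implementation tools the HLEG is calling for.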

Other policy suggestions among the many included in the HLEG’s report are that AI systems which interact with humans should include a mandatory self-identification. Which would mean no sneaky Google Duplex human-speech mimicking bots. In such a case the bot would have to introduce itself up front — thereby giving the human caller a chance to disengage.

The HLEG also recommends establishing a “European Strategy for Better and Safer AI for Children”. Concern and queasiness about rampant datafication of children, including via commercial tracking of their use of online services, has been raised in multiple EU member states.

“The integrity and agency of future generations should be ensured by providing Europe’s children with a childhood where they can grow and learn untouched by unsolicited monitoring, profiling and interest invested habitualisation and manipulation,” the group writes. “Children should be ensured a free and unmonitored space of development and upon moving into adulthood should be provided with a “clean slate” of any public or private storage of data related to them. Equally, children’s formal education should be free from commercial and other interests.”

Member states and the Commission should also devise ways to continuously “analyse, measure and score the societal impact of AI”, suggests the HLEG — to keep tabs on positive and negative impacts so that policies can be adapted to take account of shifting effects.

“A variety of indices can be considered to measure and score AI’s societal impact such as the UN Sustainable Development Goals and the Social Scoreboard Indicators of the European Social Pillar. The EU statistical programme of Eurostat, as well as other relevant EU Agencies, should be included in this mechanism to ensure that the information generated is trusted, of high and verifiable quality, sustainable and continuously available,” it suggests. “AI-based solutions can help the monitoring and measuring its societal impact.”

The report is also heavy on pushing for the Commission to bolster investment in AI — calling particularly for more help for startups and SMEs to access funding and advice, including via the InvestEU program.

Another suggestion is the creation of an EU-wide network of AI business incubators to connect academia and industry. “This could be coupled with the creation of EU-wide Open Innovation Labs, which could be built further on the structure of the Digital Innovation Hub network,” it continues. 

There are also calls to encourage public sector uptake of AI, such as by fostering digitalisation by transforming public data into a digital format; providing data literacy education to government agencies; creating European large annotated public non-personal databases for “high quality AI”; and funding and facilitating the development of AI tools that can assist in detecting biases and undue prejudice in governmental decision-making.

Another chunk of the report covers recommendations to try to bolster AI research in Europe — such as strengthening and creating additional Centres of Excellence which address strategic research topics and become “a European level multiplier for a specific AI topic”.

Investment in AI infrastructures, such as distributed clusters and edge computing, large RAM and fast networks, and a network of testing facilities and sandboxes is also urged; along with support for an EU-wide data repository “through common annotation and standardisation” — to work against data siloing, as well as trusted data spaces for specific sectors such as healthcare, automotive and agri-food.

The push by the HLEG to accelerate uptake of AI has drawn some criticism, with digital rights group Access Now’s European policy manager, Fanny Hidvegi, writing that: “What we need now is not more AI uptake across all sectors in Europe, but rather clarity on safeguards, red lines, and enforcement mechanisms to ensure that the automated decision making systems — and AI more broadly — developed and deployed in Europe respect human rights.”

Other ideas in the HLEG’s report include developing and implementing a European curriculum for AI; and monitoring and restricting the development of automated lethal weapons — including technologies such as cyber attack tools which are not “actual weapons” but which the group points out “can have lethal consequences if deployed”.

The HLEG further suggests EU policymakers refrain from giving AI systems or robots legal personhood, writing: “We believe this to be fundamentally inconsistent with the principle of human agency, accountability and responsibility, and to pose a significant moral hazard.”

The report can be downloaded in full here.

EU opens formal antitrust probe of Broadcom and seeks interim order

The European Commission has opened a formal investigation into US chipmaker Broadcom, which it suspects of restricting competition via a number of exclusivity practices in markets where it holds a leading position, such as for systems-on-a-chip, front-end chips and WiFi chipsets.

Earlier this year press reports suggested US authorities were broadening their own antitrust probe of the company.

The FTC opened its investigation into Broadcom back in January 2018.

Commenting in a press release announcing the antitrust action against the chipmaker, the EU’s antitrust chief Margrethe Vestager said: “TV set-top boxes and modems are part of our daily lives, for both work and for leisure. We suspect that Broadcom, a major supplier of components for these devices, has put in place contractual restrictions to exclude its competitors from the market. This would prevent Broadcom’s customers and, ultimately, final consumers from reaping the benefits of choice and innovation. We also intend to order Broadcom to halt its behaviour while our investigation proceeds, to avoid any risk of serious and irreparable harm to competition.”

The Commission has issued a formal statement of objections in which it sets out its preliminary conclusions and explains its reasons for seeking interim measures, saying (emphasis its) it believes that:

  • Broadcom is likely to hold a dominant position in various markets for the supply of systems-on-a-chip for TV set-top boxes and modems
  • certain agreements between Broadcom and seven of its main customers manufacturing TV set-top boxes and modems contain exclusivity provisions that may result in those customers purchasing systems-on-a-chip, front-end chips and WiFi chipsets exclusively or almost exclusively from Broadcom
  • the provisions contained in these agreements may affect competition and stifle innovation in these markets, to the detriment of consumers

The formal investigation could take several years to conclude and the Commission notes that the outcome is not prejudiced by preliminary findings nor any interim measures.

“The Commission has gathered information indicating that Broadcom may be implementing a range of exclusionary practices in relation to these products,” it writes. “These practices may include (i) setting exclusive purchasing obligations, (ii) granting rebates or other advantages conditioned on exclusivity or minimum purchase requirements, (iii) product bundling, (iv) abusive IP-related strategies and (v) deliberately degrading interoperability between Broadcom products and other products.

“As a result of concerns relating to these alleged practices by Broadcom, the Commission has decided to open a formal investigation.”

The Commission says it wants to impose interim measures to prevent the suspected anti-competitive behaviour from damaging the market “irreparably” — i.e. before a regulatory intervention could issue a corrective sanction, assuming it ends up deciding such action is necessary after the investigation has run its course.

Its assessment of the case found the alleged competition concerns to be “of a serious nature and that Broadcom’s conduct may result in the elimination or marginalisation of competitors before the end of proceedings” — allowing it to meet the threshold for ordering interim measures under EU law.

The Commission says it has informed the chipmaker and the competition authorities of EU Member States that it has opened proceedings and of its intention to impose interim measures.

It’s not clear at this stage when such interim measures could be applied — with anything from several weeks to many months being possible.

Broadcom could also seek to appeal against them.

We’ve reached out to the company for comment. 

In recent years the semiconductor supplier has walked away from a proposed hostile takeover of mobile chipmaker Qualcomm after it was blocked by the Trump administration. It went on to shell out $18.9BN in cash to pick up IT management software and solutions provider, CA Technologies — in what looked like a bid to diversify its offerings.

UK law review eyes abusive trends like deepfaked porn and cyber flashing

The UK government has announced the next phase of a review of the law around the making and sharing of non-consensual intimate images, with ministers saying they want to ensure it keeps pace with evolving digital tech trends.

The review is being initiated in response to concerns that abusive and offensive communications are on the rise, as a result of it becoming easier to create and distribute sexual images of people online without their permission.

Among the issues the Law Commission will consider are so-called ‘revenge porn’, where intimate images of a person are shared without their consent; deepfaked porn, which refers to superimposing a real photograph of a person’s face onto a pornographic image or video without their consent; and cyber flashing, the unpleasant practice of sending unsolicited sexual images to a person’s phone by exploiting technologies such as Bluetooth that allow for proximity-based file sharing.

On the latter practice, the screengrab below is of one of two unsolicited messages I received as pop-ups on my phone in the space of a few seconds while waiting at a UK airport gate — and before I’d had a chance to locate the iOS master setting that actually nixes Bluetooth.

On iOS, even without accepting the AirDrop the cyberflasher is still able to send an unsolicited placeholder image with their request.
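For anyone else hunting for that setting: at the time of writing, iOS lets you restrict AirDrop receiving to ‘Contacts Only’ — or switch it off entirely — via Settings > General > AirDrop, or by long-pressing the network settings card in Control Center.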

Safe to say, this example is at the tamer end of what tends to be involved. More often it’s actual dick pics fired at people’s phones, not a parrot-friendly silicone substitute…

[Screengrab: cyber flashing via AirDrop]

A patchwork of UK laws already covers at least some of the offensive and abusive communications in question, such as the offence of voyeurism under the Sexual Offences Act 2003, which criminalises certain non-consensual photography taken for sexual gratification — and carries a two-year maximum prison sentence (with the possibility that a perpetrator may be required to be listed on the sexual offender register); while revenge porn was made a criminal offence under section 33 of the Criminal Justice and Courts Act 2015.

But the government says that while it feels the law in this area is “robust”, it is keen not to be seen as complacent — hence continuing to keep it under review.

It will also hold a public consultation to help assess whether changes in the law are required.

The Law Commission published Phase 1 of their review of Abusive and Offensive Online Communications on November 1 last year — a scoping report setting out the current criminal law which applies.

The second phase, announced today, will consider the non-consensual taking and sharing of intimate images specifically — and look at possible recommendations for reform. Though it will not report for two years so any changes to the law are likely to take several years to make it onto the statute books.

Among specific issues the Law Commission will consider is whether anonymity should automatically be granted to victims of revenge porn.

Commenting in a statement, justice minister Paul Maynard said: “No one should have to suffer the immense distress of having intimate images taken or shared without consent. We are acting to make sure our laws keep pace with emerging technology and trends in these disturbing and humiliating crimes.”

Maynard added that the review builds on recent changes to toughen UK laws around revenge porn and to outlaw ‘upskirting’ in English law; aka the degrading practice of taking intimate photographs of others without consent.

“Too many young people are falling victim to co-ordinated abuse online or the trauma of having their private sexual images shared. That’s not the online world I want our children to grow up in,” added the secretary of state for digital issues, Jeremy Wright, in another supporting statement.

“We’ve already set out world-leading plans to put a new duty of care on online platforms towards their users, overseen by an independent regulator with teeth. This Review will ensure that the current law is fit for purpose as we deliver our commitment to make the UK the safest place to be online.”

The Law Commission review will begin on July 1, 2019 and report back to the government in summer 2021.

Terms of Reference will be published on the Law Commission’s website in due course.

Facebook’s searchable political ads archive is now global

Facebook has announced it’s rolled out a basic layer of political ads transparency globally, more than a year after launching the publicly searchable ads archive in the US.

It is also expanding what it dubs “proactive enforcement” on political ads to countries where elections or regulations are approaching — starting with Ukraine, Singapore, Canada and Argentina.

“Beginning today, we will systematically detect and review ads in Ukraine and Canada through a combination of automated and human review,” it writes in a blog post setting out the latest developments. “In Singapore and Argentina, we will begin enforcement within the next few months. We also plan to roll out the Ad Library Report in both of those countries after enforcement is in place.

“The Ad Library Report will allow you to track and download aggregate spend data across advertisers and regions.”

Facebook is still not enforcing identity checks on political advertisers in the vast majority of markets where it operates. Nor indeed monitoring whether political advertisers have included ‘paid for’ disclaimer labels — leaving the burden of policing how its ads platform is being used (and potentially misused) to concerned citizens, civic society and journalists.

The social network behemoth currently requires advertisers to get authorized and add disclaimers to political and issue-related ads in around 50 countries and territories — with around 140 other markets where it’s not enforcing identity checks or disclaimers.

“For all other countries included in today’s announcement, we will not be proactively detecting or reactively reviewing possible social issue, electoral or political ads at this time,” it confirms, before adding: “However, we strongly encourage advertisers in those countries to authorize and add the proper disclaimers, especially in a rapidly evolving regulatory landscape.”

“In all cases, it will be up to the advertiser to comply with any applicable electoral or advertising laws and regulations in the countries they want to run ads in. If we are made aware of an ad that is in violation of a law, we will act quickly to remove it. With these tools, regulators are now better positioned to consider how to protect elections with sensible regulations, which they are uniquely suited to do,” Facebook continues.

“In countries where we are not yet detecting or reviewing these types of ads, these tools provide their constituents with more information about who’s influencing their vote — and we suggest voters and local regulators hold these elected officials and influential groups accountable as well.”

In a related development it says it’s expanded access to its Ad Library API globally.

It also claims to have made improvements to the tool, which launched in March — but quickly attracted criticism from the research community for lacking basics like ad targeting criteria and engagement metrics, making it difficult for outsiders to quantify how Facebook’s platform is being used to influence elections.

A review of the API by Mozilla shortly after it launched slated Facebook for not providing researchers with the necessary data to study how political influence operations play out on its platform — with a group of sixty academics putting their names to an open letter saying the API does the opposite of what the company claims.

Facebook does not mention that criticism in today’s blog post. It has also provided little detail of the claimed “improvements” to the API — merely writing: “Since we expanded access in March, we’ve made improvements to our API so people can easily access ads from a given country and analyze specific advertisers. We’re also working on making it easier to programmatically access ad images, videos and recently served ads.”
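To give a flavour of what the expanded access looks like in practice, here’s a hedged sketch of pulling political ads from the Ad Library API, which is exposed via Facebook’s Graph API as an ads_archive endpoint. Parameter and field names reflect the documentation at the time of writing and may change; the access token placeholder is ours, and Facebook requires identity-verified accounts for access.

```python
# Illustrative sketch only: querying Facebook's Ad Library API
# (Graph API ads_archive endpoint) for UK political ads.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; requires verified access

resp = requests.get(
    "https://graph.facebook.com/v3.3/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['GB']",
        "search_terms": "election",
        "fields": "page_name,ad_creative_body,ad_delivery_start_time,spend,impressions",
        "limit": 25,
    },
)
resp.raise_for_status()

for ad in resp.json().get("data", []):
    # Note: spend and impressions come back as ranges, not exact figures --
    # one of the granularity limits researchers have complained about.
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))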

The other key election interference concern linked to Facebook’s platforms — and which the company also avoids mention of here — is how non-advertising content can be seeded and spread on its networks in a bid to influence political opinion.

In recent years Facebook has announced various discoveries of inauthentic behavior and/or fake accounts. Though it is under no regulatory obligations to disclose everything it finds, or indeed to find every fake.

Hence political ads are just the tip of the disinformation iceberg.

Telegram adds location-flavored extras and full group ownership transfers

Messaging platform Telegram has added a bunch of new location-based features via an update.

Users of the latest version of the app will find an ‘Add People Nearby’ setting which they can use to quickly exchange contact details without the need to type in digits.

Coupled with a prior update, which lets Telegram users control who can see their phone number, it looks like it’ll make it possible to open a chat channel with a new contact without having to hand over your actual phone number.

Also via the ‘Add People Nearby’ contacts setting, the update lets users surface nearby public chat groups — by displaying any open chat channels in their proximity.

The setting also includes an option to ‘Create a Local Group’ — which does what it says on the tin, allowing users to set up a chat in their locality.

“This update opens up a new world of location-based group chats for anything from conferences, to festivals, to stadiums, to campuses, to chatting with people hanging out in the same cafe,” Telegram suggests, re-upping an idea that’s clocked up more than its fair share of startup tech cycles over the years. As a feature within a fully fledged messaging platform it’s more likely to find a niche groove, say for hosting ephemeral stuff like conference scuttlebutt or party chatter.

Other features added in the update include the ability to transfer admin rights of any group chat to another user with two taps.

“Telegram apps now support transferring ownership rights from any groups and channels to other users,” it writes. “Grant full admin rights to your Chosen One to see the Transfer Ownership button.”

It’s not quite a self-destruct button but the ‘pass the ownership baton’ feature could come in handy for users living in repressive states with restrictions on freedom of expression — if, for example, it allows group chat/channel admins to stay one step ahead of state forces which may target them in a bid to close conversations down.

In such a scenario, there’s the added risk that a channel admin could be personally targeted by police to extract data on group messages and other members. So enabling quicker transfers of ownership may allow comms to be maintained despite state attempts to disrupt and interfere — even if the original admin needs to temporarily delete their Telegram account to protect its data from being accessed via their device.

However, like any tech tool there’s also the opposite risk; i.e. that police could force a channel admin to transfer ownership to a group member of their choosing and then take it over and close it down.

Other features landing in the latest Telegram app update include more controls over notifications; Siri shortcuts for users of the iOS app; and tweaks to the theme picker and icon options, also on iOS.

More in Telegram’s blog.

Facebook makes another push to shape and define its own oversight

Facebook’s head of global spin and policy, former UK deputy prime minister Nick Clegg, will give a speech later today providing more detail of the company’s plan to set up an ‘independent’ external oversight board to which people can appeal content decisions so that Facebook itself is not the sole entity making such decisions.

In the speech in Berlin, Clegg will apparently admit to Facebook having made mistakes. Albeit, it would be pretty awkward if he came on stage claiming Facebook is flawless and humanity needs to take a really long hard look at itself.

“I don’t think it’s in any way conceivable, and I don’t think it’s right, for private companies to set the rules of the road for something which is as profoundly important as how technology serves society,” Clegg told BBC Radio 4’s Today program this morning, discussing his talking points ahead of the speech. “In the end this is not something that big tech companies… can or should do on their own.

“I want to see… companies like Facebook play an increasingly mature role — not shunning regulation but advocating it in a sensible way.”

The idea of creating an oversight board for content moderation and appeals was previously floated by Facebook founder, Mark Zuckerberg. Though it raises way more questions than it resolves — not least how a board whose existence depends on the underlying commercial platform it is supposed to oversee can possibly be independent of that selfsame mothership; or how board appointees will be selected and recompensed; and who will choose the mix of individuals to ensure the board can reflect the full spectrum diversity of humanity that’s now using Facebook’s 2BN+ user global platform?

None of these questions were raised let alone addressed in this morning’s BBC Radio 4 interview with Clegg.

Asked by the interviewer whether Facebook will hand control of “some of these difficult decisions” to an outside body, Clegg said: “Absolutely. That’s exactly what it means. At the end of the day there is something quite uncomfortable about a private company making all these ethical adjudications on whether this bit of content stays up or this bit of content gets taken down.

“And in the really pivotal, difficult issues what we’re going to do — it’s analogous to a court — we’re setting up an independent oversight board where users and indeed Facebook will be able to refer to that board and say well what would you do? Would you take it down or keep it up? And then we will commit, right at the outset, to abide by whatever rulings that board makes.”

Speaking shortly afterwards on the same radio program, Damian Collins, who chairs a UK parliamentary committee that has called for Facebook to be investigated by the UK’s privacy and competition regulators, suggested the company is seeking to use self-serving self-regulation to evade wider responsibility for the problems its platform creates — arguing that what’s really needed are state-set broadcast-style regulations overseen by external bodies with statutory powers.

“They’re trying to pass on the responsibility,” he said of Facebook’s oversight board. “What they’re saying to parliaments and governments is well you make things illegal and we’ll obey your laws but other than that don’t expect us to exercise any judgement about how people use our services.

“We need a level of regulation beyond that as well. Ultimately we need — just as we have in broadcasting — statutory regulation based on principles that we set, and an investigatory regulator that’s got the power to go in and investigate, which, under this board that Facebook is going to set up, this will still largely be dependent on Facebook agreeing what data and information it shares, setting the parameters for investigations. Where we need external bodies with statutory powers to be able to do this.”

Clegg’s speech later today is also slated to spin the idea that Facebook is suffering unfairly from a wider “techlash”.

Asked about that during the interview, the Facebook PR seized the opportunity to argue that if Western society imposes too stringent regulations on platforms and their use of personal data there’s a risk of “throw[ing] the baby out with the bathwater”, with Clegg smoothly reaching for the usual big tech talking points — claiming innovation would be “almost impossible” if there’s not enough of a data free-for-all, and the West risks being dominated by China, rather than friendly US giants.

By that logic we’re in a rights race to the bottom — thanks to the proliferation of technology-enabled global surveillance infrastructure, such as the one operated by Facebook’s business.

Clegg tried to pass all that off as merely ‘communications as usual’, making no reference to the scale of the pervasive personal data capture that Facebook’s business model depends upon, and instead arguing its business should be regulated in the same way society regulates “other forms of communication”. Funnily enough, though, your phone isn’t designed to record what you say the moment you plug it in…

“People plot crimes on telephones, they exchange emails that are designed to hurt people. If you hold up any mirror to humanity you will always see everything that is both beautiful and grotesque about human nature,” Clegg argued, seeking to manage expectations vis-a-vis what regulating Facebook should mean. “Our job — and this is where Facebook has a heavy responsibility and where we have to work in partnership with governments — is to minimize the bad and to maximize the good.”

He also said Facebook supports “new rules of the road” to ensure a “level playing field” for regulations related to privacy; election rules; the boundaries of hate speech vs free speech; and data portability — making a push to flatten regulatory variation which is often, of course, based on societal, cultural and historical differences, as well as reflecting regional democratic priorities.

It’s not at all clear how any of that nuance would or could be factored into Facebook’s preferred universal global ‘moral’ code — which it’s here, via Clegg (a former European politician), leaning on regional governments to accept.

Instead of societies setting the rules they choose for platforms like Facebook, Facebook’s lobbying muscle is being flexed to make the case for a single generalized set of ‘standards’ which won’t overly get in the way of how it monetizes people’s data.

And if we don’t agree to its ‘Western’ style surveillance, the threat is we’ll be at the mercy of even lower Chinese standards…

“You’ve got this battle really for tech dominance between the United States and China,” said Clegg, reheating Zuckerberg’s senate pitch last year when the Facebook founder urged a trade-off of privacy rights to allow Western companies to process people’s facial biometrics so as not to fall behind China. “In China there’s no compunction about how data is used, there’s no worry about privacy legislation, data protection and so on — we should not emulate what the Chinese are doing but we should keep our ability in Europe and North America to innovate and to use data proportionately and innovat[iv]ely.

“Otherwise if we deprive ourselves of that ability I can predict that within a relatively short period of time we will have tech domination from a country with wholly different sets of values to those that are shared in this country and elsewhere.”

What’s rather more likely is the emergence of discrete Internets where regions set their own standards — and indeed we’re already seeing signs of splinternets emerging.

Clegg even briefly brought this up — though it’s not clear why (and he avoided this point entirely) Europeans should fear the emergence of a regional digital ecosystem that bakes respect for human rights into digital technologies.

With European privacy rules also now setting global standards by influencing policy discussions elsewhere — including the US — Facebook’s nightmare is that higher standards than it wants to offer Internet users will become the new Western norm.

Collins made short work of Clegg’s techlash point, pointing out that if Facebook wants to win back users’ and society’s trust it should stop acting like it has everything to hide and actually accept public scrutiny.

“They’ve done this to themselves,” he said. “If they want redemption, if they want to try and wipe the slate clean for Mark Zuckerberg he should open himself up more. He should be prepared to answer more questions publicly about the data that they gather, whether other companies like Cambridge Analytica had access to it, the nature of the problem of disinformation on the platform. Instead they are incredibly defensive, incredibly secretive a lot of the time. And it arouses suspicion.

“I think people were quite surprised to discover the lengths to which people go to gather data about us — even people who don’t even use Facebook. And that’s what’s made them suspicious. So they have to put their own house in order if they want to end this.”

Last year Collins’ DCMS committee repeatedly asked Zuckerberg to testify to its enquiry into online disinformation — and was repeatedly snubbed…

Collins also debunked an attempt by Clegg to claim there’s no evidence of any Russian meddling on Facebook’s platform targeting the UK’s 2016 EU referendum — pointing out that Facebook previously admitted to a small amount of Russian ad spending that did target the EU referendum, before making the wider point that it’s very difficult for anyone outside Facebook to know how its platform gets used/misused; ads are just the tip of the political disinformation iceberg.

“It’s very difficult to investigate externally, because the key factors — like the use of tools like groups on Facebook, the use of inauthentic fake accounts boosting Russian content, there have been studies showing that’s still going on and was going on during the [EU] parliamentary elections, there’s been no proper audit done during the referendum, and in fact when we first went to Facebook and said there’s evidence of what was going on in America in 2016, did this happen during the referendum as well, they said to us well we won’t look unless you can prove it happened,” he said.

“There’s certainly evidence of suspicious Russian activity during the referendum and elsewhere,” Collins added.

We asked Facebook for Clegg’s talking points for today’s speech but the company declined to share more detail ahead of time.