Jamaica’s JamCOVID pulled offline after third security lapse exposed travelers’ data

Jamaica’s JamCOVID app and website were taken offline late on Thursday following a third security lapse, which exposed quarantine orders for more than half a million travelers to the island.

JamCOVID was set up last year to help the government process travelers arriving on the island. Quarantine orders are issued by the Jamaican Ministry of Health and instruct travelers to stay in their accommodation for two weeks to prevent the spread of COVID-19.

These orders contain the traveler’s name and the address where they are ordered to stay.

But a security researcher told TechCrunch that the quarantine orders were publicly accessible from the JamCOVID website because they were not protected with a password. Although the files were accessible from anyone’s web browser, the researcher asked not to be named for fear of legal repercussions from the Jamaican government.

More than 500,000 quarantine orders were exposed, some dating back to March 2020.

TechCrunch shared these details with the Jamaica Gleaner, which was first to report on the security lapse after the news outlet verified the data spillage with local cybersecurity experts.

Amber Group, which was contracted to build and maintain the JamCOVID coronavirus dashboard and immigration service, pulled the service offline a short time after TechCrunch and the Jamaica Gleaner contacted the company on Thursday evening. JamCOVID’s website was replaced with a holding page that said the site was “under maintenance.” At the time of publication, the site had returned.

Amber Group’s chief executive Dushyant Savadia did not return a request for comment.

Matthew Samuda, a minister in Jamaica’s Ministry of National Security, also did not respond to a request for comment or our questions — including whether the Jamaican government plans to continue its contract or relationship with Amber Group.

This is the third security lapse involving JamCOVID in the past two weeks.

Last week, Amber Group secured an exposed cloud storage server hosted on Amazon Web Services that was left open and public, despite containing more than 70,000 negative COVID-19 lab results and over 425,000 immigration documents authorizing travel to the island. Savadia said in response that there were “no further vulnerabilities” with the app. Days later, the company fixed a second security lapse after leaving a file containing private keys and passwords for the service on the JamCOVID server.

The Jamaican government has repeatedly defended Amber Group, which says it provided the JamCOVID technology to the government “for free.” Amber Group’s Savadia has previously been quoted as saying that the company built the service in “three days.”

In a statement on Thursday, Jamaica’s prime minister Andrew Holness said JamCOVID “continues to be a critical element” of the country’s immigration process and that the government was “accelerating” efforts to migrate the JamCOVID database — though specifics were not given.

An earlier version of this report misspelled the name of the Jamaica Gleaner newspaper. We regret the error.

Following backlash, WhatsApp to roll out in-app banner to better explain its privacy update

Last month, Facebook-owned WhatsApp announced it would delay enforcement of its new privacy terms, following a backlash from confused users which later led to a legal challenge in India and various regulatory investigations. WhatsApp users had misinterpreted the privacy updates as an indication that the app would begin sharing more data — including their private messages — with Facebook. Today, the company is sharing the next steps it’s taking to try to rectify the issue and clarify that’s not the case.

The mishandling of the privacy update on WhatsApp’s part led to widespread confusion and misinformation. In reality, WhatsApp had been sharing some information about its users with Facebook since 2016, following its acquisition by Facebook.

But the backlash is a solid indication of how much user trust Facebook has since squandered. People immediately suspected the worst, and millions fled to alternative messaging apps, like Signal and Telegram, as a result.

Following the outcry, WhatsApp attempted to explain that the privacy update was actually focused on optional business features on the app, which allow a business to see the content of messages between it and the end user, and which give the business permission to use that information for its own marketing purposes, including advertising on Facebook. WhatsApp also said it labels conversations with businesses that are using hosting services from Facebook to manage their chats with customers, so users are aware.

In the weeks since the debacle, WhatsApp says it spent time gathering user feedback and listening to concerns from people in various countries. The company found that users wanted assurance that WhatsApp was not reading their private messages or listening to their conversations, and that their communications were end-to-end encrypted. Users also said they wanted to know that WhatsApp wasn’t keeping logs of who they were messaging or sharing contact lists with Facebook.

These latter concerns seem valid, given that Facebook recently made its messaging systems across Facebook, Messenger and Instagram interoperable. One has to wonder when similar integrations will make their way to WhatsApp.

Today, WhatsApp says it will roll out new communications to users about the privacy update, which follows the Status update it offered back in January aimed at clarifying points of confusion.

In a few weeks, WhatsApp will begin to roll out a small in-app banner that will ask users to re-review the privacy policies — an approach the company says users prefer over the full-screen pop-up alert it displayed before.

When users tap the banner to review the update, they’ll be shown a deeper summary of the changes, including added details about how WhatsApp works with Facebook. The changes stress that WhatsApp’s update doesn’t impact the privacy of users’ conversations, and reiterate the information about the optional business features.

Eventually, WhatsApp will begin to remind users to review and accept its updates to keep using WhatsApp. According to its prior announcement, it won’t be enforcing the new policy until May 15.

Users will still need to be aware that their communications with businesses are not as secure as their private messages. This impacts a growing number of WhatsApp users, 175 million of whom now communicate with businesses on the app, WhatsApp said in October.

In today’s blog post about the changes, WhatsApp also took a big swipe at rival messaging apps that used the confusion over the privacy update to draw in WhatsApp’s fleeing users by touting their own apps’ privacy.

“We’ve seen some of our competitors try to get away with claiming they can’t see people’s messages – if an app doesn’t offer end-to-end encryption by default that means they can read your messages,” WhatsApp’s blog post read.

This seems to be a comment directed specifically at Telegram, which often touts its “heavily encrypted” messaging app as a more private alternative. But Telegram doesn’t offer end-to-end encryption by default, as apps like WhatsApp and Signal do. It uses “transport layer” encryption that protects the connection from the user to the server, a Wired article citing cybersecurity professionals explained in January. When users want an end-to-end encrypted experience for their one-on-one chats, they can enable the “secret chats” feature instead. (And this feature isn’t even available for group chats.)

In addition, WhatsApp fought back against the characterization that it’s somehow less safe because it has some limited data on users.

“Other apps say they’re better because they know even less information than WhatsApp. We believe people are looking for apps to be both reliable and safe, even if that requires WhatsApp having some limited data,” the post read. “We strive to be thoughtful on the decisions we make and we’ll continue to develop new ways of meeting these responsibilities with less information, not more,” it noted.

Jamaica’s immigration website exposed thousands of travelers’ data

A security lapse by a Jamaican government contractor has exposed immigration records and COVID-19 test results for hundreds of thousands of travelers who visited the island over the past year.

The Jamaican government contracted Amber Group to build the JamCOVID19 website and app, which the government uses to publish daily coronavirus figures and which allows residents to self-report their symptoms. The contractor also built the website to pre-approve travel applications to visit the island during the pandemic, a process that requires travelers from high-risk countries, including the United States, to upload a negative COVID-19 test result before they board their flight.

But a cloud storage server storing those uploaded documents was left unprotected and without a password, and was publicly spilling out files onto the open web.

Many of the victims whose information was found on the exposed server are Americans.

The data is now secure after TechCrunch contacted Amber Group’s chief executive Dushyant Savadia, who did not comment when reached prior to publication.

The storage server, hosted on Amazon Web Services, was set to public. It’s not known how long the data was exposed, but the server contained more than 70,000 negative COVID-19 lab results, over 425,000 immigration documents authorizing travel to the island — which included the traveler’s name, date of birth and passport number — and over 250,000 quarantine orders dating back to June 2020, when Jamaica reopened its borders to visitors after the pandemic’s first wave. The server also contained more than 440,000 images of travelers’ signatures.

Two U.S. travelers whose lab results were among the exposed data told TechCrunch that they uploaded their COVID-19 results through the Visit Jamaica website before their travel. Once lab results are processed, travelers receive a travel authorization that they must present before boarding their flight.

Both of these documents, as well as quarantine orders that require visitors to shelter in place and several passports, were on the exposed storage server.

Travelers staying outside Jamaica’s so-called “resilient corridor,” a zone that covers a large portion of the island’s population, are told to install an app built by Amber Group that tracks their location, which is monitored by the Ministry of Health to ensure visitors stay within the corridor. The app also requires travelers to record short “check-in” videos with a daily code sent by the government, along with their name and any symptoms.

The server exposed more than 1.1 million of those daily check-in videos.

An airport information flyer given to travelers arriving in Jamaica. Travelers may be required to install the JamCOVID19 app to allow the government to monitor their location and to require video check-ins. (Image: Jamaican government)

The server also contained dozens of daily timestamped spreadsheets named “PICA,” likely a reference to Jamaica’s Passport, Immigration and Citizenship Agency, but these files were restricted by access permissions. The permissions on the rest of the storage server, however, were set so that anyone had full control of the files inside, allowing them to be downloaded or deleted altogether. (TechCrunch did neither, as doing so would be unlawful.)
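For readers curious how this kind of exposure is typically found, the sketch below shows one way to audit a cloud storage bucket’s access control list for public grants using the AWS SDK for Python (boto3). It assumes the storage in question is an S3-style bucket, which the reporting does not confirm, and the bucket name is hypothetical; this is an illustration of the general misconfiguration, not a description of Amber Group’s actual setup.

    # Illustrative sketch: check an S3 bucket's ACL for grants to public groups.
    # The bucket name is hypothetical; this is not Amber Group's configuration.
    import boto3

    PUBLIC_GROUPS = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    def public_grants(bucket_name: str):
        """Return any ACL grants that expose the bucket beyond its owner."""
        s3 = boto3.client("s3")
        acl = s3.get_bucket_acl(Bucket=bucket_name)
        return [
            grant for grant in acl["Grants"]
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS
        ]

    if __name__ == "__main__":
        for grant in public_grants("example-travel-documents-bucket"):
            # "FULL_CONTROL" granted to AllUsers means anyone can list the bucket
            # and add, overwrite or delete its objects.
            print(grant["Grantee"].get("URI"), grant["Permission"])

A “READ” grant to the AllUsers group is enough to let anyone list a bucket’s contents, while “WRITE” or “FULL_CONTROL” also lets strangers overwrite or delete objects, which matches the behavior described above.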

Stephen Davidson, a spokesperson for the Jamaican Ministry of Health, did not comment when reached, or say if the government planned to inform travelers of the security lapse.

Savadia founded Amber Group in 2015 and soon launched its vehicle-tracking system, Amber Connect.

According to one report, Amber’s Savadia said the company developed JamCOVID19 “within three days” and made it available to the Jamaican government in large part for free. The contractor is billing other countries, including Grenada and the British Virgin Islands, for similar implementations, and is said to be looking for other government customers outside the Caribbean.

Savadia would not say what measures his company put in place to protect the data of paying governments.

Jamaica has recorded at least 19,300 coronavirus cases on the island to date, and more than 370 deaths.


TikTok hit with consumer, child safety and privacy complaints in Europe

TikTok is facing a fresh round of regulatory scrutiny in Europe, where consumer protection groups have filed a series of coordinated complaints alleging multiple breaches of EU law.

The European Consumer Organisation (BEUC) has lodged a complaint against the video sharing site with the European Commission and the bloc’s network of consumer protection authorities, while consumer organisations in 15 countries have alerted their national authorities and urged them to investigate the social media giant’s conduct, BEUC said today.

The complaints include claims of unfair terms, including in relation to copyright and TikTok’s virtual currency; concerns around the type of content children are being exposed to on the platform; and accusations of misleading data processing and privacy practices.

Details of the alleged breaches are set out in two reports associated with the complaints: One covering issues with TikTok’s approach to consumer protection, and another focused on data protection and privacy.

Child safety

On child safety, the report accuses TikTok of failing to protect children and teenagers from hidden advertising and “potentially harmful” content on its platform.

“TikTok’s marketing offers to companies who want to advertise on the app contributes to the proliferation of hidden marketing. Users are for instance triggered to participate in branded hashtag challenges where they are encouraged to create content of specific products. As popular influencers are often the starting point of such challenges the commercial intent is usually masked for users. TikTok is also potentially failing to conduct due diligence when it comes to protecting children from inappropriate content such as videos showing suggestive content which are just a few scrolls away,” the BEUC writes in a press release.

TikTok has already faced a regulatory intervention in Italy this year in response to child safety concerns — in that instance after the death of a 10-year-old girl in the country. Local media had reported that the child died of asphyxiation after participating in a ‘black out’ challenge on TikTok — triggering an emergency intervention by Italy’s data protection authority (DPA).

Soon afterwards, TikTok agreed to re-verify the age of every user in Italy via an age gate, although the check merely asks the user to input a date of birth to confirm their age, so it seems trivially easy to circumvent.

In the BEUC’s report, the consumer rights group draws attention to TikTok’s flimsy age gate, writing that: “In practice, it is very easy for underage users to register on the platform as the age verification process is very loose and only self-declaratory.”

And while it notes TikTok’s privacy policy claims the service is “not directed at children under the age of 13,” the report cites a number of studies that found heavy use of TikTok by children under 13 — with BEUC suggesting that children in fact make up “a very big part” of TikTok’s user base.

From the report:

In France, 45% of children below 13 have indicated using the app. In the United Kingdom, a 2020 study from the Office for Telecommunications (OFCOM) revealed that 50% of children between eight and 15 upload videos on TikTok at least weekly. In Czech Republic, a 2019 study found out that TikTok is very popular among children aged 11-12. In Norway, a news article reported that 32% of children aged 10-11 used TikTok in 2019. In the United States, The New York Times revealed that more than one-third of daily TikTok users are 14 or younger, and many videos seem to come from children who are below 13. The fact that many underage users are active on the platform does not come as a surprise as recent studies have shown that, on average, a majority of children owns mobile phones earlier and earlier (for example, by the age of seven in the UK).

A recent EU-backed study also found that age checks on popular social media platforms are “basically ineffective” as they can be circumvented by children of all ages simply by lying about their age.

Terms of use

Another issue raised by the complaints centers on a claim of unfair terms of use — including in relation to copyright, with BEUC noting that TikTok’s T&Cs give it an “irrevocable right to use, distribute and reproduce the videos published by users, without remuneration”.

A virtual currency feature it offers is also highlighted as problematic in consumer rights terms.

TikTok lets users purchase digital coins which they can use to buy virtual gifts for other users (which can in turn be converted by the user back to fiat). But BEUC says its ‘Virtual Item Policy’ contains “unfair terms and misleading practices” — pointing to how it claims an “absolute right” to modify the exchange rate between the coins and the gifts, thereby “potentially skewing the financial transaction in its own favour”.

While TikTok displays the price to buy packs of its virtual coins, there is no clarity over the process it applies for the conversion of these gifts into in-app diamonds (which the gift-receiving user can choose to redeem for actual money, remitted to them via PayPal or another third-party payment processing tool).

“The amount of the final monetary compensation that is ultimately earned by the content provider remains obscure,” BEUC writes in the report, adding: “According to TikTok, the compensation is calculated ‘based on various factors including the number of diamonds that the user has accrued’… TikTok does not indicate how much the app retains when content providers decide to convert their diamonds into cash.”

“Playful at a first glance, TikTok’s Virtual Item Policy is highly problematic from the point of view of consumer rights,” it adds.

Privacy

On data protection and privacy, the social media platform is also accused of a whole litany of “misleading” practices — including (again) in relation to children. Here the complaint accuses TikTok of failing to clearly inform users about what personal data is collected, for what purpose, and for what legal reason — as is required under Europe’s General Data Protection Regulation (GDPR).

Other issues flagged in the report include the lack of any opt-out from personal data being processed for advertising (aka ‘forced consent’ — something tech giants like Facebook and Google have also been accused of); the lack of explicit consent for processing sensitive personal data (which has special protections under the GDPR); and an absence of security and data protection by design, among other issues.

We’ve reached out to the Irish Data Protection Commission (DPC), which is TikTok’s lead supervisor for data protection issues in the EU, about the complaint and will update this report with any response.

France’s data watchdog, the CNIL, already opened an investigation into TikTok last year — prior to the company shifting its regional legal base to Ireland (meaning data protection complaints must now be funnelled through the Irish DPC via the GDPR’s one-stop-shop mechanism — adding to that regulator’s backlog).

Jef Ausloos, a postdoc researcher who worked on the legal analysis of TikTok’s privacy policy for the data protection complaints, told TechCrunch that researchers had been ready to file data protection complaints a year ago — at a time when the platform had no age check at all — but the company suddenly made major changes to how it operates.

Ausloos suggests such sudden massive shifts are a deliberate tactic to evade regulatory scrutiny of data-exploiting practices — as “constant flux” can have the effect of derailing and/or resetting research work being undertaken to build a case for enforcement — also pointing out that resource-strapped regulators may be reluctant to bring cases against companies ‘after the fact’ (i.e. if they’ve since changed a practice).

The upshot of such fast-iterating breaches is that repeat violations of the law may never face enforcement.

It’s also true that a frequent refrain of platforms at the point of being called out (or called up) on specific business practices is to claim they’ve since changed how they operate — seeking to use that as a defence to limit the impact of regulatory enforcement or indeed a legal ruling. (Aka: ‘Move fast and break regulatory accountability’.)

Nonetheless, Ausloos says the complainants’ hope now is that the two years of documentation undertaken on the TikTok case will help DPAs build cases.

Commenting on the complaints in a statement, Monique Goyens, DG of BEUC, said: “In just a few years, TikTok has become one of the most popular social media apps with millions of users across Europe. But TikTok is letting its users down by breaching their rights on a massive scale. We have discovered a whole series of consumer rights infringements and therefore filed a complaint against TikTok.

“Children love TikTok but the company fails to keep them protected. We do not want our youngest ones to be exposed to pervasive hidden advertising and unknowingly turned into billboards when they are just trying to have fun.

“Together with our members — consumer groups from across Europe — we urge authorities to take swift action. They must act now to make sure TikTok is a place where consumers, especially children, can enjoy themselves without being deprived of their rights.”

Reached for comment on the complaints, a TikTok spokesperson told us:

Keeping our community safe, especially our younger users, and complying with the laws where we operate are responsibilities we take incredibly seriously. Every day we work hard to protect our community which is why we have taken a range of major steps, including making all accounts belonging to users under 16 private by default. We’ve also developed an in-app summary of our Privacy Policy with vocabulary and a tone of voice that makes it easier for teens to understand our approach to privacy. We’re always open to hearing how we can improve, and we have contacted BEUC as we would welcome a meeting to listen to their concerns.

Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban the use of facial recognition software for its police department, growing the list of major cities that have implemented local restrictions on the controversial technology. After an ordinance on the ban was approved earlier this week, 13 members of the city council voted in favor of the ban, with no opposition.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund the city’s police department in June before backing away from that commitment in favor of more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates are concerned that AI-powered facial recognition systems disproportionately target communities of color, and the technology has also been demonstrated to have technical shortcomings in discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition but also forbid private companies from deploying the technology in public spaces. Previous legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems, though didn’t include a similar provision for private companies.

Sweden’s data watchdog slaps police for unlawful use of Clearview AI

Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.

As part of the enforcement the police must conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.

The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.

Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.

Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.

“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.

The IMY’s full decision can be found here (in Swedish).

“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.

The fine (SEK2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question — which the watchdog notes would be SEK10M. (The authority’s decision notes that not knowing the rules or having inadequate procedures in place is not a reason to reduce a penalty fee, so it’s not entirely clear why the police avoided a bigger fine.)

The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.

The IMY said it investigated the police’s use of the controversial technology following reports in local media.

Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.

European Union data protection law puts a high bar on the processing of special category data, such as biometrics.

Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.

Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.

The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.

However the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).

It did not order deletion of the photos themselves. Nor did it issue a pan-EU order banning the collection of any European resident’s photos, as it could have done and as the European privacy campaign group noyb had been pushing for.

noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and to delete any data it holds on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.

European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence — with draft legislation expected to be put forward this year although the Commission intends it to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).

Earlier this month, Canadian privacy authorities ruled the controversial facial recognition company’s practices illegal — warning they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.

Clearview said it had stopped providing its tech to Canadian customers last summer.

It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.

Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.

 

EU’s top privacy regulator urges ban on surveillance-based ad targeting

The European Union’s lead data protection supervisor has recommended that a ban on targeted advertising based on tracking Internet users’ digital activity be included in a major reform of digital services rules which aims to increase operators’ accountability, among other key goals.

The European Data Protection Supervisor (EDPS), Wojciech Wiewiorówski, made the call for a ban on surveillance-based targeted ads in reference to the Commission’s Digital Services Act (DSA) — following a request for consultation from EU lawmakers.

The DSA legislative proposal was introduced in December, alongside the Digital Markets Act (DMA) — kicking off the EU’s (often lengthy) co-legislative process which involves debate and negotiations in the European Parliament and Council on amendments before any final text can be agreed for approval. This means battle lines are being drawn to try to influence the final shape of the biggest overhaul to pan-EU digital rules for decades — with everything to play for.

The intervention by Europe’s lead data protection supervisor calling for a ban on targeted ads is a powerful pre-emptive push against attempts to water down legislative protections for consumer interests.

The Commission had not gone so far in its proposal — but big tech lobbyists are certainly pushing in the opposite direction so the EDPS taking a strong line here looks important.

In his opinion on the DSA the EDPS writes that “additional safeguards” are needed to supplement risk mitigation measures proposed by the Commission — arguing that “certain activities in the context of online platforms present increasing risks not only for the rights of individuals, but for society as a whole”.

Online advertising, recommender systems and content moderation are the areas the EDPS is particularly concerned about.

“Given the multitude of risks associated with online targeted advertising, the EDPS urges the co-legislators to consider additional rules going beyond transparency,” he goes on. “Such measures should include a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking, as well as restrictions in relation to the categories of data that can be processed for targeting purposes and the categories of data that may be disclosed to advertisers or third parties to enable or facilitate targeted advertising.”

It’s the latest regional salvo aimed at mass-surveillance-based targeted ads after the European Parliament called for tighter rules back in October — when it suggested EU lawmakers should consider a phased-in ban.

Again, though, the EDPS is going a bit further here in actually calling for one. (Facebook’s Nick Clegg will be clutching his pearls.)

More recently, the CEO of European publishing giant Axel Springer, a longtime co-conspirator of adtech interests, went public with a (rather protectionist-flavored) rant about US-based data-mining tech platforms turning citizens into “the marionettes of capitalist monopolies” — calling for EU lawmakers to extend regional privacy rules by prohibiting platforms from storing personal data and using it for commercial gain at all.

Apple CEO, Tim Cook, also took to the virtual stage of a (usually) Brussels based conference last month to urge Europe to double down on enforcement of its flagship General Data Protection Regulation (GDPR).

In the speech Cook warned that the adtech ‘data complex’ is fuelling a social catastrophe by driving the spread of disinformation as it works to profit off of mass manipulation. He went on to urge lawmakers on both sides of the pond to “send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated”. So it’s not just European companies (and institutions) calling for pro-privacy reform of adtech.

The iPhone maker is preparing to introduce stricter limits on tracking on its smartphones by making apps ask users for permission to track, instead of just grabbing their data — a move that’s naturally raised the hackles of the adtech sector, which relies on mass surveillance to power ‘relevant’ ads.

Hence the adtech industry has resorted to crying ‘antitrust‘ as a tactic to push competition regulators to block platform-level moves against its consentless surveillance. And on that front it’s notable that the EDPS’ opinion on the DMA, which proposes extra rules for intermediating platforms with the most market power, reiterates the vital links between competition, consumer protection and data protection law — saying these three are “inextricably linked policy areas in the context of the online platform economy”; and that there “should be a relationship of complementarity, not a relationship where one area replaces or enters into friction with another”.

Wiewiorówski also takes aim at recommender systems in his DSA opinion — saying these should not be based on profiling by default to ensure compliance with regional data protection rules (where privacy by design and default is supposed to be the legal default).

Here too he calls for additional measures to beef up the Commission’s legislative proposal — with the aim of “further promot[ing] transparency and user control”.

This is necessary because such systems have “significant impact”, the EDPS argues.

The role of content recommendation engines in driving Internet users towards hateful and extremist points of view has long been a subject of public scrutiny. Back in 2017, for example, UK parliamentarians grilled a number of tech companies on the topic — raising concerns that AI-driven tools, engineered to maximize platform profit by increasing user engagement, risked automating radicalization, causing damage not just to the individuals who become hooked on the hateful views the algorithms feed them but also cascading knock-on harms for all of us as societal cohesion is eaten away in the name of keeping the eyeballs busy.

Yet years on little information is available on how such algorithmic recommender systems work because the private companies that operate and profit off these AIs shield the workings as proprietary business secrets.

The Commission’s DSA proposal takes aim at this sort of secrecy as a bar to accountability — with its push for transparency obligations. The proposed obligations (in the initial draft) include requirements for platforms to provide “meaningful” criteria used to target ads; and explain the “main parameters” of their recommender algorithms; as well as requirements to foreground user controls (including at least one “nonprofiling” option).

However the EDPS wants regional lawmakers to go further in the service of protecting individuals from exploitation (and society as a whole from the toxic byproducts that flow from an industry based on harvesting personal data to manipulate people).

On content moderation, Wiewiorówski’s opinion stresses that this should “take place in accordance with the rule of law”. The Commission’s draft, by contrast, largely leaves it to platforms themselves to interpret the law.

“Given the already endemic monitoring of individuals’ behaviour, particularly in the context of online platforms, the DSA should delineate when efforts to combat ‘illegal content’ legitimise the use of automated means to detect, identify and address illegal content,” he writes, in what looks like a tacit recognition of recent CJEU jurisprudence in this area.

“Profiling for purposes of content moderation should be prohibited unless the provider can demonstrate that such measures are strictly necessary to address the systemic risks explicitly identified by the DSA,” he adds.

The EDPS has also suggested minimum interoperability requirements for very large platforms, and for those designated as ‘gatekeepers’ (under the DMA), and urges lawmakers to work to promote the development of technical standards to help with this at the European level.

On the DMA, he also urges amendments to ensure the proposal “complements the GDPR effectively”, as he puts it, calling for “increasing protection for the fundamental rights and freedoms of the persons concerned, and avoiding frictions with current data protection rules”.

Among the EDPS’ specific recommendations are: That the DMA makes it clear that gatekeeper platforms must provide users with easier and more accessible consent management; clarification to the scope of data portability envisaged in the draft; and rewording of a provision that requires gatekeepers to provide other businesses with access to aggregated user data — again with an eye on ensuring “full consistency with the GDPR”.

The opinion also raises the issue of the need for “effective anonymisation” — with the EDPS calling for “re-identification tests when sharing query, click and view data in relation to free and paid search generated by end users on online search engines of the gatekeeper”.

ePrivacy reform emerges from stasis

Wiewiorówski’s contributions to shaping incoming platform regulations come on the same day that the European Council has finally reached agreement on its negotiating position for a long-delayed EU reform effort around existing ePrivacy rules.

In a press release announcing the development, the Commission writes that Member States agreed on a negotiating mandate for revised rules on the protection of privacy and confidentiality in the use of electronic communications services.

“These updated ‘ePrivacy’ rules will define cases in which service providers are allowed to process electronic communications data or have access to data stored on end-users’ devices,” it writes, adding: “Today’s agreement allows the Portuguese presidency to start talks with the European Parliament on the final text.”

Reform of the ePrivacy directive has been stalled for years as conflicting interests locked horns — putting paid to the (prior) Commission’s hopes that the whole effort could be done and dusted in 2018. (The original ePrivacy reform proposal came out in January 2017; four years later the Council has finally settled on its arguing mandate.)

The fact that the GDPR was passed first appears to have upped the stakes for data-hungry ePrivacy lobbyists — in both the adtech and telco space (the latter having a keen interest in removing existing regulatory barriers on comms data so that it can exploit the vast troves of user data which Internet giants running rival messaging and VoIP services have long been able to tap).

There’s a concerted effort to try to use ePrivacy to undo consumer protections baked into GDPR — including attempts to water down protections provided for sensitive personal data. So the stage is set for an ugly rights battle as negotiations kick off with the European Parliament.

Metadata and cookie consent rules are also bound up with ePrivacy so there’s all sorts of messy and contested issues on the table here.

Digital rights advocacy group Access Now summed up the ePrivacy development by slamming the Council for “hugely” missing the mark.

“The reform is supposed to strengthen privacy rights in the EU [but] States poked so many holes into the proposal that it now looks like French Gruyère,” said Estelle Massé, senior policy analyst at Access Now, in a statement. “The text adopted today is below par when compared to the Parliament’s text and previous versions of government positions. We lost forward-looking provisions for the protection of privacy while several surveillance measures have been added.”

The group said it will be pushing to restore requirements for service providers to protect online users’ privacy by default and for the establishment of clear rules against online tracking beyond cookies, among other policy preferences.

The Council, meanwhile, appears to be advocating for a highly dilute (and so probably useless) flavor of ‘do not track’ — by suggesting users should be able to give consent to the use of “certain types of cookies by whitelisting one or several providers in their browser settings”, per the Commission.

“Software providers will be encouraged to make it easy for users to set up and amend whitelists on their browsers and withdraw consent at any moment,” it adds in its press release.

Clearly the devil will be in the detail of the Council’s position there. The European Parliament has, by contrast, previously clearly endorsed a “legally binding and enforceable” Do Not Track mechanism for ePrivacy so the stage is set for clashes.

Encryption is another likely bone of ePrivacy contention.

As security and privacy researcher, Dr Lukasz Olejnik, noted back in mid 2017, the parliament strongly backed end-to-end encryption as a means of protecting the confidentiality of comms data — and wrote that Member States should not impose any obligations on service providers to weaken strong encryption.

So it’s notable that the Council does not have much to say about e2e encryption — at least in the PR version of its public position. (A line in this that runs: “As a main rule, electronic communications data will be confidential. Any interference, including listening to, monitoring and processing of data by anyone other than the end-user will be prohibited, except when permitted by the ePrivacy regulation” is hardly reassuring, either.)

It certainly looks like a worrying omission given recent efforts at the Council level to advocate for ‘lawful’ access to encrypted data. Digital and human rights groups will be buckling up for a fight.

Minneapolis police used geofence warrant at George Floyd protests

Police in Minneapolis obtained a search warrant ordering Google to turn over sets of account data on vandals accused of sparking violence in the wake of the police killing of George Floyd last year, TechCrunch has learned.

The death of Floyd, a Black man killed by a white police officer in May 2020, prompted thousands to peacefully protest across the city. But violence soon erupted, which police say began with a masked man seen in a viral video using an umbrella to smash windows of an auto-parts store in south Minneapolis. The AutoZone store was the first among dozens of buildings across the city set on fire in the days following.

The search warrant compelled Google to provide police with the account data on anyone who was “within the geographical region” of the AutoZone store when the violence began on May 27, two days after Floyd’s death.

These so-called geofence warrants — or reverse-location warrants — are frequently directed at Google in large part because the search and advertising giant collects and stores vast databases of geolocation data on billions of account holders who have “location history” turned on. Geofence warrants allow police to cast a digital dragnet over a crime scene and ask tech companies for records on anyone who entered a geographic area at a particular time. But critics say these warrants are unconstitutional as they also gather the account information on innocent passers-by.

TechCrunch learned of the search warrant from Minneapolis resident Said Abdullahi, who received an email from Google stating that his account information was subject to the warrant, and would be given to the police.

But Abdullahi said he had no part in the violence and was only in the area to video the protests when the violence began at the AutoZone store.

The warrant said police sought “anonymized” account data from Google on any phone or device that was close to the AutoZone store and the parking lot, where dozens of people had gathered, between 5:20pm and 5:40pm (CST) on May 27.
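To make the mechanics of that request concrete, here is a minimal sketch of the kind of time-and-place filter a geofence demand boils down to, written in Python. The coordinates, radius and record layout are hypothetical and purely illustrative; they are not drawn from the warrant, and Google’s internal systems are not public.

    # Illustrative sketch of a geofence-style query over location records.
    # Coordinates, radius and record layout are hypothetical, not from the warrant.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class LocationRecord:
        device_id: str   # typically pseudonymized until police ask to "unmask" it
        lat: float
        lon: float
        timestamp: datetime

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6_371_000 * 2 * asin(sqrt(a))

    CENTRAL = timezone(timedelta(hours=-5))     # U.S. Central offset, used for illustration
    CENTER_LAT, CENTER_LON = 44.948, -93.237    # hypothetical point in south Minneapolis
    RADIUS_M = 150                              # hypothetical geofence radius
    START = datetime(2020, 5, 27, 17, 20, tzinfo=CENTRAL)
    END = datetime(2020, 5, 27, 17, 40, tzinfo=CENTRAL)

    def in_geofence(records):
        """Return every record that falls inside the area during the 20-minute window."""
        return [
            r for r in records
            if START <= r.timestamp <= END
            and haversine_m(r.lat, r.lon, CENTER_LAT, CENTER_LON) <= RADIUS_M
        ]

Anyone whose device produced a matching record, whether protester, journalist or passer-by, ends up in the result set, which is why critics describe these warrants as a dragnet.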

When reached, Minneapolis police spokesperson John Elder, citing an ongoing investigation, would not answer specific questions about the warrant, including for what reason the warrant was issued.

According to a police affidavit, police said the protests had been relatively peaceful until the afternoon of May 27, when a masked umbrella-wielding man began smashing the windows of the AutoZone store, located across the street from a Minneapolis police precinct where hundreds of protesters had gathered. Several videos show protesters confronting the masked man.

Police said they spent significant resources on trying to identify the so-called “Umbrella Man,” who they say was the catalyst for widespread violence across the city.

“This was the first fire that set off a string of fires and looting throughout the precinct and the rest of the city,” the affidavit read. At least two people were killed in the unrest. (Erika Christensen, a Minneapolis police investigator who filed the affidavit, was not made available for an interview.)

Police accuse the Umbrella Man of creating an “atmosphere of hostility and tension” whose sole aim was to “incite violence.” (TechCrunch is not linking to the affidavit as the police would not say if the suspect had been charged with a crime.) The affidavit also links the suspect to a white supremacist group called the Aryan Cowboys, and to an incident weeks later where a Muslim woman was harassed.

Multiple videos of the protests around the time listed on the warrant appear to line up with the window-smashing incident. Other videos of the scene at the time of the warrant show hundreds of other people in the vicinity. Police were positioned on rooftops and used tear gas and rubber bullets to control the crowds.

Law enforcement agencies across the U.S. are increasingly relying on geofence warrants to solve crimes where a suspect is not known. Police have defended the use of these warrants because they can help identify potential suspects who entered a certain geographic region where a crime was committed. The warrants typically ask for “anonymized information,” but allow police to go back and narrow their requests to potential suspects of interest.

When allowed by law, Google notifies account holders when law enforcement demands access to their data. According to a court filing in 2019, Google said the number of geofence warrants it received went up by 1,500% between 2017 and 2018, and by more than 500% between 2018 and 2019, but it has yet to provide specific numbers of warrants.

Google reportedly received over 180 geofence warrants in a single week in 2019. When asked about more recent figures, a Google spokesperson declined to comment on the record.

Civil liberties groups have criticized the use of dragnet search warrants. The American Civil Liberties Union said that geofence warrants “circumvent constitutional checks on police surveillance.” One district court in Virginia said geofence warrants violated the constitution because the majority of individuals whose data is collected will have “nothing whatsoever” to do with the crimes under investigation.

Reports in the past year have implicated people whose only connection to a crime is simply being nearby.

NBC News reported the case of one Gainesville, Fla. resident, who was told by Google that his account information would be given to police investigating a burglary. But the resident was able to prove that he had no connection to the burglary, thanks to an app on his phone that tracked his activity.

In 2019, Google gave federal agents investigating several arson attacks in Milwaukee, Wis., close to 1,500 user records in response to a geofence warrant, thought to be one of the largest grabs of account data to date.

But lawmakers are beginning to push back. New York state lawmakers introduced a bill last year that would, if passed, ban geofence warrants across the state, citing the risk of police targeting protesters. Rep. Kelly Armstrong (R-ND) grilled Google chief executive Sundar Pichai at a House Judiciary subcommittee hearing last year. “People would be terrified to know that law enforcement could grab general warrants and get everyone’s information everywhere,” said Armstrong.

Abdullahi told TechCrunch that he had several videos documenting the protests on the day and that he has retained a lawyer to try to prevent Google from giving his account information to Minneapolis police.

“Police assumed everybody in that area that day is guilty,” he said. “If one person did something criminal, [the police] should not go after the whole block of people.”


This Week in Apps: Warnings over privacy changes, Parler CEO fired, Clubhouse goes mainstream

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry is as hot as ever, with a record 218 billion downloads and $143 billion in global consumer spend in 2020.

Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV. Currently, the average American watches 3.7 hours of live TV per day but spends four hours per day on their mobile devices.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

This week, we’re taking a look at Clubhouse’s breakout moment — or moments, to be fair. Also, the App Store’s rules were updated, Parler’s CEO was fired and other companies began raising their own red flags about Apple’s privacy changes.

Top Stories

Clubhouse goes mainstream

The invite-only audio platform has been on a roll, and has already hosted big names in tech, media and entertainment, including Drake, Estelle, Tiffany Haddish, Kevin Hart, Jared Leto, Ashton Kutcher, and others in the Silicon Valley tech scene. But this week was a breakout if there ever was one, when on Monday, Tesla and SpaceX founder Elon Musk showed up on Clubhouse, hitting the app’s limit of 5,000 people in a single room. With others unable to get in, fans livestreamed the event to other platforms like YouTube, live-tweeted, and set up breakout rooms for the overflow. Musk was later joined by “Vlad The Stock Impaler,” aka Robinhood CEO Vlad Tenev, who of course talked about the GameStop saga — and was then interviewed by Musk himself.

Then on Thursday, Clubhouse saw yet another famous guest: Facebook CEO Mark Zuckerberg, who casually went by “Zuck23” when he joined “The Good Time Show” talk show on the app, as Musk had done before him.

The format of the social media network allowed the execs to informally address a wide audience of listeners about whatever they wanted to talk about — in Musk’s case, that was space travel, crypto, AI and vaccines, among other things. Zuckerberg, meanwhile, used the time to talk about AR/VR and its future in business and remote work. (If you thought Zoom meetings were bad…).

(And who knows, maybe he wanted to give the app a try for other reasons, too.)

There is something unsettling about this whole arrangement, of course. Softball questions lobbed at billionaires, journalists blocked from rooms, and so on — all on an app financed by a VC firm, Andreessen Horowitz (a16z), that’s said to be interested in cutting out the media middleman, to “go direct” instead. (Not coincidentally, the room inviting the big-name guests was co-hosted by a16z’s Andreessen and its new GP, Sriram Krishnan, who is described as having an “optimistic” outlook — perhaps a valuable commodity when much of the media does not.)

Regardless of the machinations behind the scenes that made it happen, it’s hard to ignore an app where the biggest names in tech show up to just chat — or even interview one another.

Where is all this going? That’s a valid question to raise. Some have described Clubhouse as the late-night talk show equivalent: a place where interviews aren’t about asking the hard questions, but rather about whatever the guest came there to say or promote. And that’s fine, of course — as long as everyone understands that when big names arrive, they may do so with an agenda, even when it seems they’re just there for fun.

In any event, Clubhouse proved this week it’s no longer a buzzy newcomer. For now, at least, it’s decidedly in the game.

Companies (besides Facebook) warn investors about Apple’s privacy changes

So far, it may have seemed as if the only two businesses taking real issue with Apple’s privacy changes, including the coming changes to IDFA, were Facebook and Google. Facebook took out full-page ads and weighed lawsuits. Google delayed iOS app updates while it figured out privacy labels. But as other companies reported their fourth-quarter earnings, IDFA impacts were also topping their list of concerns.

In his prepared remarks, Snap CEO Evan Spiegel alerted investors to the potential disruption to Snap’s ad business, saying that the privacy changes “will present another risk of interruption” to advertising demand. He noted, too, that it was unclear what the long-term consequences of those changes might be. Unity, meanwhile, attached a number to it: IDFA changes would reduce its revenue by about 3%, or $30 million, in 2021.

It may be that no one really knows how damaging the IDFA update will be until it rolls out. These are only estimates based on tests and assumptions about user behavior. Plus, there are reports poking holes in Facebook’s claims that small businesses would suffer a 60% cut in revenues. Those figures are surely overstated, Harvard Business Review wrote, saying Facebook had cherry-picked and amplified its numbers.

Nevertheless, Facebook is already testing ways to encourage users to accept its tracking. The company on Monday began showing some users prompts that explained why it wants to track and asked users to opt in so Facebook can “provide a better ads experience.” Users could tap “allow” or “don’t allow” in response to the prompt.

Apple updates its App Store Rules

Apple said these were moderate changes — just clarifications and tweaks that had been under way for some time. For example, the new App Store Guidelines now include instructions about how developers should implement the new App Tracking Transparency rules. Another section details how developers can now file an appeal upon an app review rejection.

Other changes are more semantic in nature — changing person-to-person experiences to “services” to broaden the scope, for example, or to clarify how gaming companies can offer a single subscription that works across a variety of standalone apps.

To see what actually changed, go here.

Parler CEO fired

Parler — the app banned from the App Store, Google Play and Amazon AWS, and dropped by Okta, among other providers — fired its CEO, John Matze, this week after struggling to bring the app back online. According to reports from NPR and others, the firing followed a disagreement with conservative donor Rebekah Mercer, who controls Parler’s board. Matze says he argued the app would need to crack down on domestic terrorism and groups that incite violence in order to succeed, but claims he was met with silence. Parler, meanwhile, said those statements were misleading.

After Parler’s rapid deplatforming following the events at the Capitol, other alternative social networks climbed up the charts to take its place. But these apps have not proven themselves to have much staying power. Instead, the top charts are once again filled with the usual: Facebook, Instagram, YouTube, TikTok, Snapchat, etc.

Maybe it’s actually no fun yelling about the world when no one is around to challenge you or fight back?

Weekly News

Apps with earnings news

  • Snap beats with revenue of $911 million in Q4, up 62% YoY, versus the $857.4 million expected. Snap’s DAUs climbed 22% YoY to 265 million. But the stock dropped on a weak Q1 forecast.
  • PayPal reported stronger-than-expected, pandemic-fueled earnings with EPS up 25.58% YoY to $1.08, beating the estimate of $1.00. Revenue was $6.12 billion, up 23.28% YoY, which beat the estimate of $6.09 billion. The company added 16 million net new accounts, bringing the total to 277 million.
  • Related, Venmo’s TPV grew 60% year over year to $47 billion, and its customer base grew 32%, ending just shy of 70 million accounts. The company expects its revenues will approach $900 million in 2021.
  • Spotify reports revenue growth of 17% YoY to €2.17 billion; 345 million MAUs, up 27% YoY; and paid subscribers up 24% to 155 million.

Platforms: Apple

  • Code in the iOS 14.5 beta also suggests new financial features like Apple Card Family for multiuser accounts and a new framework, FinHealth, that gives automated suggestions to improve your finances.
  • Apple rolls out new and updated design resources for building apps across its platforms, including iOS 14, iPadOS 14, tvOS 14 and macOS Big Sur. On mobile, the new design resources for Sketch have been rebuilt to support color variables, and include numerous minor improvements and bug fixes.
  • Apple’s services saw a significant outage this week that impacted, among other things, the App Store, leading to blank pages, broken search results and more.
  • Certain U.S. states will allow casino, sports and lottery games from March 1, 2021. Google already announced a change to Play Store policies, to allow these. In Apple’s updated App Store Guidelines, out this week, it also added “gambling” as one of the app categories that had to be submitted by a legal entity — an indication that it was opening its doors, too.
  • App Store growth hit a six-month high in January 2021, Morgan Stanley said, citing Sensor Tower data that indicated App Store net revenue grew 35% YoY in the month. In Japan and Germany, growth reached 60% and in the U.S. it was 42% YoY, due to pandemic impacts.
  • Some users are saying third-party apps have been crashing after syncing an iPad or iPhone with an M1 Mac.

Platforms: Google

  • Google is said to be exploring its own alternative to Apple’s new anti-tracking feature, which may seem counterintuitive, as Google is in the ads business. But according to a report from Bloomberg, the company is looking into a solution that’s “less stringent” than Apple’s. A softer approach from Google could keep Apple’s changes from becoming the de facto industry standard.

Gaming

  • YouTube launches Clips, a short-form video feature that lets users clip 5 to 60 seconds of a video and share it with others, similar to Twitch’s clips feature. The feature is in limited alpha testing.
  • Epic Games is urging Australia’s market regulator to take action against Apple for using its market power to force developers to pay a 30% commission on paid apps and IAP. Epic is suing Apple in the country, but wants the regulator to step in now.
  • In the U.S., a judge orders a 7-hour deposition from Tim Cook in the Epic vs. Apple lawsuit.
  • Google hasn’t killed its game streaming service Stadia yet, but it did announce this week that it’s stepping away from first-party games. The company also announced that Stadia Games and Entertainment head Jade Raymond was leaving the company, while the existing staff would be moved to other projects.
  • Amazon Luna’s game streaming service expands to more Android devices, including Pixel 3, 3XL, 3a, 3a XL; Samsung S9, S9+, Note 9. The service was already available on new Pixel, Samsung and OnePlus devices, among others.

Augmented Reality

  • Color of Change launches The Pedestal Project, an AR experience on Instagram that allows users to place statues of racial justice leaders on the empty pedestals where confederate leaders once stood (or anywhere else). At launch, there are three featured leaders included: Rep. John Lewis, Alicia Garza and Chelsea Miller.
  • TikTok partners with WPP to give WPP agencies access to ad products and APIs that are still in development, including new AR formats.

Security & Privacy

  • YouTube adds its App Store privacy label, detailing the data it uses to track users. This includes your physical address, email address, phone number, user and device ID, as well as data linked to you for third-party advertising and for app functionality, product personalization and more.

Fintech

  • Venmo is turning into a financial super app with additions that include crypto, budgeting, saving and shopping with Honey — all of which are planned for this year.
  • Robinhood CEO Vlad Tenev has been asked to testify before the House Financial Services Committee on February 18, over the GameStop debacle. The app still hasn’t recovered its reputation — Play Store reviews have gone back down to 1.0 stars, even after a purge.
  • Reddit has its best-ever month in terms of installs, thanks to the “meme stocks” frenzy driven by users of the r/wallstreetbets forum. The app gained 6.6 million downloads in January 2021, up 43% month-over-month, growing its total installs to date to 122.5 million across iOS and Android.
  • Cash App also this week had to halt buying meme stocks like GameStop, AMC, and Nokia after being notified by its clearing broker of increased capital requirements.
  • Robinhood raises another $2.4 billion from shareholders after its $1 billion raise from investors to help it ride out the meme stock trading frenzy.
  • Joompay, a European rival to Venmo and TransferWise, has now launched in the market after obtaining a Luxembourg Electronic Money Institution (EMI) license.

Social & Photos

Image Credits: Snap

  • Snapchat’s TikTok rival “Spotlight” now has 100 million MAUs, the company said during earnings, and is receiving an average of 175,000 video submissions per day. But Snap is heavily fueling this growth by paying out over $1 million per day to the top-performing videos — everyone wants to be TikTok, it seems.
  • TikTok says it will now downrank “unsubstantiated” claims that fact checkers can’t verify. The app will also place a warning banner over these videos and discourage users from sharing them with pop-up messages.
  • TikTok owner ByteDance sues Tencent over alleged monopoly practices. The suit claims that Tencent’s WeChat and QQ messaging services won’t allow links to Douyin, the Chinese version of TikTok.
  • Instagram confirms it’s developing a “Vertical Stories” feed that will allow users to flip through users’ stories vertically, similar to TikTok.
  • IRL, an events website and mobile app, has topped 10 million monthly users as it revamps itself into a social network for events, now including user profiles, group events, and chat.
  • Instagram bans around 400 accounts linked to hacker forum OGUsers, where members buy and sell stolen social media accounts. The hackers used SIM-swapping attacks, harassment and extortion to take over the accounts of “OG” Instagram users who have coveted short usernames or those with unique words. Twitter and TikTok also took action to target OGUsers members, the companies confirmed.

  • Instagram adds “Recently Deleted,” a new feature that lets you review and recover deleted content. The company says it added protections to stop hackers from accessing your account to reach these items. Deleted stories that are not in your archive will stay in the folder for up to 24 hours. Everything else will be automatically deleted after 30 days.
  • Triller ditches its plans to do a Super Bowl ad and will now host a fan contest instead. The app has struggled to present a challenge to TikTok in the U.S. market.
  • Daily Twitter usage remained consistent despite Trump ban, according to data from Apptopia.

Image Credits: Apptopia

Communication and Messaging

  • Element, a client for the federated chat protocol Matrix, was removed from the Play Store this week over abusive content. But Google made a mistake. Element is a third-party client, not the content’s host, and it had already removed the content in question under its own rules. For those unfamiliar, Matrix is an open network that offers both unencrypted public chatrooms and E2EE content. Eventually, the developer got a call from a Google VP who helped the app get reinstated. But the situation, which resulted in 24 hours of downtime, raised questions about how well app stores are prepared to moderate issues that crop up on decentralized platforms and services.
  • Clubhouse CEO Paul Davison confirmed the company will introduce a subscription tool that will allow creators to make money from their rooms.
  • Telegram, benefitting from the shift to private messaging and the WhatsApp backlash, became the most-downloaded app overall in January 2021 across both app stores combined, and topped Google Play on its own. On the App Store, it was No. 4, while TikTok was No. 1.

Image Credits: Sensor Tower

Streaming Services and Media

  • Apple-owned Shazam adds iOS 14 widgets for the first time, allowing you to quickly ID any song that’s playing and see your history.
  • Spotify adds new playlists, podcasts and takeovers for Black History Month, and creates a new “Black History Is Now” hub in the app.
  • The U.S. version of the Discovery+ mobile app gets more first-month downloads (3.3 million) than HBO Max did (3.1 million), Apptopia found. But it’s not an apples-to-apples comparison, as existing HBO NOW users were upgraded to Max.

Health & Fitness

  • The Google Fit app on Pixel devices is getting an update that will allow your phone’s camera to measure pulse and breathing rates.

Productivity

  • Microsoft rebrands its document scanner app Office Lens to Microsoft Lens and adds new features, including Image to Text, an Immersive Reader, a QR Code Scanner and the ability to scan up to 100 pages. Lens also now integrates with Teams, so users can record short videos to be sent through Teams chats. Uh, TikTok, but for documents, I guess?

Government & Policy

  • Myanmar’s military government orders telecoms to block Facebook until February 7, following a coup. The government, which seized power following an election, said the social network is contributing to instability in the country.
  • TikTok will recheck the age of every user in Italy, following an emergency order from the GPDP issued after the January 22 death of a 10-year-old girl who tried the “blackout challenge” she saw on the app. On February 9, every user will have to go through the TikTok age-gate again.

Funding and M&A

  • Uber buys alcohol delivery service Drizly for $1.1 billion. Drizly’s website and app let users order alcohol in markets across the U.S., though the service is often hampered by local liquor laws. Gross bookings were up 300% YoY ahead of the deal.
  • Vivino, a wine recommendation and marketplace app, raises a $155 million Series D led by Sweden’s Kinnevik. The app now has 50 million users and a data set of 1.5 billion photos of wine labels.
  • Mobile ad platform and games publisher AppLovin acquires Berlin-based mobile ad attribution company Adjust in what was reported as a $1 billion deal, though the actual figure is said to be lower. The deal comes at a time when the ad attribution market is being dramatically altered by Apple’s ATT. Mobile Dev Memo explains the deal will give AppLovin visibility into which games are driving conversions for Adjust customers, to the benefit of its own ad campaigns.
  • Latitude, a startup that uses AI to build storylines for games, raises $3.3 million in seed funding. Its first title is AI Dungeon, an open-ended text adventure game.
  • Chinese social gaming startup Guangzhou Quwan Network Technology raises $100 million Series B from Matrix Partners China and Orchid Asia Group Management. The company provides instant voice messaging, social gaming, esports and game distribution and operates voice chat app TT Voice, which has over 100 million users.
  • Consumer trading app Flink, a sort of Robinhood for the Mexican market, raises $12 million Series A led by Accel.
  • Commuting platform Hip, which offers both an online dashboard and mobile app, raises $12 million led by NFX and Magenta Venture Partners. The app works with bus and shuttle providers to plan routes for commuters and offers COVID-19 tracing services.
  • Bot MD, a Singapore-based app that offers doctors an AI chatbot for looking up important information, raises $5 million Series A led by Monk’s Hill Ventures. The funds will help the app to expand elsewhere in the Asia-Pacific region, including Indonesia, the Philippines, Malaysia and India.
  • Meditation and sleep app Expectful raises $3 million in seed funding for its app aimed at new mothers. The company plans to expand the app to become a broader wellness resource for hopeful, expecting and new parents.
  • Brightwheel, an app that allows preschools, daycare providers and camps to communicate with parents, raises $55 million in a round led by Addition, valuing the business at $600+ million. Laurene Powell Jobs’s Emerson Collective and Jeff Weiner’s Next Play Ventures also participated.
  • ELSA, a Google-backed language learning app co-founded in 2015 by Vietnamese entrepreneur Vu Van and engineer Xavier Anguera, raises $15 million in a round co-led by Vietnam Investments Group and SIG.
  • Financial super app Djamo gets Y Combinator backing for its solution for consumers in Francophone Africa.
  • Bumble’s IPO filing sets a price range that would raise up to $1B. The dating app maker aims to sell 34.5 million shares at $28 to $30 apiece, potentially valuing the business at $6.46B.

Downloads

Reese’s Book Club

Image Credits: Hello Sunshine Apps

Actress and producer Reese Witherspoon’s media company Hello Sunshine has launched an app for Reese’s Book Club — the book club that focuses on diverse voices where women are the center of their stories. The book club today has nearly 2 million Instagram followers and 38 book picks that made The New York Times bestseller list. Its books have also been adapted into film and TV projects, including Hulu’s “Little Fires Everywhere,” upcoming Amazon series “Daisy Jones and the Six,” Netflix’s “From Scratch,” and forthcoming film “Where the Crawdads Sing.”

The new app lets users keep track of the new monthly picks, browse past selections, join community discussions with fellow readers, hear from authors, compete for prizes and, soon, buy exclusive items that will help fund The Readership, a pay-it-forward platform aimed at amplifying diverse voices and promoting literacy. Those efforts may include installing book nooks in local communities and supporting indie booksellers.

The app is a free download on the App Store and Google Play.

Carrot Weather

Image Credits: Carrot Weather

Everyone’s favorite snarky weather app received a major overhaul toward the end of January, which includes a redesigned interface, new icons, tools to design the UI how you want it (an “interface maker”), new “secret locations” (a fun Easter egg) and more. The app has also switched to a vertical layout that fills the screen with information, which also includes smart cards that bubble up with weather info when it’s needed. Carrot Weather is also now a free download with subscriptions, instead of a paid app.

‘Orwellian’ AI lie detector project challenged in EU court

A legal challenge was heard today in Europe’s Court of Justice in relation to a controversial EU-funded research project using artificial intelligence for facial ‘lie detection’ with the aim of speeding up immigration checks.

The transparency lawsuit against the EU’s Research Executive Agency (REA), which oversees the bloc’s funding programs, was filed in March 2019 by Patrick Breyer, MEP of the Pirate Party Germany and a civil liberties activist — who has successfully sued the Commission before over a refusal to disclose documents.

He’s seeking the release of documents on the ethical evaluation, legal admissibility, marketing and results of the project. He is also hoping to establish the principle that publicly funded research must comply with EU fundamental rights, and to help avoid public money being wasted on AI ‘snake oil’ in the process.

“The EU keeps having dangerous surveillance and control technology developed, and will even fund weapons research in the future. I hope for a landmark ruling that will allow public scrutiny and debate on unethical publicly funded research in the service of private profit interests,” said Breyer in a statement following today’s hearing. “With my transparency lawsuit, I want the court to rule once and for all that taxpayers, scientists, media and Members of Parliament have a right to information on publicly funded research — especially in the case of pseudoscientific and Orwellian technology such as the ‘iBorderCtrl video lie detector’.”

The court has yet to set a decision date on the case but Breyer said the judges questioned the agency “intensively and critically for over an hour” — and revealed that documents relating to the AI technology involved, which have not been publicly disclosed but had been reviewed by the judges, contain information such as “ethnic characteristics”, raising plenty of questions.

The presiding judge went on to query whether it wouldn’t be in the interests of the EU research agency to demonstrate that it has nothing to hide by publishing more information about the controversial iBorderCtrl project, per Breyer.

AI ‘lie detection’

The research in question is controversial because the notion of an accurate lie detector machine remains science fiction, and with good reason: There’s no evidence of a ‘universal psychological signal’ for deceit.

Yet this AI-fuelled commercial R&D ‘experiment’ to build a video lie detector scored over €4.5M/$5.4M in EU research funding under the bloc’s Horizon 2020 scheme. The trials entailed testers being asked to respond to questions put to them by a virtual border guard while a webcam scanned their facial expressions, with the system seeking to detect what an official EC summary of the project describes as “biomarkers of deceit” in an effort to score the truthfulness of those expressions (yes, really🤦‍♀️).

The iBorderCtrl project ran between September 2016 and August 2019, with the funding spread between 13 private or for-profit entities across a number of Member States (including the UK, Poland, Greece and Hungary).

Public research reports the Commission said would be published last year, per a written response to Breyer’s questions challenging the lack of transparency, do not appear to have seen the light of day yet.

Back in 2019 The Intercept was able to test out the iBorderCtrl system for itself. The video lie detector falsely accused its reporter of lying, judging that she had given four false answers out of 16 and giving her an overall score of 48. A policeman who assessed the results said that score would have triggered a suggestion from the system that she be subjected to further checks (though she was not, as the system was never run for real during border tests).

The Intercept said it had to file a data access request — a right that’s established in EU law — in order to obtain a copy of the reporter’s results. Its report quoted Ray Bull, a professor of criminal investigation at the University of Derby, who described the iBorderCtrl project as “not credible” — given the lack of evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.

“They are deceiving themselves into thinking it will ever be substantially effective and they are wasting a lot of money. The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive,” Bull also told it.

The notion that AI can automagically predict human traits if you just pump in enough data is distressingly common — just look at recent attempts to revive phrenology by applying machine learning to glean ‘personality traits’ from face shape. So a face-scanning AI ‘lie detector’ sits in a long and ignoble anti-scientific ‘tradition’.

In the 21st century it’s frankly incredible that millions of euros of public money are being funnelled into rehashing terrible old ideas — before you even consider the ethical and legal blindspots inherent in the EU funding research that runs counter to fundamental rights set out in the EU’s charter. When you consider all the bad decisions involved in letting this fly it looks head-hangingly shameful.

The granting of funds to such a dubious application of AI also appears to ignore all the (good) research that has been done showing how data-driven technologies risk scaling bias and discrimination.

We can’t know for sure, though, because only very limited information has been released about how the consortia behind iBorderCtrl assessed ethics considerations in their experimental application — which is a core part of the legal complaint.

The challenge in front of the European Court of Justice in Luxembourg poses some very awkward questions for the Commission: Should the EU be pouring taxpayer cash into pseudoscientific ‘research’? Shouldn’t it be trying to fund actual science? And why does its flagship research program — the jewel in the EU crown — have so little public oversight?

The fact that a video lie detector made it through the EU’s ‘ethics self-assessment’ process, meanwhile, suggests the claimed ‘ethics checks’ aren’t worth a second glance.

“The decision on whether to accept [an R&D] application or not is taken by the REA after Member States representatives have taken a decision. So there is no public scrutiny, there is no involvement of parliament or NGOs. There is no [independent] ethics body that will screen all of those projects. The whole system is set up very badly,” says Breyer.

“Their argument is basically that the purpose of this R&D is not to contribute to science or to do something for public good or to contribute to EU policies but the purpose of these programs really is to support the industry — to develop stuff to sell. So it’s really supposed to be an economical program, the way it has been devised. And I think we really actually need a discussion about whether this is right, whether this should be so.”

“The EU’s about to regulate AI and here it is actually funding unethical and unlawful technologies,” he adds.

No external ethics oversight

Not only does it look hypocritical for the EU to be funding rights-hostile research but — critics contend — it’s a waste of public money that could be spent on genuinely useful research (be it for a security purpose or, more broadly, for the public good; and for furthering those ‘European values’ EU lawmakers love to refer to).

“What we need to know and understand is that research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that actually wastes money for other programs that would be really important and useful,” argues Breyer.

“For example in the security program you could maybe do some good in terms of police protective gear. Or maybe in terms of informing the population in terms of crime prevention. So you could do a lot of good if these means were used properly — and not on this dubious technology that will hopefully never be used.”

The latest incarnation of the EU’s flagship research and innovation program, which takes over from Horizon 2020, has a budget of ~€95.5BN for the 2021-2027 period. And driving digital transformation and developments in AI are among the EU’s stated research funding priorities. So the pot of money available for ‘experimental’ AI looks massive.

But who will be making sure that money isn’t wasted on algorithmic snake oil — and dangerous algorithmic snake oil in instances where the R&D runs so clearly counter to the EU’s own charter of fundamental human rights?

The European Commission declined multiple requests for spokespeople to talk about these issues but it did send some on-the-record points (below), and some background information regarding access to documents, which is a key part of the legal complaint.

Among the Commission’s on-the-record statements on ‘ethics in research’, it started with the claim that “ethics is given the highest priority in EU funded research”.

“All research and innovation activities carried out under Horizon 2020 must comply with ethical principles and relevant national, EU and international law, including the Charter of Fundamental Rights and the European Convention on Human Rights,” it also told us, adding: “All proposals undergo a specific ethics evaluation which verifies and contractually obliges the compliance of the research project with ethical rules and standards.”

It did not elaborate on how a ‘video lie detector’ could possibly comply with EU fundamental rights — such as the right to dignity, privacy, equality and non-discrimination.

And it’s worth noting that the European Data Protection Supervisor (EDPS) has raised concerns about misalignment between EU-funded scientific research and data protection law, writing in a preliminary opinion last year: “We recommend intensifying dialogue between data protection authorities and ethical review boards for a common understanding of which activities qualify as genuine research, EU codes of conduct for scientific research, closer alignment between EU research framework programmes and data protection standards, and the beginning of a debate on the circumstances in which access by researchers to data held by private companies can be based on public interest”.

On the iBorderCtrl project specifically the Commission told us that the project appointed an ethics advisor to oversee the implementation of the ethical aspects of research “in compliance with the initial ethics requirement”. “The advisor works in ways to ensure autonomy and independence from the consortium,” it claimed, without disclosing who the project’s (self-appointed) ethics advisor is.

“Ethics aspects are constantly monitored by the Commission/REA during the execution of the project through the revision of relevant deliverables and carefully analysed in cooperation with external independent experts during the technical review meetings linked to the end of the reporting periods,” it went on, adding that: “A satisfactory ethics check was conducted in March 2019.”

It did not provide any further details about this self-regulatory “ethics check”.

“The way how it works so far is basically some expert group that the Commission sets up with propose/call for tender,” says Breyer, discussing how the EU’s research program is structured. “It’s dominated by industry experts, it doesn’t have any members of parliament in there, it only has — I think — one civil society representative in it, so that’s falsely composed right from the start. Then it goes to the Research Executive Agency and the actual decision is taken by representatives of the Member States.

“The call [for research proposals] itself doesn’t sound so bad if you look it up — it’s very general — so the problem really was the specific proposal that they proposed in response to it. And these are not screened by independent experts, as far as I understand it. The issue of ethics is dealt with by self assessment. So basically the applicant is supposed to indicate whether there is a high ethical risk involved in the project or not. And only if they indicate so will experts — selected by the REA — do an ethics assessment.

“We don’t know who’s been selected, we don’t know their opinions — it’s also being kept secret — and if it turns out later that a project is unethical it’s not possible to revoke the grant.”

The hypocrisy charge comes in sharply here because the Commission is in the process of shaping risk-based rules for the application of AI. And EU lawmakers have been saying for years that artificial intelligence technologies need ‘guardrails’ to make sure they’re applied in line with regional values and rights.

Commission EVP Margrethe Vestager has talked about the need for rules to ensure artificial intelligence is “used ethically” and can “support human decisions and not undermine them”, for example.

Yet EU institutions are simultaneously splashing public funds on AI research that would clearly be unlawful if implemented in the region, and which civil society critics decry as obviously unethical given the lack of scientific basis underpinning ‘lie detection’.

In an FAQ section of the iBorderCtrl website, the commercial consortium behind the project concedes that real-world deployment of some of the technologies involved would not be covered by the existing EU legal framework — adding that this means “they could not be implemented without a democratic political decision establishing a legal basis”.

Or, put another way, such a system would be illegal to actually use for border checks in Europe without a change in the law. Yet European taxpayer funding was nonetheless ploughed in.

A spokesman for the EDPS declined to comment on Breyer’s case specifically but he confirmed that its preliminary opinion on scientific research and data protection is still relevant.

He also pointed to further related work which addresses a recent Commission push to encourage pan-EU health data sharing for research purposes — where the EDPS advises that data protection safeguards should be defined “at the outset” and also that a “thought through” legal basis should be established ahead of research taking place.

“The EDPS recommends paying special attention to the ethical use of data within the [health data sharing] framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation,” the EU’s chief data supervisor writes, adding that he’s “convinced that the success of the [health data sharing plan] will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights”.

tl;dr: Legal and ethical use of data must be the DNA of research efforts — not a check-box afterthought.

Unverifiable tech

In addition to a lack of independent ethics oversight of research projects that gain EU funding, there is — currently and worryingly for supposedly commercially minded research — no way for outsiders to independently verify (or, well, falsify) the technology involved.

In the case of the iBorderCtrl tech no meaningful data on the outcomes of the project has been made public and requests for data sought under freedom of information law have been blocked on commercial interest grounds.

Breyer has been trying without success to obtain information about the results of the project since it finished in 2019. The Guardian reported in detail on his fight back in December.

Under the legal framework wrapping EU research he says there’s only a very limited requirement to publish information on project outcomes — and only long after the fact. His hope is thus that the Court of Justice will agree ‘commercial interests’ can’t be used to over-broadly deny disclosure of information in the public interest.

“They basically argue there is no obligation to examine whether a project actually works so they have the right to fund research that doesn’t work,” he tells TechCrunch. “They also argue that basically it’s sufficient to exclude access if any publication of the information would damage the ability to sell the technology — and that’s an extremely wide interpretation of commercially sensitive information.

“What I would accept is excluding information that really contains business secrets like source code of software programs or internal calculations or the like. But that certainly shouldn’t cover, for example, if a project is labelled as unethical. It’s not a business secret but obviously it will harm their ability to sell it — but obviously that interpretation is just outrageously wide.”

“I’m hoping that this [legal action] will be a precedent to clarify that information on such unethical — and also unlawful if it were actually used or deployed — technologies, that the public right to know takes precedence over the commercial interests to sell the technology,” he adds. “They are saying we won’t release the information because doing so will diminish the chances of selling the technology. And so when I saw this then I said well it’s definitely worth going to court over because they will be treating all requests the same.”

Civil society organizations have also been thwarted in attempts to get detailed information about the iBorderCtrl project. The Intercept reported in 2019 that researchers at the Milan-based Hermes Center for Transparency and Digital Human Rights used freedom of information laws to obtain internal documents about the iBorderCtrl system, for example, but the hundreds of pages they got back were heavily redacted — with many completely blacked out.

“I’ve heard from [journalists] who have tried in vain to find out about other dubious research projects that they are massively withholding information. Even stuff like the ethics report or the legal assessment — that’s all stuff that doesn’t contain any commercial secrets, as such,” Breyer continues. “It doesn’t contain any source code, nor any sensitive information — they haven’t even released these partially.

“I find it outrageous that an EU authority [the REA] will actually say we don’t care what the interest is in this because as soon as it could diminish sales then we will withhold the information. I don’t think that’s acceptable, both in terms of taxpayers’ interests in knowing about what their money is being used for but also in terms of the scientific interest in being able to test/to verify these experiments on the so called ‘deception detection’ — which is very contested if it really works. And in order to verify or falsify it scientists of course need to have access to the specifics about these trials.

“Also democratically speaking if ever the legislator wants to decide on the introduction of such a system or even on the framing of these research programs we basically need to know the details — for example what was the number of false positives? How well does it really work? Does it have a discriminatory effect because it works less well on certain groups of people such as facial recognition technology. That’s all stuff that we really urgently need to know.”

Regarding access to documents related to EU-funded research the Commission referred us to Regulation no. 1049/2001 — which it said “lays down the general principles and limits” — though it added that “each case is analysed carefully and individually”.

However the Commission’s interpretation of the regulations governing the Horizon program appears to entirely exclude the application of freedom of information rules — at least in the iBorderCtrl project case.

Per Breyer, they limit public disclosure to a summary of the research findings, which can be published some three or four years after the completion of the project.

“You’ll see an essay of five or six pages in some scientific magazine about this project and of course you can’t use it to verify or falsify the technology,” he says. “You can’t see what exactly they’ve been doing — who they’ve been talking to. So this summary is pretty useless scientifically and to the public and democratically and it takes ages. So I hope that in the future we will get more insight and hopefully a public debate.”

The EU research program’s legal framework is secondary legislation. So Breyer’s argument is that a blanket clause about protecting ‘commercial interests’ should not be able to trump fundamental EU rights to transparency. But of course it will be up to the court to decide.

“I think I stand some good chance especially since transparency and access to information is actually a fundamental right in the EU — it’s in the EU charter of fundamental rights. And this Horizon legislation is only secondary legislation — they can’t deviate from the primary law. And they need to be interpreted in line with it,” he adds. “So I think the court will hopefully say that this is applicable and they will do some balancing in the context of the freedom of information which also protects commercial information but subject to prevailing public interests. So I think they will find a good compromise and hopefully better insight and more transparency.

“Maybe they’ll blacken out some parts of the document, redact some of it but certainly I hope that in principle we will get access to that. And thereby also make sure that in the future the Commission and the REA will have to hand over most of the stuff that’s been requested on this research. Because there’s a lot of dubious projects out there.”

A better system of research project oversight could start with the committee that decides on funding applications: rather than being composed mostly of industry and EU Member State representatives (who of course will always want EU cash to come to their region), it should also include parliamentary representatives, more civil society representatives and scientists, per Breyer.

“It should have independent participants and those should be the majority,” he says. “That would make sense to steer the research activities in the direction of public good, of compliance with our values, of useful research — because what we need to know and understand is research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that wastes money for other programs that would be really important and useful.”

He also points to a new EU research program being set up that’s focused on defence — under the same structure, lacking proper public scrutiny of funding decisions or information disclosure, noting: “They want to do this for defence as well. So that will be even about lethal technologies.”

To date the only disclosures around iBorderCtrl have been a few parts of the technical specifications of its system and some of a communications report, per Breyer, who notes that both were “heavily redacted”.

“They don’t say for example which border agencies they have introduced this system to, they don’t say which politicians they’ve been talking to,” he says. “The interesting thing actually is that part of this funding is also presenting the technology to border authorities in the EU and politicians. Which is very interesting because the Commission keeps saying look this is only research; it doesn’t matter really. But in actual fact they are already using the project to promote the technology and the sales of it. And even if this is never used at EU borders funding the development will mean that it could be used by other governments — it could be sold to China and Saudi Arabia and the like.

“And also the deception detection technology — the company that is marketing it [a Manchester-based company called Silent Talker Ltd] — is also offering it to insurance companies, or to be used on job interviews, or maybe if you apply for a loan at a bank. So this idea that an AI system would be able to detect lies risks being used in the private sector very broadly and since I’m saying that it doesn’t work at all and it’s basically a lottery lots of people risk having disadvantages from this dubious technology.”

“It’s quite outrageous that nobody prevents the EU from funding such ‘voodoo’ technology,” he adds.

The Commission told us that “The Intelligent Portable Border Control System” (aka iBorderCtrl) “explored new ideas on increasing efficiency, convenience and security of land border crossing”, and like all security research projects it was “aimed at testing new ideas and technologies to address security challenges”.

“iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications. Once research projects are over, it is up to Member States to decide whether they want to further research and/or develop solutions studied by the project,” it also said. 

It also pointed out that specific application of any future technology “will always have to respect EU and national law and safeguards, including on fundamental rights and the EU rules on the protection of personal data”.

However Breyer also calls foul on the Commission seeking to deflect public attention by claiming ‘it’s only R&D’ or that it’s not deciding on the use of any particular technology. “Of course factually it creates pressure on the legislator to agree to something that has been developed if it turns out to be useful or to work,” he argues. “And also even if it’s not used by the EU itself it will be sold somewhere else — and so I think the lack of scrutiny and ethical assessment of this research is really scandalous. Especially as they have repeatedly developed and researched surveillance technologies — including mass surveillance of public spaces.”

“They have projects on bulk data collection and processing of Internet data. The security program is very problematic because they do research into interferences with fundamental rights — with the right to privacy,” he goes on. “There are no limitations really in the program to rule out unethical methods of mass surveillance or the like. And not only are there no material limitations but also there is no institutional set-up to be able to exclude such projects right from the beginning. And then even once the programs have been devised and started they will even refuse to disclose access to them. And that’s really outrageous and as I said I hope the court will do some proper balancing and provide for more insight and then we can basically trigger a public debate on the design of these research schemes.”

Pointing again to the Commission’s plan to set up a defence R&D fund under the same industry-centric decision-making structure — with a “similarly deficient ethics appraisal mechanism” — he notes that while there are some limits on EU research being able to fund autonomous weapons, other areas could make bids for taxpayer cash — such as weapons of mass destruction and nuclear weapons.

“So this will be hugely problematic and will have the same issue of transparency, all the more of course,” he adds.

On transparency generally, the Commission told us it “always encourages projects to publicise as much as possible their results”. While, for iBorderCtrl specifically, it said more information about the project is available on the CORDIS website and the dedicated project website.

If you take the time to browse to the ‘publications’ page of the iBorderCtrl website you’ll find a number of “deliverables” — including an “ethics advisor”; the “ethics advisor’s first report”; an “ethics of profiling, the risk of stigmatization of individuals and mitigation plan”; and an “EU wide legal and ethical review report” — all of which are listed as “confidential”.