Facebook’s decision-review body to take “weeks” longer over Trump ban call

Facebook’s self-styled and handpicked ‘Oversight Board’ will make a decision on whether or not to overturn an indefinite suspension of the account of former president Donald Trump within “weeks”, it said in a brief update statement on the matter today.

The high profile case appears to have attracted major public interest, with the FOB tweeting that it’s received more than 9,000 responses so far to its earlier request for public feedback.

It added that its commitment to "carefully reviewing all comments", after an earlier extension of the deadline for feedback, is responsible for the longer case timeline.

The Board’s statement adds that it will provide more information “soon”.

Trump’s indefinite suspension from Facebook and Instagram was announced by Facebook founder Mark Zuckerberg on January 7, after the then-president of the U.S. incited his followers to riot at the nation’s Capitol — an insurrection that led to chaotic and violent scenes and a number of deaths as his supporters clashed with police.

However Facebook quickly referred the decision to the FOB for review — opening up the possibility that the ban could be overturned in short order as Facebook has said it will be bound by the case review decisions issued by the Board.

After the FOB accepted the case for review it initially said it would issue a decision within 90 days of January 21 — a deadline that would have fallen next Wednesday.

However it now looks like the high-profile, high-stakes call on Trump’s social media fate could be pushed into next month.

It’s a familiar development in Facebook-land. Delay has been a long-time feature of the tech giant’s crisis PR response in the face of a long history of scandals and bad publicity attached to how it operates its platform. So the tech giant is unlikely to be uncomfortable that the FOB is taking its time to make a call on Trump’s suspension.

After all, devising and configuring the bespoke case review body — as its proprietary parody of genuine civic oversight — is a process that has taken Facebook years already.

In related FOB news this week, Facebook announced that users can now request the board review its decisions not to remove content — expanding the Board’s potential cases to include reviews of ‘keep ups’ (not just content takedowns).

This report was updated with a correction: The FOB previously extended the deadline for case submissions; it has not done so again as we originally stated.

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.

Information leaked via the breach includes Facebook IDs, location data, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered, and claims it fixed, by September 2019 — which has now led to the leak of data on 533M accounts — suggests it should face a higher sanction from the DPC than Twitter received.

However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders in Europe to step in and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.

In DRI’s case, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose — claiming it’s ‘old data’ — a deflection that ignores the fact that dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook data leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

Pakistan temporarily blocks social media

Pakistan has temporarily blocked several social media services in the South Asian nation, according to users and a notice reviewed by TechCrunch.

In an order titled “Complete Blocking of Social Media Platforms,” the Pakistani government ordered the Pakistan Telecommunication Authority to block social media platforms including Twitter, Facebook, WhatsApp, YouTube, and Telegram from 11am to 3pm (9.30am GMT) Friday.

The move comes as Pakistan looks to crack down on a violent terrorist group and prevent troublemakers from disrupting Friday prayer congregations following days of violent protests.

Earlier this week Pakistan banned the Islamist group Tehrik-i-Labaik Pakistan after arresting its leader, which prompted protests, according to local media reports.

An entrepreneur based in Pakistan told TechCrunch that even though the order is supposed to expire at 3pm local time, similar past moves by the government suggest that the disruption will likely last longer.

Though Pakistan, like its neighbor India, has temporarily cut off phone call access in the past, this is the first time Islamabad has issued a blanket ban on social media in the country.

Pakistan has explored ways to assume more control over content on digital services operating in the country in recent years. Some activists said the country was taking extreme measures without much explanation.

Facebook brings software subscriptions to the Oculus Quest

Subscription pricing is landing on Facebook’s Oculus Store, giving VR developers another way to monetize content on Facebook’s Oculus Quest headset.

Developers will be allowed to add premium subscriptions to paid or free apps, with Facebook presumably taking its standard percentage cut at the same time. Oculus and the developers on its platform have been riding the success of the company’s recent Quest 2 headset; Facebook hasn’t detailed sales numbers, but it has noted that the months-old $299 headset has already outsold every other Oculus headset to date.

Subscription pricing is an unsurprising development but signals that some developers believe they have a loyal enough group of subscribers to bring in sizable recurring revenue. Facebook shipped the first Oculus Rift just over five years ago, and it’s been a zig-zagging path to finding early consumer success during that time. A big challenge for the company has been building a dynamic developer ecosystem that offers something engaging to users while ensuring that VR devs can operate sustainably.

At launch, there are already a few developers debuting subscriptions for a number of different app types, spanning exercise, meditation, social, productivity and DJing. In addition to subscriptions, the new monetization path also allows developers to let users try out paid apps on a free trial basis.

The central question is how many Quest users use their devices enough to justify multiple monthly subscriptions — but for developers looking to monetize their hardcore users, this is a utility they likely felt was missing from the Oculus Store.

Facebook to test new business discovery features in U.S. News Feed

Facebook announced this morning it will begin testing a new experience for discovering businesses in its News Feed in the U.S. When the test is live, users will be able to tap on topics they’re interested in underneath posts and ads in their News Feed in order to explore related content from businesses. The change comes at a time when Facebook has been arguing that Apple’s App Tracking Transparency update will hurt its small business customers — a claim many have dismissed as misleading, but which nevertheless led some mom-and-pop shops to express concern about the impact on their ad targeting capabilities. This new test is an example of how easily Facebook can tweak its News Feed to build out more data on its users, if needed.

The company suggests users may see the change under posts and ads from businesses selling beauty products, fitness or clothing, among other things.

The idea here is that Facebook would direct users to related businesses through a News Feed feature, when they take a specific action to discover related content. This, in turn, could help Facebook create a new set of data on its users, in terms of which users clicked to see more, and what sort of businesses they engaged with, among other things. Over time, it could turn this feature into an ad unit, if desired, where businesses could pay for higher placement.

“People already discover businesses while scrolling through News Feed, and this will make it easier to discover and consider new businesses they might not have found on their own,” the company noted in a brief announcement.

Facebook didn’t detail further plans for the test, but said that as it learns from how users interact with the feature, it will expand the experience to more people and businesses.

Along with news of the test, Facebook said it will roll out more tools for business owners this month, including the ability to create, publish and schedule Stories to both Facebook and Instagram; make changes and edits to Scheduled Posts; and soon, create and manage Facebook Photos and Albums from Facebook’s Business Suite. It will also soon add the ability to create and save Facebook and Instagram posts as drafts from the Business Suite mobile app.

Related to the businesses updates, Facebook updated features across ad products focused on connecting businesses with customer leads, including Lead Ads, Call Ads, and Click to Messenger Lead Generations.

Facebook earlier this year announced a new Facebook Page experience that gave businesses the ability to engage on the social network with their business profile for things like posting, commenting and liking, and access to their own, dedicated News Feed. And it had removed the Like button in favor of focusing on Followers.

It is not a coincidence that Facebook is touting its tools for small businesses at a time when there’s concern — much of it loudly shouted by Facebook itself — that its platform could be less useful to small business owners in the near future, when ad targeting capabilities become less precise as users vote ‘no’ when Facebook’s iOS app asks if it can track them.

Ireland opens GDPR investigation into Facebook leak

Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported earlier this month.

Here’s the Irish Data Protection Commission’s statement:

“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.

The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.

Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”

Facebook has been contacted for comment.

The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner, Didier Reynders, tweeted Monday that he had spoken with Helen Dixon about the Facebook data leak.

“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.

A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.

“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”

“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.

The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.

Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications — with the risk of hefty fines for compliance failure — Facebook did not inform its lead EU data supervisor when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.

Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.

Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC. (It has so far only issued one cross-border decision, fining Twitter around $550k in December over a breach it disclosed back in 2019.)

Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.

The latest Facebook data scandal further amps up the pressure on the DPC — providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg) where many tech giants choose to locate regional HQ.

On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.

He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.

“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.

Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by direct dealing only with its lead EU data supervisor in Ireland.

A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. A lack of progress on unblocking GDPR bottlenecks is thus a growing problem for the Commission — which is in the midst of proposing a package of additional digital regulations. That makes the enforcement point a very pressing one: EU lawmakers are being asked how new digital rules will be upheld if existing ones keep being trampled on.

It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants. That said, getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.

And in the meanwhile the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform that gives out the names and phone numbers of users who have liked a Facebook page (in exchange for a fee, unless the page has had fewer than 100 likes).

The publication said this data appears to be separate from the 533M+ scraped dataset — after it ran checks against the larger dataset via the breach advice site, haveibeenpwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot and he did not find any matches.

We contacted Facebook about the source of this leaked data and will update this report with any response.

In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body comprised of representatives from Member State data protection agencies which works to ensure a consistent application of the GDPR.

However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.

“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.

But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.


EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met such systems would not be barred from the EU market under the legislative plan.

Other requirements include security and consistency of accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like a recipe for (yet) more long-drawn-out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a leaked draft early last year, before last year’s White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

Conformity assessment for high risk AIs is also envisaged as an ongoing process, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.

The carrot for compliant businesses is to get to display a ‘CE’ mark to help them win the trust of users and friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market and conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen, and violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.


New Quest 2 software brings wireless PC streaming, updated ‘office’ mode

After a relatively quiet couple of months from Oculus on the software front, Facebook’s VR unit is sharing some details on new functionality coming to its Quest 2 standalone headset.

The features, which include wireless Oculus Link support, “Infinite Office” functionality and upcoming 120Hz support, will be rolling out in the Quest 2’s upcoming v28 software update. There’s no exact word on when that update is coming, but the language in the blog post seems to intimate that the rollout is imminent.

The big addition here is a wireless version of Oculus Link, called Air Link, which will allow Quest 2 users to stream content from their PCs directly to their standalone headsets, enabling more graphics-intensive titles that were previously only available on the now pretty much defunct Rift platform. Air Link will let users ditch the tethered experience of Oculus Link, though many users have already been doing this with third-party software such as Virtual Desktop.

It appears this upgrade is only coming to Quest 2 users, via a new experimental mode, and not to owners of the original Quest headset. Users will need to update the Oculus software on both their Quest 2 and PC to the v28 version in order to use this feature.

Accompanying the release of Air Link in this update are new features coming to “Infinite Office,” a VR office play that aims to bring your keyboard and mouse into VR and allow users to engage with desktop-style software. Facebook debuted it back at its VR-focused Facebook Connect conference, but hasn’t said much about it since.

Today’s update adds keyboard support that not only allows users to link their device but also see it inside VR. For now, this support is limited to a single model from a single manufacturer (the Logitech K830), but Facebook says it will add support for other keyboards down the road. Users with this keyboard will be able to see outlines of their hands as well as a rendering of the keyboard in its real position, (theoretically) enabling accurate typing. Infinite Office will also allow users to designate where their real-world desk is, a feature that will likely help users orient themselves. Even with a keyboard, there’s not much users can do at the moment beyond accessing the Oculus Browser, it seems.

Lastly, Oculus is allowing developers to try out 120Hz frame rate support for their titles. Facebook says that there isn’t actually anything running at that frame rate yet, not even system software, but that support is available to developers on an experimental basis.

Oculus says the new software update will be rolling out “gradually” to users.

Facebook tests video speed dating events with ‘Sparked’

Facebook confirmed it’s testing a video speed-dating app called Sparked, after the app’s website was spotted by The Verge. Unlike dating app giants such as Tinder, Sparked users don’t swipe on people they like or direct message others. Instead, they cycle through a series of short video dates during an event to make connections with others. The product itself is being developed by Facebook’s internal R&D group, the NPE Team, but had not been officially announced.

“Sparked is an early experiment by New Product Experimentation,” a spokesperson for Facebook’s NPE Team confirmed to TechCrunch. “We’re exploring how video-first speed dating can help people find love online.”

They also characterized the app as undergoing a “small, external beta test” designed to generate insights about how video dating could work, in order to improve people’s experiences with Facebook products. The app is not currently live on app stores, only the web.

Sparked is, however, preparing to test the experience at a Chicago Date Night event on Wednesday, The Verge’s report noted.


During the sign-up process, Sparked tells users to “be kind,” “keep this a safe space,” and “show up.” A walkthrough of how the app works explains that participants will meet face to face during a series of 4-minute video dates, which they can then follow up with a 10-minute date if all goes well. They can additionally choose to exchange contact info, like phone numbers, emails, or Instagram handles.

Facebook, of course, already offers a dating app product, Facebook Dating.

That experience, which takes place inside Facebook itself, first launched in 2018 outside the U.S., and then arrived in the U.S. the following year. In the early days of the pandemic, Facebook announced it would roll out a sort of virtual dating experience that leveraged Messenger for video chats — a move that came at a time when many other dating apps in the market also turned to video to serve users under lockdowns. These video experiences could potentially compete with Sparked, unless the new product’s goal is to become another option inside Facebook Dating itself.


Despite the potential reach, Facebook’s success in the dating market is not guaranteed, some analysts have warned. People don’t think of Facebook as a place to go meet partners, and the dating product today is still separated from the main Facebook app for privacy purposes. That means it can’t fully leverage Facebook’s network effects to gain traction, as users in this case may not want their friends and family to know about their dating plans.

Facebook’s competition in dating is fierce, too. Even the pandemic didn’t slow down the dating app giants, like Match Group or newly IPO’d Bumble. Tinder’s direct revenues increased 18% year-over-year to $1.4 billion in 2020, Match Group reported, for instance. Direct revenues from the company’s non-Tinder brands collectively increased 16%. And Bumble topped its revenue estimates in its first quarter as a public company, pulling in $165.6 million in the fourth quarter.


Facebook, on the other hand, has remained fairly quiet about its dating efforts. Though the company cited over 1.5 billion matches in the 20 countries where it’s live, a “match” doesn’t indicate a successful pairing — in fact, that sort of result may not be measured. But it’s early days for the product, which only rolled out to European markets this past fall.

The NPE Team’s experiment in speed dating could ultimately help to inform Facebook of what sort of new experiences a dating app user may want to use, and how.

The company didn’t say if or when Sparked would roll out more broadly.

Facebook, Instagram users can now ask ‘oversight’ panel to review decisions not to remove content

Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.

The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).

“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.

“Our decisions will be binding on Facebook,” it adds.

The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.

While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.

That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.

That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook which devised and structured the FOB — writing the Board’s charter and bylaws, and hand picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.

The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/man-power mean it can only ever consider a fantastically tiny subset of cases referred to it for review.

For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.

So, in other words, it needs to be a public body, functioning in the public interest.

Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.

Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.

Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former US president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.

The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that stuff down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the Board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file.)

How to ask for a review after content isn’t removed

To request that the FOB review a piece of content that’s been left up, a user of Facebook/Instagram first has to report the content to Facebook/Instagram.

If the company decides to keep the content up, Facebook says the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.

There are several hoops to jump through to make an appeal: Following on-screen instructions, Facebook says, the user will be taken to the Oversight Board website, where they need to log in with the account to which the reference ID was issued.

They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).

Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.

Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.

A further limitation is time: Facebook notes there’s a time limit on appealing decisions to the FOB.

“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in.