Facebook’s decision-review body to take “weeks” longer over Trump ban call

Facebook’s self-styled and handpicked ‘Oversight Board’ will make a decision on whether to overturn an indefinite suspension of the account of former president Donald Trump within “weeks”, it said in a brief update statement on the matter today.

The high-profile case appears to have attracted major public interest, with the Facebook Oversight Board (FOB) tweeting that it’s received more than 9,000 responses so far to its earlier request for public feedback.

It added that its commitment to “carefully reviewing all comments”, following an earlier extension of the feedback deadline, is the reason the case timeline has been extended.

The Board’s statement adds that it will provide more information “soon”.

Trump’s indefinite suspension from Facebook and Instagram was announced by Facebook founder Mark Zuckerberg on January 7, after the then-president of the U.S. incited his followers to riot at the nation’s Capitol — an insurrection that led to chaotic and violent scenes and a number of deaths as his supporters clashed with police.

However Facebook quickly referred the decision to the FOB for review — opening up the possibility that the ban could be overturned in short order as Facebook has said it will be bound by the case review decisions issued by the Board.

After the FOB accepted the case for review it initially said it would issue a decision within 90 days of January 21 — a deadline that would have fallen next Wednesday.

However it now looks like the high-profile, high-stakes call on Trump’s social media fate could be pushed into next month.

It’s a familiar development in Facebook-land. Delay has been a long-time feature of the tech giant’s crisis PR response in the face of a long history of scandals and bad publicity attached to how it operates its platform. So the tech giant is unlikely to be uncomfortable that the FOB is taking its time to make a call on Trump’s suspension.

After all, devising and configuring the bespoke case review body — as its proprietary parody of genuine civic oversight — is a process that has taken Facebook years already.

In related FOB news this week, Facebook announced that users can now request the board review its decisions not to remove content — expanding the Board’s potential cases to include reviews of ‘keep ups’ (not just content takedowns).

This report was updated with a correction: The FOB previously extended the deadline for case submissions; it has not done so again as we originally stated.

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.

Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship statuses and employers.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, Twitter publicly disclosed when it found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered and claims it fixed by September 2019 — a flaw that has now led to the leak of data on 533M accounts — suggests it should face a higher sanction from the DPC than Twitter received.

However even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in across Europe and take a punt on suing for data-related damages — with a number of other mass actions announced last year.

In DRI’s case, the focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose — claiming it’s ‘old data’ — a deflection that ignores the fact that dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook data leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

Pakistan temporarily blocks social media

Pakistan has temporarily blocked several social media services in the South Asian nation, according to users and a notice reviewed by TechCrunch.

In an order titled “Complete Blocking of Social Media Platforms,” the Pakistani government directed the Pakistan Telecommunication Authority to block social media platforms including Twitter, Facebook, WhatsApp, YouTube, and Telegram from 11am to 3pm local time (6am to 10am GMT) Friday.

The move comes as Pakistan looks to crack down on a violent terrorist group and prevent troublemakers from disrupting Friday prayer congregations following days of violent protests.

Earlier this week Pakistan banned the Islamist group Tehrik-i-Labaik Pakistan after arresting its leader, which prompted protests, according to local media reports.

An entrepreneur based in Pakistan told TechCrunch that even though the order is supposed to expire at 3pm local time, similar past moves by the government suggest the disruption will likely last longer.

Though Pakistan, like its neighbor India, has temporarily cut off access to phone calls in the nation in the past, this is the first time Islamabad has issued a blanket ban on social media in the country.

Pakistan has explored ways to assume more control over content on digital services operating in the country in recent years. Some activists said the country was taking extreme measures without much explanation.

Twitter bans James O’Keefe of Project Veritas over fake account policy

Twitter has banned right-wing provocateur James O’Keefe, creator of political gotcha video producer Project Veritas, for violating its “platform manipulation and spam policy,” suggesting he was operating multiple accounts in an unsanctioned way. O’Keefe has already announced that he will sue the company for defamation.

The ban, or “permanent suspension” as Twitter calls it, occurred Thursday afternoon. A Twitter representative said the action followed the violation of rules prohibiting “operating fake accounts” and attempting to “artificially amplify or disrupt conversations through the use of multiple accounts,” as noted here.

This suggests O’Keefe was banned for operating multiple accounts, outside the laissez-faire policy that lets people have a professional and a personal account, and that sort of thing.

But sharp-eyed users noticed that O’Keefe’s last tweet unironically accused reporter Jesse Hicks of impersonation, including an image showing an unredacted phone number supposedly belonging to Hicks. This too may have run afoul of Twitter’s rules about posting personal information, but Twitter declined to comment on this when I asked.

Supporters of O’Keefe say that the company removed his account as retribution for his most recent “exposé” involving surreptitious recordings of a CNN employee admitting the news organization has a political bias. (The person he was talking to had, impersonating a nurse, matched with him on Tinder.)

For his part O’Keefe said he would be suing Twitter for defamation over the allegation that he operated fake accounts. I’ve contacted Project Veritas for more information.

Consumer groups and child development experts petition Facebook to drop ‘Instagram for kids’ plan

A coalition of 35 consumer advocacy groups along with 64 experts in child development have co-signed a letter to Facebook asking the company to reconsider its plans to launch a version of Instagram for children under the age of 13, which Facebook has confirmed to be in development. In the letter, the groups and experts argue that social media is linked with several risk factors for younger children and adolescents, related to both their physical health and overall well-being.

The letter was written by the Campaign for a Commercial-Free Childhood, an advocacy group that often leads campaigns against big tech and its targeting of children.

The group stresses how influential social media is on young people’s development, and the dangers such an app could bring:

“A growing body of research demonstrates that excessive use of digital devices and social media is harmful to adolescents. Instagram, in particular, exploits young people’s fear of missing out and desire for peer approval to encourage children and teens to constantly check their devices and share photos with their followers,” it states. “The platform’s relentless focus on appearance, self-presentation, and branding presents challenges to adolescents’ privacy and wellbeing. Younger children are even less developmentally equipped to deal with these challenges, as they are learning to navigate social interactions, friendships, and their inner sense of strengths and challenges during this crucial window of development,” the letter reads.

Citing public health research and other studies, the letter notes that excessive screen time and social media use can contribute to a variety of risks for kids, including obesity, lower psychological well-being, decreased sleep quality, increased risk of depression and suicidal ideation, and other issues. Adolescent girls report feeling pressured to post sexualized selfies for attention from their peers, the letter said, and 59% of U.S. teens have reported being bullied on social media, as well.

Another concern the groups raise is Instagram’s recommendation algorithm, which would suggest what kids see and click on next; the letter notes that children are “highly persuadable.”

They also point out that Facebook knows there are already children under 13 who have lied about their age to use the Instagram platform, and these users will be unlikely to migrate to what they’ll view as a more “babyish” version of the app than the one they’re already using. That means Facebook is really targeting an even younger age group who don’t yet have an Instagram account with this “kids version.”

Despite the concerns being raised, Instagram’s plans to compete for younger users will not likely be impacted by the outcry. Already, Instagram’s top competitor in social media today — TikTok — has developed an experience for kids under 13. In fact, it was forced to age-gate its app as a result of its settlement with the U.S. Federal Trade Commission, which had investigated Musical.ly (the app that became TikTok) for violations of the U.S. children’s privacy law COPPA.

Facebook, too, could be in a similar situation where it has to age-gate Instagram in order to properly direct its existing underage users to a COPPA-compliant experience. At the very least, Facebook has grounds to argue that it shouldn’t have to boot the under-13 crowd off its app, since TikTok did not. And the FTC’s fines, even when historic, barely make a dent in tech giants’ revenues.

The advocacy groups’ letter follows a push from Democratic lawmakers, who also this month penned a letter addressed to Facebook CEO Mark Zuckerberg to express concerns over Facebook’s ability to protect kids’ privacy and their well-being. Their letter had specifically cited Messenger Kids, which was once found to have a design flaw that let kids chat with unauthorized users. The lawmakers gave Facebook until April 26 to respond to their questions.

Zuckerberg confirmed Facebook’s plans for an Instagram for kids at a Congressional hearing back in March, saying that the company was “early in our thinking” about how the app would work, but noted it would involve some sort of parental oversight and involvement. That’s similar to what Facebook offers today via Messenger Kids and TikTok does via its Family Pairing parental controls.

The market, in other words, is shifting towards acknowledging that kids are already on social media — with or without parents’ permission. As a result, companies are building features and age gates to accommodate that reality. The downside to this plan, of course, is once you legitimize the creation of social apps for the under-13 demographic, companies are given the legal right to hook kids even younger on what are, arguably, risky experiences from a public health standpoint.

The Campaign for a Commercial-Free Childhood also today launched a petition which others can sign to push Facebook to cancel its plans for an Instagram for kids.


Facebook to test new business discovery features in U.S. News Feed

Facebook announced this morning it will begin testing a new experience for discovering businesses in its News Feed in the U.S. When the test is live, users will be able to tap on topics they’re interested in underneath posts and ads in their News Feed in order to explore related content from businesses. The change comes at a time when Facebook has been arguing that Apple’s App Tracking Transparency update will hurt its small business customers — a claim many have dismissed as misleading, but one that nevertheless led some mom-and-pop shops to express concern about the impact on their ad targeting capabilities. This new test is an example of how easily Facebook can tweak its News Feed to build out more data on its users, if needed.

The company suggests users may see the change under posts and ads from businesses selling beauty products, fitness or clothing, among other things.

The idea here is that Facebook would direct users to related businesses through a News Feed feature, when they take a specific action to discover related content. This, in turn, could help Facebook create a new set of data on its users, in terms of which users clicked to see more, and what sort of businesses they engaged with, among other things. Over time, it could turn this feature into an ad unit, if desired, where businesses could pay for higher placement.

“People already discover businesses while scrolling through News Feed, and this will make it easier to discover and consider new businesses they might not have found on their own,” the company noted in a brief announcement.

Facebook didn’t detail its further plans for the test, but said that as it learns how users interact with the feature, it will expand the experience to more people and businesses.

Image Credits: Facebook

Along with news of the test, Facebook said it will roll out more tools for business owners this month, including the ability to create, publish and schedule Stories to both Facebook and Instagram; make changes and edits to Scheduled Posts; and soon, create and manage Facebook Photos and Albums from Facebook’s Business Suite. It will also soon add the ability to create and save Facebook and Instagram posts as drafts from the Business Suite mobile app.

Related to the businesses updates, Facebook updated features across ad products focused on connecting businesses with customer leads, including Lead Ads, Call Ads, and Click to Messenger Lead Generations.

Facebook earlier this year announced a new Facebook Page experience that gave businesses the ability to engage on the social network with their business profile for things like posting, commenting and liking, and access to their own, dedicated News Feed. And it had removed the Like button in favor of focusing on Followers.

It is not a coincidence that Facebook is touting its tools for small businesses at a time when there’s concern — much of it loudly shouted by Facebook itself — that its platform could be less useful to small business owners in the near future, when ad targeting capabilities become less precise as users vote ‘no’ when Facebook’s iOS app asks if it can track them.

TikTok funds first episodic public health series ‘VIRAL’ from NowThis

TikTok is taking another step toward directly funding publishers’ content with today’s announcement that it’s financially backing the production of media publisher NowThis’ new series, “VIRAL,” which will feature interviews with public health experts and a live Q&A session focused on answering questions about the pandemic. The partnership represents TikTok’s first-ever funding of an episodic series from a publisher, though TikTok has previously funded creator content.

Through TikTok’s Instructive Accelerator Program, which was formerly known as the Creative Learning Fund, other TikTok publishers have received grants and hands-on support from TikTok so they could produce quality instructive content for TikTok’s #LearnOnTikTok initiative. The program today is structured as four eight-week cycles, during which time publishers post videos four times per week.

NowThis had also participated in the Creative Learning Fund last year and was selected for the latest cycle of the Instructive Accelerator Program. But its “VIRAL” series is separate from these efforts.

NowThis says it brought the concept for the show to TikTok earlier this year outside of the accelerator program, and TikTok greenlit it. TikTok then co-produced the series and provided some funding. Neither NowThis nor TikTok would comment on the extent of the financial backing involved, however.

The “VIRAL” series itself is hosted by infectious disease clinical researcher Laurel Bristow, who spent the last year working on COVID treatments and research. Every Thursday, Bristow will break down COVID facts in easy-to-understand language, NowThis says, including things like vaccine efficacy, transmission timelines and treatment. The show will also bust COVID myths, provide information about ongoing public health risks and feature interviews with a cross-section of experts.

Each episode will be 45 minutes long and will also include an interactive segment where the TikTok viewing audience will be able to engage in a real-time Q&A session about the show’s content. In total, five episodes are being produced; they will air weekly from Thursday, April 15 at 6 PM ET through Thursday, May 13 on the @NowThis main TikTok page.


NowThis has become one of the most-followed news media accounts on TikTok, with 4.6 million followers across its news and politics channels, since launching a little over a year ago. Because of its focus on video, it’s been a good fit for the TikTok platform.

The approach TikTok is taking with “VIRAL’s” production, it’s worth noting, stands in contrast to how other social media platforms are handling the pandemic and COVID-19 information. While most, including TikTok, have pledged to fact-check COVID-19 information, remove misinformation and conspiracies, point users to official sources for health information and provide other resources, TikTok is directly funding public health content featuring scientists and researchers, and then promoting it on its network.

The company explained to TechCrunch its thinking on the matter.

“As the pandemic continues to evolve, we think it’s important to provide our community an outlet to dispel misinformation and communicate with public health experts in real time,” said Robbie Levin, manager of Media Partnerships at TikTok. “NowThis has consistently been a great partner that produces engaging and informative content, so we felt this series would be an impactful and important avenue for our users to receive credible information on our platform,” Levin noted.

While the pandemic has driven the topic of choice here, paying creators for content is not new. And TikTok isn’t the only one to do so. Instagram and Snapchat are both funding creator content for their TikTok clones, Reels and Spotlight, respectively. And new social platforms like Clubhouse are funding creators’ shows, as well.

TikTok says it’s not currently talking to other publishers to produce more series like “VIRAL,” but it isn’t ruling out the idea of expanding its creator funding and producing efforts. In addition to its accelerator program, which is continuing, TikTok says if “VIRAL” proves successful and the community responds positively, it will pursue similar opportunities in the future.

Ireland opens GDPR investigation into Facebook leak

Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported earlier this month.

Here’s the Irish Data Protection Commission’s statement:

“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.

The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.

Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”

Facebook has been contacted for comment.

The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner, Didier Reynders, tweeted Monday that he had spoken with Helen Dixon about the Facebook data leak.

“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.

A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.

“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”

“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.

The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.

Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications — with the risk of hefty fines for compliance failure — Facebook did not inform its lead EU data supervisory authority when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.

Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.

Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC. (It has so far only issued one cross-border decision, fining Twitter around $550k in December over a breach it disclosed back in 2019.)

Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.

The latest Facebook data scandal further amps up the pressure on the DPC — providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg) where many tech giants choose to locate regional HQ.

On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.

He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.

“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.

Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by direct dealing only with its lead EU data supervisor in Ireland.

A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. A lack of progress on unblocking GDPR bottlenecks is thus a growing problem for the Commission — which is in the midst of proposing a package of additional digital regulations. That makes the enforcement point a very pressing one, as EU lawmakers are being asked how new digital rules will be upheld if existing ones keep being trampled on.

It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants. Though getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.

And in the meanwhile the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform, which gives out the names and phone numbers of users who have liked a Facebook page (in exchange for a fee, unless the page has fewer than 100 likes).

The publication said this data appears to be separate from the 533M+ scraped dataset — after it ran checks against the larger dataset via the breach advice site, haveibeenpwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot and he did not find any matches.

We contacted Facebook about the source of this leaked data and will update this report with any response.

In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body comprised of representatives from Member State data protection agencies which works to ensure a consistent application of the GDPR.

However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.

“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.

But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.


Facebook tests video speed dating events with ‘Sparked’

Facebook confirmed it’s testing a video speed-dating app called Sparked, after the app’s website was spotted by The Verge. Unlike on dating app giants such as Tinder, Sparked users don’t swipe on people they like or send direct messages to others. Instead, they cycle through a series of short video dates during an event to make connections with others. The product itself is being developed by Facebook’s internal R&D group, the NPE Team, but had not been officially announced.

“Sparked is an early experiment by New Product Experimentation,” a spokesperson for Facebook’s NPE Team confirmed to TechCrunch. “We’re exploring how video-first speed dating can help people find love online.”

They also characterized the app as undergoing a “small, external beta test” designed to generate insights about how video dating could work, in order to improve people’s experiences with Facebook products. The app is not currently live on app stores, only the web.

Sparked is, however, preparing to test the experience at a Chicago Date Night event on Wednesday, The Verge’s report noted.

Image Credits: Facebook


During the sign-up process, Sparked tells users to “be kind,” “keep this a safe space,” and “show up.” A walkthrough of how the app works explains that participants will meet face to face during a series of 4-minute video dates, which they can then follow up with a 10-minute date if all goes well. They can additionally choose to exchange contact info, like phone numbers, emails, or Instagram handles.

Facebook, of course, already offers a dating app product, Facebook Dating.

That experience, which takes place inside Facebook itself, first launched in 2018 outside the U.S., and then arrived in the U.S. the following year. In the early days of the pandemic, Facebook announced it would roll out a sort of virtual dating experience that leveraged Messenger for video chats — a move that came at a time when many other dating apps in the market also turned to video to serve users under lockdowns. These video experiences could potentially compete with Sparked, unless the new product’s goal is to become another option inside Facebook Dating itself.


Despite the potential reach, Facebook’s success in the dating market is not guaranteed, some analysts have warned. People don’t think of Facebook as a place to go meet partners, and the dating product today is still separated from the main Facebook app for privacy purposes. That means it can’t fully leverage Facebook’s network effects to gain traction, as users in this case may not want their friends and family to know about their dating plans.

Facebook’s competition in dating is fierce, too. Even the pandemic didn’t slow down the dating app giants, like Match Group or newly IPO’d Bumble. Tinder’s direct revenues increased 18% year-over-year to $1.4 billion in 2020, Match Group reported, for instance. Direct revenues from the company’s non-Tinder brands collectively increased 16%. And Bumble topped its revenue estimates in its first quarter as a public company, pulling in $165.6 million in the fourth quarter.


Facebook, on the other hand, has remained fairly quiet about its dating efforts. Though the company has cited over 1.5 billion matches in the 20 countries where it’s live, a “match” doesn’t indicate a successful pairing — in fact, that sort of result may not be measured. But it’s early days for the product, which only rolled out to European markets this past fall.

The NPE Team’s experiment in speed dating could ultimately help inform Facebook about what sorts of new experiences dating app users may want, and how they want to use them.

The company didn’t say if or when Sparked would roll out more broadly.

Facebook, Instagram users can now ask ‘oversight’ panel to review decisions not to remove content

Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.

The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).

“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.

“Our decisions will be binding on Facebook,” it adds.

The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.

While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.

That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.

That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook which devised and structured the FOB — writing the Board’s charter and bylaws, and hand picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.

The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/man-power mean it can only ever consider a fantastically tiny subset of cases referred to it for review.

For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.

So, in other words, it needs to be a public body, functioning in the public interest.

Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.

Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.

Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former US president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.

The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that stuff down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the Board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file.)

How to ask for a review after content isn’t removed

To request that the FOB review a piece of content that’s been left up, a user of Facebook or Instagram first has to report the content to Facebook or Instagram.

If the company decides to keep the content up, Facebook says the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.

There are several hoops to jump through to make an appeal: Following on-screen instructions, Facebook says, the user will be taken to the Oversight Board website, where they need to log in with the account to which the reference ID was issued.

They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).

Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.

Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.

A further limitation is time: Facebook notes there’s a deadline for appealing decisions to the FOB.

“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in.