Facebook Dating now integrates with Instagram and Facebook Stories

Facebook Dating, an opt-in feature of the main Facebook app, will begin to tap into the content users are already creating across both Facebook and Instagram to enhance its service. Starting today, Facebook Dating users can add their Facebook or Instagram Stories to Facebook Dating in order to share their everyday moments with other daters.

As opposed to more polished profile photos, Stories can give someone better insight into what a person is like by showcasing their activities, hobbies, interests, personality and sense of humor, among other things. And if daters themselves appear in a Story, it lets others see what they really look like, even if their profile photos are out of date.

The way the feature is being implemented on Facebook Dating puts the user in control of what’s being shared. That is, your Facebook or Instagram Stories are not automatically copied over to Facebook Dating by default. Instead, users can select which of their Stories are shared and which are not.

In addition, people a dater has blocked or passed on in Facebook Dating won’t be able to see that person’s Stories.

If a Story is inappropriate, you can also block the user and report it, like you can with other content elsewhere on Facebook.

One thing to be aware of is that this feature shares an existing Story to Facebook Dating; the Story isn’t created exclusively for the dating service. That means if you decide to use a Story as some sort of video dating intro, your Facebook and Instagram friends could see it as well.

When browsing Facebook Dating, you’ll be able to view other people’s Stories along with their profiles. And if you match with someone, you can continue to view their Stories and then even use that to spark a conversation, which takes place in the app. This is similar to how you can respond to someone’s Facebook or Instagram Story today, which then appears in Messenger or Instagram’s Messages section, respectively.

The new Stories feature could be a competitive advantage for Facebook Dating, because it gives users a new way to express themselves without requiring them to create content just for the dating service itself. Even if a rival dating app like Tinder or Bumble introduced its own version of Stories, many users wouldn’t think to launch a dating app to capture their everyday moments.

Stories integration is rolling out starting today to Facebook Dating.

Dating, as a Facebook feature, is currently available in 20 countries: Argentina, Bolivia, Brazil, Canada, Chile, Colombia, Ecuador, Guyana, Laos, Malaysia, Mexico, Paraguay, Peru, the Philippines, Singapore, Suriname, Thailand, the United States, Uruguay and Vietnam. It will arrive in Europe by early 2020, Facebook says.

The company has not disclosed how many people are using Facebook Dating at this time.

Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

Image search engine Giphy bills itself as providing a “fun and safe way” to search and create animated GIFs. But despite its ban on illicit content, the site is littered with self-harm and child sex abuse imagery, TechCrunch has learned.

A new report from Israeli online child protection startup L1ght — previously AntiToxin Technologies — has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech. The report, shared exclusively with TechCrunch, also showed content encouraging unhealthy weight loss and glamorizing eating disorders.

TechCrunch verified some of the company’s findings by searching the site using certain keywords. (We did not search for terms that may have returned child sex abuse content, as doing so would be illegal.) Although Giphy blocks many hashtags and search terms from returning results, search engines like Google and Bing still cache Giphy images associated with those keywords.

When we tested several words associated with illicit content, Giphy sometimes surfaced offending material in its own results. When it didn’t, search engines often returned a stream of content that should have been banned.

L1ght develops technology to combat online toxicity. In its tests, a single search for illicit material returned 195 images on the first results page alone. L1ght’s team then followed tags from one item to the next, uncovering networks of illegal or toxic content along the way. The tags themselves were often innocuous, helping uploaders escape detection, but they served as a gateway to the toxic material.

Despite a ban on self-harm content, researchers found numerous keywords and search terms that surfaced the banned material. We have blurred this graphic image. (Image: TechCrunch)

Much of the more extreme content — including images of child sex abuse — is said to have been tagged using keywords associated with known child exploitation sites.

We are not publishing the hashtags, search terms or sites used to access the content, but we passed on the information to the National Center for Missing and Exploited Children, a national nonprofit established by Congress to fight child exploitation.

Simon Gibson, Giphy’s head of audience, told TechCrunch that content safety was of the “utmost importance” to the company and that it employs “extensive moderation protocols.” He said that when illegal content is identified, the company works with the authorities to report and remove it.

He also expressed frustration that L1ght had not contacted Giphy with the allegations first. L1ght said that Giphy is already aware of its content moderation problems.

Gibson said Giphy’s moderation system “leverages a combination of imaging technologies and human validation,” which involves users having to “apply for verification in order for their content to appear in our searchable index.” Content is “then reviewed by a crowdsourced group of human moderators,” he said. “If a consensus for rating among moderators is not met, or if there is low confidence in the moderator’s decision, the content is escalated to Giphy’s internal trust and safety team for additional review,” he said.

“Giphy also conducts proactive keyword searches, within and outside of our search index, in order to find and remove content that is against our policies,” said Gibson.
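
Gibson didn’t detail how the consensus and confidence checks work, but the two escalation conditions he describes map onto a simple decision rule. Here is a minimal sketch of that kind of two-tier review, assuming made-up labels, thresholds and vote structure; nothing below reflects Giphy’s actual system:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Vote:
    rating: str        # e.g. "safe" or "flagged" -- illustrative labels
    confidence: float  # moderator confidence in [0, 1]

ESCALATE = "escalate_to_trust_and_safety"

def decide(votes: list[Vote], consensus: float = 0.75, min_conf: float = 0.6) -> str:
    """Apply crowd consensus; escalate on disagreement or low confidence."""
    if not votes:
        return ESCALATE
    counts = Counter(v.rating for v in votes)
    top_rating, top_count = counts.most_common(1)[0]
    if top_count / len(votes) < consensus:
        return ESCALATE  # condition 1: no consensus among moderators
    top_votes = [v for v in votes if v.rating == top_rating]
    if sum(v.confidence for v in top_votes) / len(top_votes) < min_conf:
        return ESCALATE  # condition 2: low confidence in the decision
    return top_rating

# Two of three moderators agree, but 2/3 falls short of the 0.75
# consensus threshold, so the item goes to the internal team.
print(decide([Vote("safe", 0.9), Vote("safe", 0.8), Vote("flagged", 0.7)]))
```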

L1ght researchers used their proprietary artificial intelligence engine to uncover illegal and other offensive content. Using that platform, the researchers can follow related content, surfacing vast caches of illegal or banned material that would otherwise, for the most part, go unseen.

This sort of toxic content plagues online platforms, but algorithms only play a part. More tech companies are finding human moderation is critical to keeping their sites clean. But much of the focus to date has been on the larger players in the space, like Facebook, Instagram, YouTube and Twitter.

Facebook, for example, has been routinely criticized for outsourcing moderation to teams of low-paid contractors who often struggle to cope with the sorts of things they have to watch, even experiencing post-traumatic stress-like symptoms as a result of their work. Meanwhile, Google’s YouTube this year was found to have become a haven for online sex abuse rings, where criminals used the comments section to guide one another to other videos to watch while making predatory remarks.

Giphy and other smaller platforms have largely stayed out of the limelight during the past several years. But L1ght’s new findings indicate that no platform is immune to these sorts of problems.

L1ght says the Giphy users sharing this sort of content would make their accounts private so they wouldn’t be easily searchable by outsiders or the company itself. But even in the case of private accounts, the abusive content was being indexed by some search engines, like Google, Bing and Yandex, which made it easy to find. The firm also discovered that pedophiles were using Giphy as a means of spreading their material online, both to communicate with each other and to exchange content. And they weren’t just using Giphy’s tagging system to communicate — they were also using more advanced techniques, like tags placed on images through text overlays.

This same process was used in other communities, including those associated with white supremacy, bullying, child abuse and more.
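
Overlay text is invisible to tag- and keyword-based filters, which is presumably what makes the technique effective. One countermeasure is to run OCR over uploads and screen the extracted text; a minimal sketch using the open-source Tesseract engine (via the pytesseract wrapper) might look like the following, where the watchlist is entirely hypothetical:

```python
# Requires: pip install pytesseract pillow, plus the Tesseract binary.
import pytesseract
from PIL import Image

# Hypothetical watchlist; real systems use curated term and hash databases.
WATCHLIST = {"example-banned-term", "another-banned-term"}

def overlay_text_hits(path: str) -> set[str]:
    """OCR an image and return any watchlisted terms found in its overlay text."""
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return {term for term in WATCHLIST if term in text}

if __name__ == "__main__":
    hits = overlay_text_hits("upload.gif")
    if hits:
        print("flag for human review:", hits)
```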

This isn’t the first time Giphy has faced criticism for content on its site. Last year, a report by The Verge described the company’s struggles to fend off illegal and banned content, and Giphy was temporarily booted from Instagram for letting through racist content.

Giphy is far from alone, but it is the latest example of companies not getting it right. Earlier this year, and following a tip, TechCrunch commissioned then-AntiToxin to investigate the child sex abuse imagery problem on Microsoft’s search engine Bing. Under close supervision by the Israeli authorities, the company found dozens of illegal images in the results from searching certain keywords. When The New York Times followed up on TechCrunch’s report last week, its reporters found Bing had done little in the intervening months to prevent child sex abuse content from appearing in its search results.

It was a damning rebuke of the company’s efforts to combat child abuse in its search results, despite Microsoft having pioneered PhotoDNA, a photo-detection tool it built a decade ago to identify illegal images based on a huge database of hashes of known child abuse content.
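
PhotoDNA’s perceptual-hashing algorithm isn’t public, and it is designed to survive resizing and re-encoding. As a heavily simplified illustration of the underlying pattern (hash each upload, then look it up in a database of known-bad hashes), here is a sketch that substitutes an ordinary cryptographic hash; unlike PhotoDNA, an exact hash would miss any edited copy:

```python
import hashlib

# Stand-in for a database of hashes of known abusive images; real systems
# match against curated industry hash lists. This value is fabricated.
KNOWN_HASHES: set[str] = {"0" * 64}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_image(path: str) -> bool:
    # Exact-match lookup only: a resized or recompressed copy hashes
    # differently, which is why PhotoDNA uses perceptual hashing instead.
    return sha256_of(path) in KNOWN_HASHES
```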

Giphy’s Gibson said the company was “recently approved” to use Microsoft’s PhotoDNA but did not say if it was currently in use.

Where some of the richest, largest and best-resourced tech companies are failing to preemptively limit their platforms’ exposure to illegal content, startups are filling in the content moderation gaps.

L1ght, which has a commercial interest in this space, was founded a year ago to help combat online predators, bullying, hate speech, scams and more.

The company was started by former Amobee chief executive Zohar Levkovitz and cybersecurity expert Ron Porat, previously the founder of ad-blocker Shine, after Porat’s own son experienced abuse in the online game Minecraft. The founders realized the problem had outgrown users’ ability to protect themselves, and that technology needed to come to their aid.

L1ght’s business involves deploying its technology in the same way it did here with Giphy: to identify, analyze and predict online toxicity in near real time.

Social network for motherhood Peanut raises $5M, expands to include women trying to conceive

Peanut, an app that began its life as a match-maker for finding new mom friends but has since evolved into a social network of more than a million women, announced today it has closed on $5 million in new funding and is expanding its focus to reach women who are trying to conceive. The round was led by San Francisco and London-based VC firm Index Ventures, also backers of Dropbox, Facebook and Glossier, among others.

Other Peanut investors include Sweet Capital, Greycroft, Ashton Kutcher’s Sound Ventures, Female Founders Fund, Felix Capital and Partech. To date, Peanut has raised $9.8 million.

The idea for Peanut arose from co-founder Michelle Kennedy’s personal understanding of how difficult it was to forge female friendships after motherhood. As the former deputy CEO at dating app Badoo and an inaugural board member at Bumble, she initially saw the potential for Peanut as a friendship-focused matching app with swipe mechanisms similar to popular dating apps.

Over the past couple of years, however, Kennedy realized that what women needed was more of a community space. The team then built out the app’s features accordingly, with the launch of its Q&A forums, Peanut Pages, last year, and more recently, with Peanut Groups. The latter has now become Peanut’s main use case, with 60% of users taking advantage of the app’s community features and just 40% using the friend-finding functions.

“Community is definitely becoming a very important part of what we do. It’s where we see the users that we deem to be power users — women who are using Peanut for hours every day — they’re very much within the community section,” explains Kennedy. “We see that growth there and it actually guides the product. So we’re taking the behaviors that we see and letting that inform our roadmap.”

Since around November 2018, Peanut has been growing by 20% month-over-month, as more women discover Peanut’s private and ad-free alternative to Facebook Groups. On Peanut, users are verified (by selfies!), and people have the sorts of discussions that don’t really take place in other social apps.

Even Kennedy admits she was surprised at first by what women were talking about in the app.

“The conversations were much, much more personal and intimate and more related to their lives. So whether that had to do with their sex life or relationships, it was on a deeper level,” she says. “These are conversations that women simply can’t have anywhere else. Of course, they’re not happening in Facebook Groups…these are very intimate and self-reflective moments. And [women] want to do that in a private setting in a private social network,” Kennedy adds.

The new funding, in part, will be used to grow Peanut’s 16-person team to 22 this year, which will then double next year.

In addition, Peanut is expanding access to women who are trying to conceive, with the launch of the Trying To Conceive (TTC) community. This will offer a separate sign-up experience and access to a dedicated network of women, where members can candidly discuss the topic and ask questions. Within TTC, members can also create their own groups — like one for women on their fifth round of IVF, for example — to have conversations with others who are at the same place in their journey.

The community, today, won’t point women to other fertility-focused apps or related health services, Kennedy says, though she sees the potential for strategic partnerships further down the road. In the near-term, however, Peanut plans to generate revenue by way of the freemium model and micropayments.

“We’re incredibly excited to partner with Michelle to grow Peanut from the essential platform for mothers it is today, to a social network for women globally. Peanut is a true companion for women, bringing them together when they need each other the most,” says Hannah Seal, principal at Index Ventures, about the firm’s investment. “We’ve been impressed with the response Peanut has received since launch and look forward to supporting the team as it enters into new areas such as fertility, and expands globally.”

“We want to shine a light on an often silent struggle. What has always been Peanut’s point of difference is enabling conversations women feel unable to have on any other platform. Providing a safe, inclusive space for women to discuss fertility is a natural progression for our brand as we continue to support women throughout each life stage. No woman should ever feel lonely, isolated or muted on such an important issue,” Kennedy says.

Facebook says government demands for user data are at a record high

Facebook’s latest transparency report is out.

The social media giant said the number of government demands for user data increased by 16% to 128,617 during the first half of this year, compared with the second half of last year.

That’s the highest number of government demands it’s received in any reporting period since it published its first transparency report in 2013.

The U.S. government led the way with 50,741 demands for user data, which resulted in some account or user data being given to authorities in 88% of cases. Facebook said two-thirds of all the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data.
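
Facebook reports those figures as rates rather than counts; a quick back-of-envelope conversion (rounded, and assuming the percentages apply uniformly) looks like this:

```python
us_requests = 50_741
complied = round(us_requests * 0.88)  # requests where some data was produced
gagged = round(us_requests * 2 / 3)   # requests that came with a gag order
print(complied, gagged)               # 44652 33827
```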

But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the USA Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.

The report also said the social media giant had detected 67 disruptions of its services in 15 countries, compared with 53 disruptions in nine countries during the second half of last year.

The report also said Facebook pulled 11.6 million pieces of content that violated its policies on child nudity and the sexual exploitation of children, up from 5.8 million in the same period a year earlier.

A US federal court finds suspicionless searches of phones at the border are illegal

A federal court in Boston has ruled that the government is not allowed to search travelers’ phones and devices at the U.S. border without first having reasonable suspicion of a crime.

That’s a significant victory for civil liberties advocates who have said that the government’s own rules that allow its border agents to search electronic devices at the border are unconstitutional.

The court said that the government’s policies on warrantless searches of devices without reasonable suspicion “violate the Fourth Amendment,” which provides constitutional protections against unreasonable searches and seizures.

The case was brought by 11 travelers — ten of whom are U.S. citizens — with support from the American Civil Liberties Union and the Electronic Frontier Foundation. The travelers said border agents searched their smartphones and laptops without a warrant or any suspicion of wrongdoing or criminal activity, and argued that the government was overreaching its powers.

The border remains a bizarre legal space, where the government asserts powers that it cannot claim against citizens or residents within the United States. The government has long said it doesn’t need a warrant to search devices at the border.

Any data collected by Customs & Border Protection without a warrant can still be shared with federal, state, local and foreign law enforcement.

Esha Bhandari, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said the ruling “significantly advances” protections under the Fourth Amendment.

“This is a great day for travelers who now can cross the international border without fear that the government will, in the absence of any suspicion, ransack the extraordinarily sensitive information we all carry in our electronic devices,” said Sophia Cope, a senior staff attorney at the EFF.

Millions of travelers arrive in the U.S. every day. Last year, border officials searched 33,000 travelers’ devices — a fourfold increase since 2015 — without any need for reasonable suspicion. In recent months, travelers have been told to inform the government of any social media handles they have, all of which are subject to inspection. Some have even been denied entry to the U.S. over content on their phones shared by other people.

Earlier this year, a federal appeals court found that traffic enforcement officers’ practice of chalking car tires was unconstitutional.

A spokesperson for Customs & Border Protection did not immediately comment.

Facebook says a bug caused its iPhone app’s inadvertent camera access

Facebook has faced a barrage of concern over an apparent bug that resulted in the social media giant’s iPhone app exposing the camera as users scrolled through their feeds.

A tweet blew up over the weekend after Joshua Maddux posted a screen recording of the Facebook app on his iPhone. He noticed that the camera would appear behind the Facebook app as he scrolled through his feed.

Several users had already spotted the bug earlier in the month. One person called it “a little worrying.”

Some immediately assumed the worst — as you might expect, given the long history of security vulnerabilities, data breaches and inadvertent exposures at Facebook over the past year. Just last week, the company confirmed that some developers had improperly retained access to some Facebook user data for more than a year.

Will Strafach, chief executive at Guardian Firewall, said it looked like a “harmless but creepy looking bug.”

The bug appears to affect only iPhone users running the latest iOS 13 software who have already granted the app access to the camera and microphone. It’s believed the bug relates to the “story” view in the app, which opens the camera for users to take photos.

One workaround is to simply revoke the Facebook app’s camera and microphone access in iOS settings.

Facebook vice president of integrity Guy Rosen tweeted this morning that it “sounds like a bug” and that the company was investigating. Only after we published did a spokesperson confirm to TechCrunch that the issue was in fact a bug.

“We recently discovered that version 244 of the Facebook iOS app would incorrectly launch in landscape mode,” said the spokesperson. “In fixing that issue last week in v246 — launched on November 8th — we inadvertently introduced a bug that caused the app to partially navigate to the camera screen adjacent to News Feed when users tapped on photos.”

“We have seen no evidence of photos or videos being uploaded due to this bug,” the spokesperson added. The bug fix was submitted for Apple’s approval today.

“I guess it does say something when Facebook trust has eroded so badly that it will not get the benefit of the doubt when people see such a bug,” said Strafach.

Updated with Facebook comment.

Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit

A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of media personality John de Mol and other well-known celebrities.

The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his and other celebrities’ likenesses to shill Bitcoin scams via fake ads run on its platform.

In an immediately enforceable preliminary judgement today, the court ordered Facebook to remove all offending ads within five days and to provide data on the accounts running them within a week.

Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.

The case is similar to a legal action instigated by UK consumer advice personality Martin Lewis last year, when he announced defamation proceedings against Facebook — also for misuse of his image in fake ads for crypto scams.

Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem, namely a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.

In the de Mol case, the lawsuit was allowed to run its course, resulting in today’s preliminary judgement against Facebook. It’s not yet clear whether the company will appeal, but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing celebrity images on the grounds that doing so would breach EU law prohibiting general monitoring obligations from being placed on internet platforms.

However, the court rejected that argument, citing a recent ruling by Europe’s top court on platform obligations to remove hate speech, and concluded that the specificity of the requested measures meant they could not be classified as “general obligations of supervision.”

It also rejected arguments by Facebook’s lawyers that removing the fake scam ads would restrict the freedom of expression of a natural person, or the right to be freely informed, pointing out that the “expressions” involved are aimed at commercial gain and include fraudulent practices.

Facebook also sought to argue that it is already doing all it can to identify and take down the fake scam ads, while conceding that its screening processes are not perfect. But the court said there is no requirement of 100% effectiveness for additional proactive measures to be ordered. Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced.
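
The ruling doesn’t prescribe a particular filter technology, but part of why the court considered the requested measures “specific” rather than general monitoring is how narrowly the target class can be described: ads that pair a protected name with scam language. A toy pre-screen along those lines might look like the sketch below; the keyword lists are illustrative, real systems would also need image matching (does the ad creative contain the celebrity’s face?), and nothing here reflects Facebook’s actual tooling:

```python
CELEBRITIES = {"john de mol"}  # names covered by the court order
SCAM_TERMS = {"bitcoin", "crypto", "guaranteed returns"}  # illustrative

def flag_ad(ad_text: str) -> bool:
    """Flag an ad for review if it pairs a protected name with scam language."""
    text = ad_text.lower()
    return (any(name in text for name in CELEBRITIES)
            and any(term in text for term in SCAM_TERMS))

assert flag_ad("John de Mol's Bitcoin trick: guaranteed returns!")
assert not flag_ad("John de Mol announces a new TV show")
```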

Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising.

It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.

The judgement orders Facebook to remove fake scam ads containing celebrity likenesses from Facebook and Instagram within five days of the order, with a penalty of €10k for each day it fails to comply, up to a maximum of €1M (~$1.1M).

The court order also requires that Facebook provide data to the affected celebrity on the accounts that had been misusing his likeness, within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k.

Facebook has also been ordered to pay the case costs.

Responding to the judgement in a statement, a Facebook spokesperson told us:

We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.

One legal expert describes the judgement as “pivotal.” Law professor Mireille Hildebrandt told us that it provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights, rather than suing for damages, which entails a high burden of proof.

Injunctions are faster and more effective, Hildebrandt added.

The judgement also raises questions around the burden of proof for demonstrating Facebook has removed scam ads with sufficient (increased) accuracy; and what specific additional measures it might deploy to improve its takedown rate.

The introduction of the scam ads report button, though, does provide one clear avenue for measuring takedown performance.

The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about “envisaging” introducing it in other markets, it hasn’t exactly been proactive in doing so — until now, with this court order.

Facebook’s first experimental apps from its ‘NPE Team’ division focus on students, chat & music

This July, Facebook announced a new division called NPE Team, which would build experimental consumer-facing apps, allowing the company to try out new ideas and features to see how people would react. It soon thereafter tapped former Vine GM Jason Toff to join the team as a product manager. The first apps to emerge from the NPE Team have now quietly launched. One, Bump, is a chat app that aims to help people make new friends through conversations, not appearances. Another, Aux, is a social music listening app.

Aux seems a bit reminiscent of an older startup, Turntable.fm, which closed its doors in 2013. As with Turntable.fm, the idea behind Aux is a virtual DJ’ing experience where people, rather than algorithms, program the music. This concept of crowdsourced DJ’ing also caught on in years past with radio stations that put their audiences in control of the playlist through their mobile apps.

Later, streaming music apps like Spotify experimented with party playlists, and various startups launched their own guest-controlled playlists.

The NPE Team’s Aux app is a slightly different take on this general idea of people-powered playlists.

The app is aimed at school-aged kids and teens, who join a party in the app every day at 9 PM. They then choose the songs they want to play and compete for the “AUX” to get theirs played first. At the end of the night, a winner is chosen based on how many “claps” they receive.

As the app describes it, Aux is a “DJ for Your School” — a title that’s a bit confusing, as it brings to mind music being played over the school’s intercom system, as opposed to a social app for kids who attend school to use in the evenings.

Aux launched on August 8, 2019 in Canada, and has fewer than 500 downloads on iOS, according to data from Sensor Tower. It’s not available on Android. It briefly ranked No. 38 among all Music apps on the Canadian App Store on October 22, which may point to some sort of short campaign to juice the downloads.

The other new NPE Team app is Bump, which aims to help people “make new friends.”

Bump is essentially an anonymous chat app; the idea is that it can help people connect by giving them icebreakers to respond to using text. There are no images, videos or links in Bump — just chats.

Based on the App Store screenshots, the app seems to be intended for college students. The screenshots show questions about “the coolest place” on campus and where to find cheap food, and a sample chat mentions things like classes and roommate troubles.

There could be a dating component to the app, as well, as it stresses that Bump helps people make a connection through “dialog versus appearances.” That levels the playing field a bit, compared with other social apps — and certainly dating apps — where the most attractive users with the best photos tend to receive the most attention.

Chats in Bump take place in real time, and you can only message in one chat at a time. There’s also a time limit of 30 seconds to respond to messages, which keeps the chat active. When the chat ends, the app will ask you if you want to keep in touch with the other person. Only if both people say yes will you be able to chat with them again.
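
That “both must say yes” rule is the same mutual opt-in logic dating apps use for matches. Here is a minimal sketch of how an app might model it, purely illustrative rather than Bump’s actual implementation:

```python
class ChatSession:
    """Tracks whether both participants of an ended chat want to reconnect."""

    def __init__(self, user_a: str, user_b: str) -> None:
        self.users = {user_a, user_b}
        self.opted_in: set[str] = set()  # users who answered "yes"

    def answer(self, user: str, wants_to_continue: bool) -> None:
        if user not in self.users:
            raise ValueError(f"{user} is not in this chat")
        if wants_to_continue:
            self.opted_in.add(user)

    @property
    def reconnected(self) -> bool:
        # Neither side learns the other's answer unless both opt in.
        return self.opted_in == self.users

session = ChatSession("ana", "ben")
session.answer("ana", True)
assert not session.reconnected  # still waiting on ben
session.answer("ben", True)
assert session.reconnected      # both said yes, so they can chat again
```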

Bump is available on both iOS and Android and is live in Canada and the Philippines. Bump once ranked as high as No. 252 in Social Networking on the Canadian App Store on September 1, 2019, according to Sensor Tower. However, it’s not ranking at all right now.

What’s interesting is that only one of these NPE Team apps, Bump, discloses in its App Store description that the NPE Team is from Facebook. The other, Aux, doesn’t mention this. However, both do point to an app privacy policy that’s hosted on Facebook.com for those who go digging.

That’s not too different from how Google’s in-house app incubator, Area 120, behaves. Some of its apps aren’t clear about their affiliation with Google, save for a link to Google’s privacy policy. It seems these companies want to see if the apps succeed or fail on their own merit, not because of their parent company’s brand name recognition.

Facebook hasn’t said much about its plans for the NPE Team beyond the fact that its apps will focus on new ways of building community and may be shut down quickly if they’re not useful.

Facebook has been asked for comment about the new apps and we’ll update if one is provided.

California accuses Facebook of ignoring subpoenas in state’s Cambridge Analytica investigation

California’s attorney general Xavier Becerra has accused Facebook of “continuing to drag its feet” by failing to provide documents to the state’s investigation into Facebook and Cambridge Analytica.

The attorney general said in a court filing Wednesday that Facebook had provided a “patently deficient” response to two sets of subpoenas for the previously undisclosed investigation started more than a year ago. “Facebook has provided no answers for nineteen interrogatories and produced no documents in response to six document requests,” the filing said.

Among the documents sought are communications by executives, including chief executive Mark Zuckerberg and chief operating officer Sheryl Sandberg, and documentation relating to the company’s privacy changes.

The filing said the social media giant was “failing to comply with lawfully issued subpoenas and interrogatories” for what the attorney general says involves “serious allegations of unlawful business practices by one of the richest companies in the world,” referring to Facebook.

Becerra is now asking a court to compel Facebook to produce the documents.

The now-defunct Cambridge Analytica scraped tens of millions of Facebook profiles as part of an effort to help the Trump presidential campaign decide which swing voters to target with election-related advertising. Facebook banned the analytics and voter data firm following the unauthorized scraping. Facebook was later fined $5 billion by the Federal Trade Commission for violating a 2012 privacy decree that demanded the company better protect its users’ data.

A Facebook spokesperson did not respond to a request for comment.

Twitter suspends accounts affiliated with Hamas and Hezbollah

Twitter suspended several accounts affiliated with Hamas and Hezbollah over the weekend after being repeatedly asked to do so by a bipartisan group of U.S. Representatives. The lawmakers — Josh Gottheimer (D-NJ), Tom Reed (R-NY), Max Rose (D-NY) and Brian Fitzpatrick (R-PA) — criticized the company for allowing the accounts to stay up even though Hamas and Hezbollah are designated as Foreign Terrorist Organizations by the United States government.

The suspended accounts include Hamas’ English- and Arabic-language accounts, as well as those belonging to Al-Manar, a television station linked to Hezbollah, and the Hamas-affiliated news service Quds News Network.

Hamas’ suspended English-language Twitter account

Twitter initially told the congressmen that it distinguishes between political and military factions of those organizations. In an Oct. 22 response, the House members told Twitter that “this distinction is not meaningful, nor is it widely shared. Hezbollah and Hamas are terrorist organizations as designated by the United States Government. Period.”

On Nov. 1, Twitter’s director of public policy for the United States and Canada, Carlos Monje Jr., replied that the accounts had been suspended after a review.

“Twitter’s policy is to remove or terminate all accounts it identifies as owned or operated by, or directly affiliated with, any designated foreign terrorist association. If Twitter identifies an account as affiliated with Hamas or Hizballah [sic], Twitter’s policy is to terminate that account,” he wrote in a letter to the congressmen.

Monje added that “Twitter also takes significant steps to identify accounts that are not directly affiliated with a designated foreign terrorist organization but which nonetheless promote or support violent extremism.”

Raja Adulhaq, co-founder of Quds News Network, told the Wall Street Journal that three of the news agency’s accounts had been removed, and described the suspensions as a “clear censorship of Palestinian narratives.”

The accounts’ suspension comes as social media companies, including Twitter, Facebook and YouTube, face increased scrutiny from lawmakers over the content and advertising they allow on their platforms. Twitter recently said it would stop running political ads, an announcement that came after Facebook CEO Mark Zuckerberg defended his company’s policy of not fact-checking political ads while testifying in front of Congress last month.

TechCrunch has contacted Twitter for comment.