Facebook prototypes Favorites for close friends microsharing

Facebook is building its own version of Instagram Close Friends, the company confirms to TechCrunch. There are a lot of people who don’t share on Facebook because it can feel risky or awkward as its definition of “friends” has swelled to include family, work colleagues and distant acquaintances. No one wants their boss or grandma seeing their weekend partying or edgy memes. There are whole types of sharing, like Snapchat’s Snap Map-style live location tracking, that feel creepy to expose to such a wide audience.

The social network needs to get a handle on microsharing. Yet Facebook has tried and failed over the years to get people to build Friend Lists for posting to different subsets of their network.

Back in 2011, Facebook said that 95% of users hadn’t made a single list. So it tried auto-grouping people into Smart Lists like High School Friends and Co-Workers, and offered manual always-see-in-feed Close Friends and only-see-important-updates Acquaintances lists. But those lists, too, saw little traction and few product updates over the past eight years. Facebook ended up shutting down Friend Lists Feeds, which let people view what certain sets of friends shared, last year.

Then a year ago, Instagram made a breakthrough. Instead of making a complicated array of Friend Lists you could never remember who was on, it made a single Close Friends list with a dedicated button for sharing to them from Stories. Instagram’s research found 85% of a user’s Direct messages go to the same three people, so why not make that easier for Stories without pulling everyone into a group thread? Last month I wrote that “I’m surprised Facebook doesn’t already have its own Close Friends feature, and it’d be smart to build one.”

How Facebook Favorites works

Now Facebook is in fact prototyping its version of Instagram Close Friends called Favorites. It lets users designate certain friends as Favorites, and then instantly post their Story from Facebook or Messenger to just those people instead of all their friends, as is the default.

The feature was first spotted inside Messenger by reverse engineering master and frequent TechCrunch tipster Jane Manchun Wong. Buried in the Android app is the code that let Wong generate the screenshots (above) of this unreleased feature. They show how when users go to share a Story from Messenger, Facebook offers to let users post it to Favorites, and edit who’s on that list or add to it from algorithmic suggestions. Users in that Favorites list would then be the only recipients of that post within Stories, like with Instagram Close Friends.


A Facebook spokesperson confirmed to me that this feature is a prototype the Messenger team created. It’s an early exploration of the microsharing opportunity, and the feature isn’t being tested internally with employees or publicly in the wild. The spokesperson describes Favorites as a kind of shortcut for sharing to a specific set of people. They tell me that Facebook is always exploring new ways to share, and, as discussed at its F8 conference this year, Facebook is focused on improving the experience of sharing with and staying more connected to your closest friends.

Unlocking creepier sharing

There are a ton of benefits Facebook could get from a Favorites feature if it ever launches. First, users might share more often if they can make content visible to just their best pals, as those people wouldn’t get annoyed by over-posting. Second, Facebook could get new, more intimate types of content shared, from the heartfelt and vulnerable to the silly and spontaneous to the racy and shocking — stuff people don’t want every single person they’ve ever accepted a friend request from to see. Favorites could reduce self-censorship.

“No one has ever mastered a close friends graph and made it easy for people to understand . . . People get friend requests and they feel pressure to accept,” Instagram director of product Robby Stein told me when it launched Close Friends last year. “The curve is actually that your sharing goes up and as you add more people initially, as more people can respond to you. But then there’s a point where it reduces sharing over time.” Google+, Path and other apps have died chasing this purposefully selective microsharing behavior.

Facebook Favorites could stimulate lots of sharing of content unique to its network, thereby driving usage and ad views. After all, Facebook said in April that it had 500 million daily Stories users across Facebook and Messenger, the same number as Instagram Stories and WhatsApp Status.

Before Instagram launched Close Friends, it actually tested the feature under the name Favorites and allowed you to share feed posts as well as Stories to just that subset of people. And last month Instagram launched the Close Friends-only messaging app Threads, which lets you share an Auto-Status about where you are or what you’re up to.

Facebook Favorites could similarly unlock whole new ways to connect. Facebook can’t follow some apps like Snapchat down more privacy-centric product paths because it knows users are already uneasy about it after 15 years of privacy scandals. Apps built for sharing to different graphs than Facebook have been some of the few social products that have succeeded outside its empire, from Twitter’s interest graph, to TikTok’s fandoms of public entertainment, to Snapchat’s messaging threads with besties.

Instagram Threads

A competent and popular Facebook Favorites could let it try products in location, memes, performances, Q&A, messaging, live streaming and more. It could build its own take on Instagram Threads, let people share their exact location with just Favorites instead of just the neighborhood they’re in with Nearby Friends, or create a dedicated meme resharing hub like the LOL experiment for teens it shut down. At the very least, it could integrate with Instagram Close Friends so you could syndicate posts from Instagram to your Facebook Favorites.

The whole concept of Favorites aligns with Facebook CEO Mark Zuckerberg’s privacy-focused vision for social networking. “Many people prefer the intimacy of communicating one-on-one or with just a few friends,” he writes. Facebook can’t just be the general purpose catch-all social network we occasionally check for acquaintances’ broadcasted life updates. To survive another 15 years, it must be where people come back each day to get real with their dearest friends. Less can be more.

Facebook Dating now integrates with Instagram and Facebook Stories

Facebook Dating, an opt-in feature of the main Facebook app, will begin to tap into the content users are already creating across both Facebook and Instagram to enhance its service. Today, Facebook Dating users will be able to add their Facebook or Instagram Stories to Facebook Dating, in order to share their everyday moments with daters.

As opposed to more polished profile photos, Stories can give someone better insight into what a person is like by showcasing what activities they like to engage in, their hobbies, their interests, their personality, and their humor, among other things. And if the daters themselves appear in a Story, it lets others see what they really look like, even if their online photos are out-of-date.

The way the feature is being implemented on Facebook Dating puts the user in control of what’s being shared. That is, your Facebook or Instagram Stories are not automatically copied over to Facebook Dating by default. Instead, users can select which of their Stories are shared and which are not.

In addition, anyone a dater has blocked or passed on within Facebook Dating won’t be able to see them.

If a Story is inappropriate, you can also block the user and report it, like you can with other content elsewhere on Facebook.

One thing to be aware of is that this feature is a way to share a Story to Facebook Dating, but the Story isn’t exclusively designed for Facebook Dating. That means, if you decide to use the Story feature as some sort of video dating intro, your Facebook and Instagram friends could see this, as well.

When browsing Facebook Dating, you’ll be able to view other people’s Stories along with their profiles. And if you match with someone, you can continue to view their Stories and then even use that to spark a conversation, which takes place in the app. This is similar to how you can respond to someone’s Facebook or Instagram Story today, which then appears in Messenger or Instagram’s Messages section, respectively.

The new Stories feature could be a potential competitive advantage for Facebook Dating, because it allows users a new way to express themselves without requiring them to create new content just for the dating service itself. Even if a rival dating app like Tinder or Bumble introduced their own version of Stories, many wouldn’t think to launch a dating app to capture their everyday moments.

Stories integration is rolling out starting today to Facebook Dating.

Dating, as a Facebook feature, is currently available in 20 countries, including Argentina, Bolivia, Brazil, Canada, Chile, Colombia, Ecuador, Guyana, Laos, Malaysia, Mexico, Paraguay, Peru, the Philippines, Singapore, Suriname, Thailand, the United States, Uruguay, and Vietnam. It will arrive in Europe by early 2020, Facebook says.

The company has not disclosed how many people are using Facebook Dating at this time.

Amnesty International latest to slam surveillance giants Facebook and Google as “incompatible” with human rights

Human rights charity Amnesty International is the latest to call for reform of surveillance capitalism — blasting the business models of “surveillance giants” Facebook and Google in a new report that warns the pair’s market-dominating platforms are “enabling human rights harm at a population scale”.

“[D]espite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost,” Amnesty warns. “The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”

“This isn’t the internet people signed up for,” it adds.

What’s most striking about the report is the familiarity of the arguments. There is now a huge weight of consensus criticism around surveillance-based decision-making — from Apple’s own Tim Cook through scholars such as Shoshana Zuboff and Zeynep Tufekci to the United Nations — that’s itself been fed by a steady stream of reportage of the individual and societal harms flowing from platforms’ pervasive and consentless capturing and hijacking of people’s information for ad-based manipulation and profit.

This core power asymmetry is maintained and topped off by self-serving policy positions which at best fiddle around the edges of an inherently anti-humanitarian system. While platforms have become practiced in dark arts PR — offering, at best, a pantomime ear to the latest data-enabled outrage that’s making headlines, without ever actually changing the underlying system. That surveillance capitalism’s abusive modus operandi is now inspiring governments to follow suit — aping the approach by developing their own data-driven control systems to straitjacket citizens — is exceptionally chilling.

But while the arguments against digital surveillance are now very familiar what’s still sorely lacking is an effective regulatory response to force reform of what is at base a moral failure — and one that’s been allowed to scale so big it’s attacking the democratic underpinnings of Western society.

“Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression – but evidently, given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights,” Amnesty writes.

“The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than aberrations.”

Needless to say, Facebook and Google do not agree with Amnesty’s assessment. But, well, they would say that, wouldn’t they?

Amnesty’s report notes there is now a whole surveillance industry feeding this beast — from adtech players to data brokers — while pointing out that the dominance of Facebook and Google, aka the adtech duopoly, over “the primary channels that most of the world relies on to engage with the internet” is itself another harm, as it lends the pair of surveillance giants “unparalleled power over people’s lives online”.

“The power of Google and Facebook over the core platforms of the internet poses unique risks for human rights,” it warns. “For most people it is simply not feasible to use the internet while avoiding all Google and Facebook services. The dominant internet platforms are no longer ‘optional’ in many societies, and using them is a necessary part of participating in modern life.”

Amnesty concludes that it is “now evident that the era of self-regulation in the tech sector is coming to an end” — saying further state-based regulation will be necessary. Its call there is for legislators to follow a human rights-based approach to rein in surveillance giants.

You can read the report in full here (PDF).

Give InKind’s smarter giving platform brings in surprise $1.5 million in pre-seed funding

Helping out a friend in need online can be surprisingly difficult. While giving cash is easy enough, that’s often not what people need most — so Give InKind aims to be the platform where you can do a lot more than write a check. The idea is such a natural one that the company tripled its goal for a pre-seed round, raising $1.5 million from Seattle investors.

The company was selected for inclusion in the Female Founders Alliance’s Ready Set Raise accelerator, at the demo day for which I saw founder Laura Malcolm present.

The problem Malcolm is attempting to solve is simply that in times of hardship, not only do people not want to deal with setting up a fundraising site, but money isn’t even what they require to get through that period. Malcolm learned this herself when a personal tragedy showed her that the existing tools for letting others help were simply inadequate.

“My friends and family were trying to support me from around the country, but the tools they had to do that were outdated and didn’t solve the problems for us,” she explained. “There just wasn’t one place to put all the help that’s needed, whether that’s meal drop-off, or rides to school for the kids, or a wishlist for Instacart, or Lyft credits. Every situation is unique, and no one has put it all together in one place where, when someone says ‘how can I help?’ you can just point there.”

The idea with Give InKind is to provide a variety of options for helping someone out. Of course you can donate cash, but you can also buy specific items from wishlists, coordinate deliveries, set up recurring gifts (like diapers or gift boxes), or organize in-person help on a built-in calendar.

These all go on a central profile page that Malcolm noted is rarely set up by the beneficiary themselves.

“90 percent of pages are set up by someone else. Not everyone has been impacted by one of these situations, but I think almost everyone has known someone who has, and has wondered how they were supposed to respond or help,” she explained. “So this isn’t about capturing people during a time of need, but about solving the problem for people who want to know how to help.”

That certainly resonated with me, as I have always felt the cash donation option when someone is going through a tough time to be pretty impersonal and general. It’s nice to be able to help out in person, but what about a friend in another city who’s been taken out of action and needs someone else to figure out the dog walking situation? Give InKind is meant to surface specific needs like that and provide the links (to, for instance, Rover) and relevant information all in one place.

“The majority of actions on the site are people doing things themselves — signing up for meals, or to help. The calendar view is for coordination, and it’s the most used part of the site. About 70 percent is that, the rest is those national services [i.e. Instacart, Uber, etc.],” Malcolm said.

Locally run services (cleaners that aren’t on a national directory, for instance) are on the roadmap, but as you can imagine that takes a lot of footwork to put together, so it will have to wait.

Right now the site works almost entirely on an affiliate model; helpers make accounts to do things like add themselves to the schedule or help edit the profile, then get sent out to the merchant site to complete the transaction there. The company is experimenting with on-site purchases for some things, but the idea isn’t to host transactions except where that can really add value.

The plan for expansion is to double down on the existing organic growth patterns of the site. Every page that gets set up attracts multiple new users and visits, and those users are far more likely to start more pages even years down the line. Between improving that and some actual marketing work, Malcolm feels sure that they can grow quickly and could soon join other major giving services like GoFundMe in scale.

Ready, set, raise… a lot more than expected

Give InKind came to my attention through the Female Founders Alliance here in Seattle, which hosted a demo night a little while ago to highlight the companies and, naturally, their founders as well. Although some of the companies focused on female-forward issues, for instance the difficulty of acquiring workwear tailored to women’s bodies, the idea is more to find valuable companies that just happen to have female founders.

“Ready Set Raise was built to find high potential, dramatically undervalued investment opportunities, and translate them into something the VC community can understand,” said FFA founder Leslie Feinzaig. “Our last member survey results were consistent with findings that women founders raise less capital but make it go further. Give InKind is a perfect example. They bootstrapped for 3 years, found product market fit, grew 20% every month, and still struggled to resonate with investors.”

Yet after presenting, Malcolm’s company was honored at the event with a $100K investment from Trilogy Ventures. And having originally kicked off fundraising with a view to a $500K round, she soon found she had to cap it at an unexpected but very welcome $1.5M. The final list of participants in the round includes Madrona Venture Group, SeaChange Fund, Keeler Investments, FAM Fund, Grubstakes, and X Factor Ventures.

I suggested that this must have been something of a validating experience.

“It’s super validating,” she agreed. “The founder journey is long and hard, and the odds are not in favor of female founders or impact companies, necessarily, and consumer is not huge in Seattle, either. We really sort of defied the odds across the board raising this round so quickly… Seattle really showed up.”

She described the accelerator as being “incredibly unique. It’s entirely about creating access for female founders to investors, mentors, and experts.”

“We spent so much time turning my model upside down and shaking everything out of it. Turns out it was much more defensible than I thought. We didn’t change the business, and we didn’t change the product — we lightly changed the positioning,” she said. “This combination of access with coaching and mentorship, getting the ability to present the business in a way that’s compelling, you realize how much of this is held back from people who don’t have these opportunities. I’ve been carrying around Give InKind for three years in a paper bag, and they put a bell on it.”

Feinzaig cited the competitiveness of the application process and the quality of its coaches, who give lots of one-on-one time, for the high caliber of the companies emerging from the accelerator. You can check out the rest of the companies in the second cohort here — and of course Give InKind is live should you or anyone you know need a helping hand.

Facebook announces dates for its 2020 F8 developers conference

Facebook announced plans for its annual F8 developer conference, where the company shows off its latest technology, apps and its vision for the future. The company says the next event will take place May 5-6, 2020 at the McEnery Convention Center in San Jose, CA.

Interested attendees can sign up at www.f8.com to be notified when registration opens and receive updates.

Last year, the company used the F8 event to introduce a huge redesign of Facebook.com, plus upgrades, products, and expansions in areas like Messenger, WhatsApp, Dating, Marketplace and beyond. It also showed off how it’s putting new technology to work, from VR to smart home hardware to developer-facing projects such as Facebook’s Ax and BoTorch initiatives.

Facebook isn’t yet talking about what it will show off at F8 in 2020, but says it will feature: “product demos, deep-dive sessions that showcase how technology can enable you to come together and create your best work, and opportunities for you to network with our global developer community and learn from each other,” according to its announcement.

Beyond the news that comes from F8 in terms of individual products, the conference gives Facebook a platform to present its overarching vision.

Last year, for example, the vision was of a network that’s trying to become more private and working to retain users. More recently, Facebook’s initiatives show a company that’s still trying to be disruptive, with new products like Libra. But some of its launches also demonstrate that the company is painfully aware of how much traction it’s ceding to other social apps, like Snapchat and TikTok. We’ll see if Facebook has any new responses to those challenges, or to the larger, more existential issues it now faces with the rise of antitrust investigations that put the company in their crosshairs.

Twitter launches a way to report abusive use of its Lists feature

Like many things found on today’s social media platforms, Twitter’s Lists feature was introduced without thinking about the impact it could have on marginalized groups, or how it could otherwise be used for abuse or surveillance if put in the hands of bad actors. Today, Twitter is taking a step to address that problem with the launch of a new reporting feature that specifically addresses the abusive use of Twitter Lists.

The feature is launching first on iOS today, and will come soon to Android and the web, Twitter says.

Similar to reporting an abusive tweet, Twitter users will tap on the three-dot icon next to the List in question, and then choose “Report.” From the next screen, you’ll select “It’s abusive or harmful.” Twitter will also ask for additional information at that point and will send an email confirming receipt of the report along with other recommendations as to how to manage your Twitter experience.

Twitter Lists have been abused for years, as they became another way to target and harass people — particularly women and other minority groups. They were particularly useful as a way to avoid being banned for abusive tweets, as Twitter took no notice of Lists.

Twitter has been aware of the problem for years, noted CNBC in an exposé that ran over the summer.

Back in 2017, Twitter said it would no longer notify users when they’ve been added to a list — an attempt to cut back on what were very often upsetting notifications. It then reversed the decision after people argued that notifications were how they learned what sort of harmful lists they had been added to in the first place.

Despite Twitter’s understanding of how Lists were abused, there have not been any good tools for getting an abusive list removed from Twitter itself — users could only block the list’s creator.

Twitter has admitted that despite the availability of its reporting tools and the increasing speed with which it handles abuse reports, there’s still too much pressure on people to flag abuse for themselves. The company says it wants to figure out how to be more proactive: today, only 38% of abuse is flagged by its technology, with the rest coming from user reports.

This problem and all the many like it have to do with who built our social media tools in the first place.

Twitter, like other tech companies, has struggled with a lack of diversity, which means there’s a significant lack of understanding about how features could be twisted to be used in ways no one intended. Though Twitter’s diversity metrics have been improving, the company as of this spring was 40.2% female, but just 4.5% black and 3.9% Latinx.

The other issue with Twitter — and social media in general — is that there’s some distance between the abuser and the victim of harassment. The latter is often not seen as a real person, but rather a placeholder meant to absorb someone’s malcontent, outrage, or hatred. And thanks to the platform’s anonymity, there are no real-world consequences for bad behavior on Twitter the way there would be if those same hateful things were said in a public place — like in a community setting such as your local church or social group, or in your workplace.

Finally, Twitter’s trend toward pithiness has led to it becoming a place to be sarcastic, cynical, and witty at others’ expense — a trend driven by a prolific but small crowd of Twitter users. The goal has very much been to “perform” on Twitter, and accumulate likes and retweets along the way.

Twitter says the new feature is rolling out now to iOS.


Facebook’s Libra code chugs along ignoring regulatory deadlock

“5 months and growing strong,” the Libra Association announced today in a post about its technical infrastructure that completely omits the fierce regulatory backlash to its cryptocurrency.

40 wallets, tools, and block explorers, plus 1,700 GitHub commits, have now been built on its blockchain testnet, which has seen 51,000 mock transactions in the past two months. Libra nodes that process transactions are now being run by Coinbase, Uber, BisonTrails, Iliad, Xapo, Anchorage, and Facebook’s Calibra. Six more nodes are being established, plus 8 more are being set up by members who lack technical teams, meaning all 21 members have nodes running or in the works.

But the update on the Libra backend doesn’t explain how the association plans to get all the way to its goal of 100 members and nodes by next year when it originally projected a launch. And it gives no nod to the fact that even if Libra is technically ready to deploy its mainnet in 2020, government regulators in the US and around the world still won’t necessarily let it launch.

Facebook itself seems to be hedging its bets on fintech in the face of pushback against Libra. This week it began the launch of Facebook Pay, which will let users pay friends, merchants, and charities with a single payment method across Facebook, Messenger, WhatsApp, and Instagram.

Facebook Pay could help the company drive more purchases on its platform, get more insights into transactions, and lead merchants to spend more on ads to lure in sales facilitated by quicker payments. That’s most of what Facebook was trying to get out of Libra in the first place, beyond better financial inclusion.

Last month’s congressional testimony from Facebook CEO Mark Zuckerberg was less contentious than Libra board member David Marcus’ appearances on Capitol Hill in July. Yet few of lawmakers’ core concerns about how Libra could facilitate money laundering, endanger users’ assets, and give Facebook even more power amid ongoing antitrust investigations were assuaged.

This set of announcements from the Libra Core summit of technical members was an opportunity for the project to show how it was focused on addressing fraud, security, and decentralization of power. Instead, the Libra Association took the easy route of focusing on what the Facebook-led development team knows best: writing code, not fixing policy. TechCrunch provided questions to the Libra Association and some members but the promised answers were not returned before press time.

“For those organizations without a technical team to implement a node, the Libra Association is working on a strategy to support deployment in 2020, when the Libra Core feature set is complete,” the Association’s Michael Engle writes. “The Libra Association intends to deploy 100 nodes on the mainnet, representing a mix of on-premises and cloud-hosted infrastructure.” It feels a bit like Libra is plugging its ears.

Having proper documentation, setting up CLAs to ease GitHub contributions, standardizing the Move code language, running a bug bounty program, and publishing a technical roadmap are a good start. But until the Association can answer Congress’ questions directly, regulators are likely to refuse Libra the approval that Zuckerberg said the project won’t launch without.

Daily Crunch: TikTok starts experimenting with commerce

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. TikTok tests social commerce

The short-form video app said it’s allowing some users to add links to e-commerce sites (or any other destination) to their profile, while also offering creators the ability to easily send their viewers to shopping websites.

On their own, these changes might not sound that dramatic, and parent company ByteDance characterizes them as experiments. But it could eventually lead TikTok to become a major force in commerce — and to follow the lead of Instagram, where “link in bio” has become one of the most common promotional messages.

2. Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

A new report from Israeli online child protection startup L1ght has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech.

3. Lyft is ceasing scooter operations in six cities and laying off 20 employees

Lyft notified employees today that it’s pulling its scooters from six markets: Nashville, San Antonio, Atlanta, the Phoenix area, Dallas and Columbus. A spokesperson told us, “We’re choosing to focus on the markets where we can have the biggest impact.”

4. Takeaways from Nvidia’s latest quarterly earnings

After yesterday’s earnings report, Wall Street seems to have barely budged on the stock price — everyone’s waiting for resolution on some of the key questions facing the company. (Extra Crunch membership required.)

5. Virgin Galactic begins ‘Astronaut Readiness Program’ for first paying customers

The program is being run out of the global headquarters of Under Armour, Virgin Galactic’s partner for its official astronaut uniforms. The training, with instruction from Chief Astronaut Instructor Beth Moses and Chief Pilot Dave Mackay, is required for all Virgin Galactic passengers.

6. AWS confirms reports it will challenge JEDI contract award to Microsoft

In a statement, an Amazon spokesperson suggested that there was possible bias in the selection process: “AWS is uniquely experienced and qualified to provide the critical technology the U.S. military needs, and remains committed to supporting the DoD’s modernization efforts.”

7. SoftBank Vision Fund’s Carolina Brochado is coming to Disrupt Berlin

At SoftBank’s Vision Fund, Brochado focuses on fintech, digital health and marketplace startups. Some of her past investments with both Atomico and SoftBank include LendInvest, Gympass, Hinge Health, Ontruck and Rekki.

Twitter makes its political ad ban official

The ban on political ads announced by Twitter two weeks ago has come into effect, and the rules are surprisingly simple — perhaps too simple. No political content, as Twitter defines it, may be promoted; candidates, parties, governments or officials, PACs and certain political nonprofit groups are banned from promoting content altogether.

The idea intended to be made manifest in these policies is that “political message reach should be earned, not bought,” as the company puts it. It’s hard to argue with that (but Facebook will anyway). The new rules apply globally and to all ad types.

It’s important to make clear at the outset that Twitter is not banning political content; it is banning the paid promotion of that content. Every topic is fair game and every person or organization on Twitter can pursue their cause as before — they just can’t pay to get their message in front of more eyeballs.

In its briefly stated rules, the company explains what it means by “political content”:

We define political content as content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.

Also banned are:

Ads that contain references to political content, including appeals for votes, solicitations of financial support, and advocacy for or against any of the above-listed types of political content.

That seems pretty straightforward. Banning political ads is controversial to begin with, but unclear or complicated definitions would really make things difficult.

A blanket ban on many politically motivated organizations will also help clear the decks. Political action committees, or PACs, and their deep-pocketed cousins the super PACs, are banned from advertising at all. That makes sense, since what content would they be promoting other than attempts to influence the political process? 501(c)(4) nonprofit organizations, not as publicly notorious as PACs but huge spenders on political causes, are also banned.

There are of course exemptions, both for news organizations that want to promote coverage of political issues, and “cause-based” content deemed non-political.

The first exemption is pretty natural — although many news organizations do have a political outlook or ideological bent, it’s a far cry from the practice of donating millions directly to candidates or parties. But not just any site can take advantage — you’ll have to have 200,000 monthly unique visitors, make your own content with your own people, and not be primarily focused on a single issue.

The “cause-based” exemption may be where Twitter takes the most heat. As Twitter’s policy states, it will allow “ads that educate, raise awareness, and/or call for people to take action in connection with civic engagement, economic growth, environmental stewardship, or social equity causes.”

These come with some restrictions: They can only be targeted to the state, province, or region level — no ZIP codes, so hyper-local influence is out. And politically charged interests may not be targeted, so you can’t send your cause-based ads just to “socialists,” for example. And they can’t reference or be run on behalf of any of the banned entities above.

But it’s the play in the definition that may come back to bite Twitter. What exactly constitutes “civic engagement” and “social equity causes”? Perhaps these concepts were only vaguely defined by design to be accommodating rather than prescriptive, but if you leave an inch for interpretation, you’d better believe bad actors are going to take a mile.

Clearly this is meant to allow promotion of content like voter registration drives, disaster relief work, and so on. But it’s more than possible someone will try to qualify, say, an anti-immigrant rally as “public conversation around important topics.”

I asked Twitter whether additional guidance on the cause-based content rules would be forthcoming, but a representative simply pointed me to the very language I quoted.

That said, Twitter policy lead Vijaya Gadde said that the company will attempt to be transparent with its decisions on individual issues and clear about changes to the rules going forward.

“This is new territory,” she tweeted. “As with every policy we put into practice, it will evolve and we’ll be listening to your feedback.”

And no doubt they shall receive it — in abundance.

Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

Image search engine Giphy bills itself as providing a “fun and safe way” to search and create animated GIFs. But despite its ban on illicit content, the site is littered with self-harm and child sex abuse imagery, TechCrunch has learned.

A new report from Israeli online child protection startup L1ght — previously AntiToxin Technologies — has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech. The report, shared exclusively with TechCrunch, also showed content encouraging viewers into unhealthy weight loss and glamorizing eating disorders.

TechCrunch verified some of the company’s findings by searching the site using certain keywords. (We did not search for terms that may have returned child sex abuse content, as doing so would be illegal.) Although Giphy blocks many hashtags and search terms from returning results, search engines like Google and Bing still cache images with certain keywords.

When we tested several words associated with illicit content, Giphy sometimes surfaced that content in its own results. And even when Giphy itself returned no banned materials, search engines often returned a stream of would-be banned results.

L1ght develops advanced solutions to combat online toxicity. In its tests, a single search for illicit material returned 195 images on the first results page alone. L1ght’s team then followed tags from one item to the next, uncovering networks of illegal or toxic content along the way. The tags themselves were often innocuous, helping users escape detection, but they served as a gateway to the toxic material.

Despite a ban on self-harm content, researchers found numerous keywords and search terms that surfaced the banned content. We have blurred this graphic image. (Image: TechCrunch)

Much of the more extreme content, including images of child sex abuse, is said to have been tagged using keywords associated with known child exploitation sites.

We are not publishing the hashtags, search terms or sites used to access the content, but we passed on the information to the National Center for Missing and Exploited Children, a national nonprofit established by Congress to fight child exploitation.

Simon Gibson, Giphy’s head of audience, told TechCrunch that content safety was of the “utmost importance” to the company and that it employs “extensive moderation protocols.” He said that when illegal content is identified, the company works with the authorities to report and remove it.

He also expressed frustration that L1ght had not contacted Giphy with the allegations first. L1ght said that Giphy is already aware of its content moderation problems.

Gibson said Giphy’s moderation system “leverages a combination of imaging technologies and human validation,” which involves users having to “apply for verification in order for their content to appear in our searchable index.” Content is “then reviewed by a crowdsourced group of human moderators,” he said. “If a consensus for rating among moderators is not met, or if there is low confidence in the moderator’s decision, the content is escalated to Giphy’s internal trust and safety team for additional review,” he said.

“Giphy also conducts proactive keyword searches, within and outside of our search index, in order to find and remove content that is against our policies,” said Gibson.

L1ght researchers used their proprietary artificial intelligence engine to uncover illegal and other offensive content. Using that platform, the researchers can trace related content, allowing them to uncover vast caches of illegal or banned material that would otherwise largely go unseen.

This sort of toxic content plagues online platforms, but algorithms only play a part. More tech companies are finding human moderation is critical to keeping their sites clean. But much of the focus to date has been on the larger players in the space, like Facebook, Instagram, YouTube and Twitter.

Facebook, for example, has been routinely criticized for outsourcing moderation to teams of lowly paid contractors who often struggle to cope with the sorts of things they have to watch, even experiencing post-traumatic stress-like symptoms as a result of their work. Meanwhile, Google’s YouTube this year was found to have become a haven for online sex abuse rings, where criminals had used the comments section to guide one another to other videos to watch while making predatory remarks.

Giphy and other smaller platforms have largely stayed out of the limelight during the past several years. But L1ght’s new findings indicate that no platform is immune to these sorts of problems.

L1ght says the Giphy users sharing this sort of content would make their accounts private so they wouldn’t be easily searchable by outsiders or the company itself. But even in the case of private accounts, the abusive content was being indexed by some search engines, like Google, Bing and Yandex, which made it easy to find. The firm also discovered that pedophiles were using Giphy as the means of spreading their materials online, including communicating with each other and exchanging materials. And they weren’t just using Giphy’s tagging system to communicate — they were also using more advanced techniques like tags placed on images through text overlays.

This same process was utilized in other communities, including those associated with white supremacy, bullying, child abuse and more.

This isn’t the first time Giphy has faced criticism for content on its site. Last year, a report by The Verge described the company’s struggles to fend off illegal and banned content, and the company was booted from Instagram for letting through racist content.

Giphy is far from alone, but it is the latest example of companies not getting it right. Earlier this year and following a tip, TechCrunch commissioned then-AntiToxin to investigate the child sex abuse imagery problem on Microsoft’s search engine Bing. Under close supervision by the Israeli authorities, the company found dozens of illegal images in the results from searching certain keywords. When The New York Times followed up on TechCrunch’s report last week, its reporters found Bing had done little in the months that had passed to prevent child sex abuse content appearing in its search results.

It was a damning rebuke of the company’s efforts to combat child abuse in its search results, despite pioneering its PhotoDNA photo detection tool, which the software giant built a decade ago to identify illegal images based on a huge database of hashes of known child abuse content.

Giphy’s Gibson said the company was “recently approved” to use Microsoft’s PhotoDNA but did not say if it was currently in use.

Where some of the richest, largest and most-resourced tech companies are failing to preemptively limit their platforms’ exposure to illegal content, startups are filling in the content moderation gaps.

L1ght, which has a commercial interest in this space, was founded a year ago to help combat online predators, bullying, hate speech, scams and more.

The company was started by former Amobee chief executive Zohar Levkovitz and cybersecurity expert Ron Porat, previously the founder of ad-blocker Shine, after Porat’s own son experienced online abuse in the online game Minecraft. The founders realized the problem had outgrown users’ own ability to protect themselves, and that technology needed to come to their aid.

L1ght’s business involves deploying its technology much as it did here with Giphy — to identify, analyze and predict online toxicity with near real-time accuracy.