Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

Image search engine Giphy bills itself as providing a “fun and safe way” to search and create animated GIFs. But despite its ban on illicit content, the site is littered with self-harm and child sex abuse imagery, TechCrunch has learned.

A new report from Israeli online child protection startup L1ght — previously AntiToxin Technologies — has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech. The report, shared exclusively with TechCrunch, also showed content encouraging unhealthy weight loss and glamorizing eating disorders.

TechCrunch verified some of the company’s findings by searching the site using certain keywords. (We did not search for terms that may have returned child sex abuse content, as doing so would be illegal.) Although Giphy blocks many hashtags and search terms from returning results, search engines like Google and Bing still cache images with certain keywords.

When we tested using several words associated with illicit content, Giphy sometimes showed content from its own results. When it didn’t return any banned materials, search engines often returned a stream of would-be banned results.

L1ght develops advanced solutions to combat online toxicity. In its tests, a single search for illicit material returned 195 pictures on the first page of results alone. L1ght’s team then followed tags from one item to the next, uncovering networks of illegal or toxic content along the way. The tags themselves were often innocuous in order to help users escape detection, but they served as a gateway to the toxic material.
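L1ght has not published its exact methodology, but the process it describes, following tags from one item to the next, is essentially a breadth-first traversal of a tag co-occurrence graph. Below is a minimal, hypothetical sketch of that idea for moderation research purposes; the fetch function is a placeholder for whatever lookup a researcher has available, not a real API client.

```python
from collections import deque

def crawl_tag_network(seed_tags, fetch_items_for_tag, max_items=1000):
    """Breadth-first walk over a tag co-occurrence graph.

    `fetch_items_for_tag(tag)` stands in for whatever lookup a researcher
    has (an API client, a cached index); each item it yields is expected
    to carry an "id" and a "tags" iterable. Returns the item IDs reached
    before hitting `max_items`.
    """
    seen_tags = set(seed_tags)
    seen_items = set()
    queue = deque(seed_tags)

    while queue and len(seen_items) < max_items:
        tag = queue.popleft()
        for item in fetch_items_for_tag(tag):
            if item["id"] in seen_items:
                continue
            seen_items.add(item["id"])
            # Innocuous-looking tags on one item often lead to further
            # items, so every newly seen tag is queued for its own lookup.
            for neighbor in item.get("tags", []):
                if neighbor not in seen_tags:
                    seen_tags.add(neighbor)
                    queue.append(neighbor)
    return seen_items
```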

Despite a ban on self-harm content, researchers found numerous keywords and search terms that returned the banned content. We have blurred this graphic image. (Image: TechCrunch)

Much of the more extreme content — including images of child sex abuse — is said to have been tagged using keywords associated with known child exploitation sites.

We are not publishing the hashtags, search terms or sites used to access the content, but we passed on the information to the National Center for Missing and Exploited Children, a national nonprofit established by Congress to fight child exploitation.

Simon Gibson, Giphy’s head of audience, told TechCrunch that content safety was of the “utmost importance” to the company and that it employs “extensive moderation protocols.” He said that when illegal content is identified, the company works with the authorities to report and remove it.

He also expressed frustration that L1ght had not contacted Giphy with the allegations first. L1ght said that Giphy is already aware of its content moderation problems.

Gibson said Giphy’s moderation system “leverages a combination of imaging technologies and human validation,” which involves users having to “apply for verification in order for their content to appear in our searchable index.” Content is “then reviewed by a crowdsourced group of human moderators,” he said. “If a consensus for rating among moderators is not met, or if there is low confidence in the moderator’s decision, the content is escalated to Giphy’s internal trust and safety team for additional review,” he said.
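Gibson’s description amounts to a tiered review pipeline: crowdsourced ratings first, with escalation to an internal team when confidence is low. As a rough illustration only, and not Giphy’s actual system, a consensus check with low-confidence escalation might look something like this; the thresholds and names are hypothetical.

```python
from collections import Counter

# Hypothetical thresholds -- not Giphy's actual parameters.
MIN_VOTES = 3           # require at least this many moderator ratings
CONSENSUS_RATIO = 0.8   # share of votes that must agree to auto-resolve

def triage(ratings):
    """Decide whether crowdsourced ratings settle a piece of content or
    whether it should be escalated to an internal trust & safety team.

    `ratings` is a list of labels such as "ok" or "violation".
    Returns ("resolved", label) or ("escalate", None).
    """
    if len(ratings) < MIN_VOTES:
        return ("escalate", None)          # not enough signal yet

    label, count = Counter(ratings).most_common(1)[0]
    confidence = count / len(ratings)

    if confidence >= CONSENSUS_RATIO:
        return ("resolved", label)         # moderators agree strongly
    return ("escalate", None)              # low confidence -> human experts

# Example: a split vote gets escalated rather than auto-resolved.
print(triage(["ok", "violation", "ok", "violation"]))   # ('escalate', None)
print(triage(["violation", "violation", "violation"]))  # ('resolved', 'violation')
```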

“Giphy also conducts proactive keyword searches, within and outside of our search index, in order to find and remove content that is against our policies,” said Gibson.

L1ght researchers used their proprietary artificial intelligence engine to uncover illegal and other offensive content. Using that platform, the researchers can surface related content, uncovering vast caches of illegal or banned material that would otherwise largely go unseen.

This sort of toxic content plagues online platforms, but algorithms play only a part in combating it. More tech companies are finding that human moderation is critical to keeping their sites clean. But much of the focus to date has been on the larger players in the space, like Facebook, Instagram, YouTube and Twitter.

Facebook, for example, has been routinely criticized for outsourcing moderation to teams of lowly paid contractors who often struggle to cope with the sorts of things they have to watch, even experiencing post-traumatic stress-like symptoms as a result of their work. Meanwhile, Google’s YouTube this year was found to have become a haven for online sex abuse rings, where criminals had used the comments section to guide one another to other videos to watch while making predatory remarks.

Giphy and other smaller platforms have largely stayed out of the limelight during the past several years. But L1ght’s new findings indicate that no platform is immune to these sorts of problems.

L1ght says the Giphy users sharing this sort of content would make their accounts private so they wouldn’t be easily searchable by outsiders or the company itself. But even in the case of private accounts, the abusive content was being indexed by some search engines, like Google, Bing and Yandex, which made it easy to find. The firm also discovered that pedophiles were using Giphy as a means of spreading their materials online, including to communicate with each other and exchange materials. And they weren’t just using Giphy’s tagging system to communicate — they were also using more advanced techniques, like tags placed on images through text overlays.
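Text burned into the image itself slips past tag- and keyword-based filters, which is why moderation tooling typically pairs metadata checks with optical character recognition over the frames. Here is a minimal sketch of that technique, assuming the pytesseract and Pillow libraries and a hypothetical blocklist; it illustrates the general approach rather than how Giphy or L1ght actually operate.

```python
from PIL import Image, ImageSequence
import pytesseract

BLOCKLIST = {"example-banned-term"}   # hypothetical placeholder terms

def overlay_text_flags(gif_path):
    """Run OCR over each frame of a GIF and return any blocklisted words
    found in text overlays. Tag checks alone miss text rendered into the
    pixels themselves."""
    hits = set()
    with Image.open(gif_path) as gif:
        for frame in ImageSequence.Iterator(gif):
            text = pytesseract.image_to_string(frame.convert("RGB")).lower()
            hits.update(word for word in BLOCKLIST if word in text)
    return hits
```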

This same process was utilized in other communities, including those associated with white supremacy, bullying, child abuse and more.

This isn’t the first time Giphy has faced criticism for content on its site. Last year, a report by The Verge described the company’s struggles to fend off illegal and banned content, and the company was booted from Instagram for letting through racist content.

Giphy is far from alone, but it is the latest example of companies not getting it right. Earlier this year and following a tip, TechCrunch commissioned then-AntiToxin to investigate the child sex abuse imagery problem on Microsoft’s search engine Bing. Under close supervision by the Israeli authorities, the company found dozens of illegal images in the results from searching certain keywords. When The New York Times followed up on TechCrunch’s report last week, its reporters found Bing had done little in the months that had passed to prevent child sex abuse content appearing in its search results.

It was a damning rebuke of the company’s efforts to combat child abuse in its search results, despite Microsoft having pioneered PhotoDNA, a photo detection tool the software giant built a decade ago to identify illegal images based on a huge database of hashes of known child abuse content.
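PhotoDNA itself is proprietary and access is restricted to vetted organizations, but the underlying idea, comparing an image’s hash against a database of hashes of known abusive images, can be illustrated with an ordinary perceptual hash. Below is a rough sketch using the open-source imagehash library as a stand-in; PhotoDNA’s actual algorithm, hash format and matching thresholds differ, and the sample hash below is a made-up placeholder.

```python
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known-bad images, stored as
# hex strings. In practice this would be a vetted, access-controlled dataset
# such as the one behind PhotoDNA, not something assembled ad hoc.
KNOWN_BAD_HASHES = {"ffd8e0c0a0908880"}
MAX_DISTANCE = 5   # hypothetical tolerance for near-duplicates

def matches_known_bad(image_path):
    """Return True if the image's perceptual hash is within MAX_DISTANCE
    bits of any hash in the known-bad set."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(
        candidate - imagehash.hex_to_hash(h) <= MAX_DISTANCE
        for h in KNOWN_BAD_HASHES
    )
```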

Giphy’s Gibson said the company was “recently approved” to use Microsoft’s PhotoDNA but did not say if it was currently in use.

Where some of the richest, largest and most-resourced tech companies are failing to preemptively limit their platforms’ exposure to illegal content, startups are filling in the content moderation gaps.

L1ght, which has a commercial interest in this space, was founded a year ago to help combat online predators, bullying, hate speech, scams and more.

The company was started by former Amobee chief executive Zohar Levkovitz and cybersecurity expert Ron Porat, previously the founder of ad-blocker Shine, after Porat’s own son experienced online abuse in the online game Minecraft. The company realized the problem with these platforms was something that had outgrown users’ own ability to protect themselves, and that technology needed to come to their aid.

L1ght’s business involves deploying its technology in similar ways as it has done here with Giphy — in order to identify, analyze and predict online toxicity with near real-time accuracy.

Mercedes-Benz app glitch exposed car owners’ information to other people’s accounts

Mercedes-Benz car owners have said that the app they used to remotely locate, unlock and start their cars was displaying other people’s account and vehicle information.

TechCrunch spoke to two customers who said the Mercedes-Benz connected car app was pulling in information from other accounts rather than their own, allowing them to see personal information — including names, locations, phone numbers and other details — of other vehicle owners.

The apparent security lapse happened late Friday, before the app went offline “due to site maintenance” a few hours later.

It’s not uncommon for modern vehicles to come with an accompanying phone app. These apps connect to your car and let you remotely locate it, lock or unlock it, and start or stop the engine. But as cars become internet-connected and hooked up to apps, security flaws have allowed researchers to remotely hijack or track vehicles.

One Seattle-based car owner told TechCrunch that his app pulled in information from several other accounts. He said that he and a friend, both Mercedes owners, saw the same car, belonging to another customer, in their respective apps, though every other account detail was different.


Screenshots of the Mercedes-Benz app showing another person’s vehicle, and exposed data belonging to another car owner. (Image: supplied)

The car owners we spoke to said they were able to see the car’s recent activity, including the locations of where it had recently been, but they were unable to track the real-time location using the app’s feature.

When the first customer contacted Mercedes-Benz, a customer service representative told him to “delete the app” until the issue was fixed, he said.

The other car owner we spoke to said he opened the app and found it also pulled in someone else’s profile.

“I got in contact with the person who owns the car that was showing up,” he told TechCrunch. “I could see the car was in Los Angeles, where he had been, and he was in fact there,” he added.

He said that he wasn’t sure if the app had exposed his private information to another customer.

“Pretty bad fuck up in my opinion,” he said.

The first customer reported that the “lock and unlock” and the engine “start and stop” features did not work on his app, somewhat limiting the impact of the security lapse. The other customer said they did not attempt to test either feature.

It’s not clear how the security lapse happened or how widespread the problem was. A spokesperson for Daimler, the parent company of Mercedes-Benz, did not respond to a request for comment on Saturday.
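Cross-account exposure of this kind is commonly caused by a missing server-side ownership check, sometimes called an insecure direct object reference. Purely as a generic illustration of that class of bug, and not a claim about how Mercedes-Benz’s backend actually works, the check whose absence typically produces it looks roughly like the sketch below (all names are hypothetical).

```python
class NotAuthorized(Exception):
    pass

def get_vehicle_profile(session_user_id, vehicle_id, vehicles):
    """`vehicles` is a stand-in datastore: a dict of vehicle_id -> record."""
    vehicle = vehicles.get(vehicle_id)
    if vehicle is None:
        raise KeyError("unknown vehicle")
    # The crucial check: never return a record just because its ID was
    # requested; confirm it belongs to the authenticated user first.
    if vehicle["owner_id"] != session_user_id:
        raise NotAuthorized("vehicle does not belong to this account")
    return {
        "owner_name": vehicle["owner_name"],
        "last_location": vehicle["last_location"],
        "owner_phone": vehicle["owner_phone"],
    }
```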

According to its Google Play listing, more than 100,000 customers have installed the app.

A similar security lapse hit Credit Karma’s mobile app in August. The credit monitoring company admitted that users were inadvertently shown other users’ account information, including details about credit card accounts and balances. But despite disclosing other people’s information, the company denied a data breach.

Facebook has suspended ‘tens of thousands’ of apps suspected of hoarding data

Facebook has suspended “tens of thousands” of apps connected to its platform which it suspects may be collecting large amounts of user profile data.

That’s a sharp rise from the 400 apps flagged a year ago by the company’s investigation in the wake of Cambridge Analytica, a scandal that saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

Facebook did not provide a more specific number in its blog post but said the apps were built by 400 developers.

Many of the apps were banned for a number of reasons, such as siphoning off Facebook user profile data, making data public without protecting users’ identities, or other violations of the company’s policies.

Despite the bans, the social media giant said it has “not confirmed” other instances of misuse of user data beyond those it has already notified the public about. Those previously disclosed include South Korean analytics firm Rankwave, accused of abusing the developer platform and refusing an audit, and myPersonality, a personality quiz that collected data on more than four million users.

The action comes in the wake of the since-defunct Cambridge Analytica and other serious privacy and security breaches. Federal authorities and lawmakers have launched investigations and issued fines over everything from its Libra cryptocurrency project to how the company handles users’ privacy.

Facebook said its investigation will continue.

Apple tweaks App Store rule changes for children’s apps and sign in services

Apple is tweaking the App Store policy changes it originally announced in June, covering its Sign in with Apple service and the rules around the children’s app category. New apps must comply with the tweaked terms right away, but existing apps will have until early 2020 to comply.

The changes announced at Apple’s developer conference in the summer were significant, and raised concerns among developers that the rules could handicap their ability to do business in a universe that, frankly, offers tough alternatives to ad-based revenue for children’s apps.

In a short interview with TechCrunch, Apple’s Phil Schiller said that they had spent time with developers, analytics companies and advertising services to hear what they had to say about the proposals and have made some updates.

The changes are garnering some strong statements of support from advocacy groups and advertising providers for children’s apps that were pre-briefed on the tweaks. The changes will show up as of this morning in Apple’s developer guidelines.

“As we got closer to implementation we spent more time with developers, analytics companies and advertising companies,” said Schiller. “Some of them are really forward thinking and have good ideas and are trying to be leaders in this space too.”

With their feedback, Schiller said, they’ve updated the guidelines to allow them to be more applicable to a broader number of scenarios. The goal, he said, was to make the guidelines easy enough for developers to adopt while being supportive of sensible policies that parents could buy into. These additional guidelines, especially around the Kids app category, says Schiller, outline scenarios that may not be addressed by the Children’s Online Privacy Protection Act (COPPA) or GDPR regulations.

There are two main updates.

Kids changes

The first area that is getting further tweaking is the Kids terms. Rule sections 1.3 and 5.1.4 specifically are being adjusted after Apple spoke with developers and providers of ad and analytics services about their concerns over the past few months.

Both of those rules are being updated to add more nuance to their language around third-party services like ads and analytics. In June, Apple announced a very hard-line version of these rule updates that essentially outlawed any third-party ads or analytics software and prohibited any data transmission to third-parties. The new rules offer some opportunities for developers to continue to integrate these into their apps, but also sets out explicit constraints for them.

The big changes come in section 1.3 surrounding data safety in the Kids category. Apple has removed the explicit restriction on including any third-party advertising or analytics. This was the huge hammer that developers saw heading towards their business models.

Instead, Apple has laid out a much more nuanced proposal for app developers. Specifically, it says these apps should not include analytics or ads from third parties, which implicitly acknowledges that there are ways to provide these services while also practicing data safety on the App Store.

Apple says that in limited cases, third-party analytics may be permitted as long as apps in the Kids category do not send personally identifiable information or any device fingerprinting information to third parties. This includes transmitting the IDFA (the device ID for advertisers), name, date of birth, email address, location or any other personally identifiable information.
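In practice, complying with that restriction means filtering anything that leaves the device. As a platform-agnostic sketch only, with hypothetical field names and nothing drawn from Apple’s own tooling or guidance, an allowlist filter over analytics events might look like this.

```python
# Hypothetical allowlist of non-identifying fields a kids-category app
# might still report to a third-party analytics service.
ALLOWED_FIELDS = {"event_name", "app_version", "session_length_seconds"}

def scrub_event(event: dict) -> dict:
    """Keep only explicitly allowlisted fields; drop identifiers (IDFA,
    name, email, location) and anything unknown, since unknown fields
    could carry fingerprinting data."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}

# Example: the IDFA and email never make it into the outgoing payload.
raw = {"event_name": "level_complete", "idfa": "ABCD-1234", "email": "kid@example.com"}
print(scrub_event(raw))   # {'event_name': 'level_complete'}
```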

Third-party contextual ads may be allowed, but only if the companies providing the ads have publicly documented practices and policies and also offer human review of ad creatives. That certainly limits the options, ruling out most offerings from programmatic services.

Rule 5.1.4 centers on data handling in kids apps. In addition to complying with COPPA, GDPR and other local regulations, Apple sets out some explicit guard rails.

First, the language on third-party ads and analytics has been changed from “may not” to “should not.” Apple is discouraging their use, but acknowledges that “in limited cases” third-party analytics and advertising may be permitted if it adheres to the new rules set out in guideline 1.3.

The explicit prohibition on transmitting any data to third parties from apps in the Kids category has been removed. Once again, this was the big bad bullet that every children’s app maker was paying attention to.

An additional clause reminds developers not to use terms like “for kids” and “for children” in app metadata for apps outside of the Kids category on the App Store.

SuperAwesome is a company that provides services like safe ad serving to kids apps. CEO Dylan Collins was initially critical of Apple’s proposed changes, noting that killing off all third-party ads could decimate the kids app category.

“Apple are clearly very serious about setting the standard for kids apps and digital services,” Collins said in a statement to TechCrunch after reviewing the new rules Apple is publishing. “They’ve spent a lot of time working with developers and kidtech providers to ensure that policies and tools are set to create great kids digital experiences while also ensuring their digital privacy and safety. This is the model for all other technology platforms to follow.”

All new apps must adhere to the guidelines. Existing apps have been given an additional six months to live in their current form but must comply by March 3, 2020.

“We commend Apple for taking real steps to protect children’s privacy and ensure that kids will not be targets for data-driven, personalized marketing,” said Josh Golin, Executive Director of Campaign for Commercial-Free Childhood. “Apple rightly recognizes that a child’s personal identifiable information should never be shared with marketers or other third parties. We also appreciate that Apple made these changes on its own accord, without being dragged to the table by regulators.”

The CCFC had a major win recently when the FTC announced a $170M fine against YouTube for violations of COPPA.

Sign in with Apple

The second set of updates has to do with Apple’s Sign in with Apple service.

Sign in with Apple is a sign-in service that can be offered by an app developer to instantly create an account that is handled by Apple with additional privacy for the user. We’ve gone over the offering extensively here, but there are some clarifications and policy additions in the new guidelines.

Apple requires Sign in with Apple to be offered if your app exclusively offers third-party or social logins like those from Twitter, Google, LinkedIn, Amazon or Facebook. It is not required if users sign in with a unique account created in the app, with, say, an email and password.

But some additional clarifications have been added for additional scenarios. Sign in with Apple will not be required in the following conditions:

  • Your app exclusively uses your company’s own account setup and sign-in systems.
  • Your app is an education, enterprise or business app that requires the user to sign in with an existing education or enterprise account.
  • Your app uses a government or industry-backed citizen identification system or electronic ID to authenticate users.
  • Your app is a client for a specific third-party service and users are required to sign in to their mail, social media or other third-party account directly to access their content.

Most of these were sort of assumed to be true but were not initially clear in June. The last one, especially, was one that I was interested in seeing play out. This scenario applies to, for instance, the Gmail app for iOS, as well as apps like Tweetbot, which log in via Twitter because all they do is display Twitter.
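Condensed into decision logic, the requirement and its carve-outs reduce to roughly the following. The flags and function are hypothetical, a summary of the rules above rather than anything from Apple’s review tooling.

```python
def must_offer_sign_in_with_apple(
    uses_third_party_or_social_login: bool,
    own_account_system_only: bool,
    education_or_enterprise_login: bool,
    government_or_industry_id: bool,
    client_for_specific_third_party_service: bool,
) -> bool:
    """Rough summary of the exemptions: any one of the four carve-outs
    lifts the requirement; otherwise third-party/social login triggers it."""
    exempt = (
        own_account_system_only
        or education_or_enterprise_login
        or government_or_industry_id
        or client_for_specific_third_party_service
    )
    return uses_third_party_or_social_login and not exempt

# Example: a Twitter-client app is exempt even though it uses Twitter login.
print(must_offer_sign_in_with_apple(True, False, False, False, True))   # False
print(must_offer_sign_in_with_apple(True, False, False, False, False))  # True
```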

Starting today, new apps submitted to the store that don’t meet any of the above requirements must offer Sign in with Apple to users. Current apps and app updates have until April 2020 to comply.

Both of these tweaks come after developers and other app makers expressed concern and reports noted the abruptness and strictness of the changes in the context of the ever-swirling antitrust debate surrounding big tech. Apple continues to walk a tightrope with the App Store, flexing its muscles in an effort to enhance data protections for users while simultaneously trying to appear as egalitarian as possible in order to avoid regulatory scrutiny.

Google says China used YouTube to meddle in Hong Kong protests

Google has disabled 210 YouTube accounts after it said China used the video platform to sow discord among protesters in Hong Kong.

The search giant, which owns YouTube, followed in the footsteps of Twitter and Facebook, which earlier this week said China had used their social media sites to spread misinformation and discord among the protesters, who have spent weeks taking to the streets to demand China stops interfering with the semi-autonomous region’s affairs.

Earlier this week Twitter said China was using its service to “sow discord” through fake accounts as part of “a coordinated state-backed operation.”

In a brief blog post, Google’s Shane Huntley said the company took action after it detected activity which “behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong.”

“This discovery was consistent with recent observations and actions related to China announced by Facebook and Twitter,” said Huntley.

In line with Twitter and Facebook’s findings, Google said it detected the use of virtual private networks — or VPNs — which can be used to tunnel through China’s censorship system, known as the Great Firewall. Facebook, Twitter and Google are all banned in China. But Google said little more about the accounts, what they shared, or whether it would disclose its findings to researchers.

When reached, a Google spokesperson only referred back to the blog post and did not comment further.

Over a million protesters took to the streets this weekend to peacefully demonstrate against the Chinese regime, which took over rule of Hong Kong from the United Kingdom in 1997. Protests erupted earlier this year after a bid by Hong Kong leader Carrie Lam to push through a highly controversial bill that would allow criminal suspects to be extradited to mainland China for trial. The bill was suspended, effectively preventing it from reaching the law books, but protests have continued, pushing back against what protesters say are continued attempts by China to meddle in Hong Kong’s affairs.

After data incidents, Instagram expands its bug bounty

Facebook is expanding its data abuse bug bounty to Instagram.

The social media giant, which owns Instagram, first rolled out its data abuse bounty in the wake of the Cambridge Analytica scandal, which saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

The idea was that security researchers and platform users alike could report instances of third-party apps or companies that were scraping, collecting and selling Facebook data for other purposes, such as to create voter profiles or build vast marketing lists.

Even following the high-profile public relations disaster of Cambridge Analytica, Facebook still had apps illicitly collecting data on its users.

Instagram wasn’t immune either. Just this month Instagram booted a “trusted” marketing partner off its platform after it was caught scraping stories, locations and other data points on millions of users, forcing Instagram to make product changes to prevent future scraping. That came after two other incidents earlier this year: a security researcher found 14 million scraped Instagram profiles sitting on an exposed database, without a password, for anyone to access; and another company scraped the profile data, including email addresses and phone numbers, of Instagram influencers.

Last year Instagram also choked developers’ access as the company tried to rebuild its privacy image in the aftermath of the Cambridge Analytica scandal.

Dan Gurfinkel, security engineering manager at Instagram, said its new and expanded data abuse bug bounty aims to “encourage” security researchers to report potential abuse.

Instagram said it’s also inviting a select group of trusted security researchers, who will also be eligible for bounty payouts, to find flaws in its Checkout service ahead of its international rollout.


Week in Review: Snapchat beats a dead horse

Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about how Netflix might have some rough times ahead as Disney barrels towards it.



The big story

There is plenty to be said about the potential of smart glasses. I write about them at length for TechCrunch and I’ve talked to a lot of founders doing cool stuff. That being said, I don’t have any idea what Snap is doing with the introduction of a third generation of its Spectacles video sunglasses.

The first-gen Spectacles were a marketing smash hit, but their sales proved to be a major failure for the company, which bet big and seemingly walked away with a landfill’s worth of the glasses.

Snap’s latest Spectacles were announced in Vogue this week. They are much more expensive at $380, and their main feature is two cameras that capture images with depth, which can lead to these cute little 3D boomerangs. On one hand, it’s nice to see the company showing perseverance with a tough market; on the other, it’s kind of funny to see it push the same rock up the hill again.

Snap is having an awesome 2019 after a laughably bad 2018; the stock has recovered from record lows and is trading in its IPO price wheelhouse. It seems like they’re ripe for something new and exciting, not something beautiful yet iterative.

The $150 Spectacles 2 are still for sale, though they seem quite dated-looking at this point. Spectacles 3 seem to be geared entirely toward women, and I’m sure Snap made that call after seeing the active users of previous generations. But given the write-down it took on the first generation, something tells me that Snap’s continued experimentation here is borne out of some stubbornness from Spiegel and the higher-ups, who want the Snap brand to live in a high-fashion world and want to be at the forefront of an AR industry that seems to have already moved on to different things.

Send me feedback
on Twitter @lucasmtny or email
[email protected]

On to the rest of the week’s news.


Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • WordPress buys Tumblr for chump change
    Tumblr, a game-changing blogging network that shifted online habits and exited for $1.1 billion, just changed hands after Verizon (which owns TechCrunch) unloaded the property for a reported $3 million. Read more about this nightmarish deal here.
  • Trump gives American hardware a holiday season pass on tariffs 
    The ongoing trade war with China generally seems to be rough news for American companies deeply intertwined with the manufacturing centers there, but Trump is giving U.S. companies a Christmas reprieve from the tariffs, allowing certain types of hardware to be exempt from the recent rate increases through December. Read more here.
  • Facebook loses one last acquisition co-founder
    This week, the final remnant of Facebook’s major acquisitions left the company. Oculus co-founder Nate Mitchell announced he was leaving. Now, Instagram, WhatsApp and Oculus are all helmed by Facebook leadership and not a single co-founder from the three companies remains onboard. Read more here.

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Facebook’s turn in audio transcription debacle:
    [Facebook transcribed users’ audio messages without permission]
  2. Google’s hate speech detection algorithms get critiqued:
    [Racial bias observed in hate speech detection algorithm from Google]
  3. Amazon has a little email mishap:
    [Amazon customers say they received emails for other people’s orders]

Adam Neumann (WeWork) at TechCrunch Disrupt NY 2017

Extra Crunch

Our premium subscription service had another week of interesting deep dives. My colleague Danny Crichton wrote about the “tech” conundrum that is WeWork and the questions that are still unanswered after the company filed documents this week to go public.

WeWork’s S-1 misses these three key points

…How is margin changing at its older locations? How is margin changing as it opens up in places like India, with very different costs and revenues? How do those margins change over time as a property matures? WeWork spills serious amounts of ink saying that these numbers do get better … without seemingly being willing to actually offer up the numbers themselves…

Here are some of our other top reads this week for premium subscribers. This week, we published a major deep dive into the world’s next music unicorn and we dug deep into marketplace startups.

Sign up for more newsletters in your inbox (including this one) here.

Instagram and Facebook are experiencing outages

Users reported issues with Instagram and Facebook Sunday morning.

The mobile apps wouldn’t load for many users beginning in the early hours of the morning, prompting thousands to take to Twitter to complain about the outage. #facebookdown and #instagramdown were both trending on Twitter at the time of publication.

We’ve reached out to Facebook for more information and when they are expecting services to come back online. We’ll update this story when we hear back.


Apple has pushed a silent Mac update to remove hidden Zoom web server

Apple has released a silent update for Mac users removing a vulnerable component in Zoom, the popular video conferencing app, which allowed websites to automatically add a user to a video call without their permission.

The Cupertino, Calif.-based tech giant told TechCrunch that the update — now released — removes the hidden web server, which Zoom quietly installed on users’ Macs when they installed the app.

Apple said the update does not require any user interaction and is deployed automatically.

The video conferencing giant took flak from users following a public vulnerability disclosure on Monday by Jonathan Leitschuh, in which he described how “any website [could] forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.” The undocumented web server remained installed even if a user uninstalled Zoom. Leitschuh said this allowed Zoom to reinstall the app without requiring any user interaction.
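Leitschuh’s write-up hinged on the fact that the helper was an ordinary web server listening on localhost, reachable by any web page. As a rough illustration, here is a simple probe for whether anything is still listening on the port cited in the public disclosure (19421; treat it as a historical detail rather than a guaranteed constant).

```python
import socket

# Port reported in the July 2019 disclosure for Zoom's local web server.
REPORTED_ZOOM_PORT = 19421

def local_port_open(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if something accepts TCP connections on the given
    localhost port -- a rough check for a lingering local web server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(local_port_open(REPORTED_ZOOM_PORT))
```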

He also released a proof-of-concept page demonstrating the vulnerability.

Although Zoom released a fixed app version on Tuesday, Apple said its actions will protect users both past and present from the undocumented web server vulnerability without affecting or hindering the functionality of the Zoom app itself.

The update will now prompt users if they want to open the app, whereas before it would open automatically.

Apple often pushes silent signature updates to Macs to thwart known malware — similar to an anti-malware service — but it’s rare for Apple to take action publicly against a known or popular app. The company said it pushed the update to protect users from the risks posed by the exposed web server.

Zoom spokesperson Priscilla McCarthy told TechCrunch: “We’re happy to have worked with Apple on testing this update. We expect the web server issue to be resolved today. We appreciate our users’ patience as we continue to work through addressing their concerns.”

More than four million users across 750,000 companies around the world use Zoom for video conferencing.

Instagram’s new chat sticker lets friends ask to get in on the conversation directly in Stories

Instagram has a new sticker type rolling out today that lets friends and followers instantly tap to start conversations from within Stories. The new sticker option, labelled “Chat,” will let anyone looking at a story request to join an Instagram group DM conversation tied to the post, with the original poster still getting the opportunity to actually approve the requests coming in from their friends and followers.

Instagram’s Direct Messages provide built-in one-to-one and one-to-many private messaging for users on the platform, and are one key way that the social network owned by Facebook has used to fend off, anticipate and adapt features from would-be competitor Snapchat. The company confirmed in May that it was discontinuing development of Direct, its own standalone app version of the Instagram DM feature, but its clearly still interested on iterating the core product to make it more engaging for users and better linked to Instagram’s other core sharing capabilities.