Get ready to see more looping videos on Spotify, as Canvas launches into beta

Spotify is opening up its Canvas feature to more artists, the company announced this morning, which means you’ll soon see a lot more of those looping videos in the app. Before today, the feature had been in limited testing with select artists. Where Canvas is available, you don’t just see the album artwork behind the player controls — you see a moving visual that plays in a short loop.

So far, Canvas has received mixed reviews from Spotify users. Some find the looping imagery distracting, while others simply prefer seeing the album art. Others like the feature, but only with certain content and artists.

The challenge is in designing a video loop that works well. That means it shouldn’t try to lip sync to part of a song. It shouldn’t include intense flashing graphics or text, nor should it distract from the player controls and track information.


Spotify also suggests trying to tell a full story in the loop rather than just drastically trimming a music video down to the allotted time (3- to 8-second clips). Other recommended Canvas experiences are those that help develop the artist’s persona across their profile and tracks, or those that are updated frequently. Billie Eilish, for example, uses the feature to share animated versions of fan art.

Since launching, Canvas has been seen by millions of users, Spotify says. But the company seems to acknowledge the impact varies based on how the Canvas is designed. When it works, it can “significantly increase” track streams, shares, and artist page visits. But Spotify didn’t say what happens when the feature fails to engage fans.

However, based on social media discussions about the feature and how-to guides detailing how to turn the thing off, it would seem that some users choose to opt out of the experience entirely.

Today, Spotify says Canvas will no longer be limited to select artists, as it’s opening more broadly to artists in an expanded beta. With the beta, Spotify hopes artists will treat Canvas as a critical part of their release strategy, and will continue to use it across their catalog.

“It’s a way to get noticed and build a vision — and an excellent way to share more of who you are with your listeners, hopefully turning them into fans,” the company writes in an announcement. “The goal is for you to have richer ways to express yourself and to allow listeners to engage with you and your music even more deeply. We’re continuing to work on additional features, as well as more tools and metrics to help you better understand how your art is reaching your audience,” the company says.

It’s hard not to comment on the timing of this launch. At the end of September, Google announced that YouTube Music would come preinstalled on new Android devices, taking the place of Google Play Music. With YouTube Music, streamers gain access to a visually immersive experience where they can watch music videos, not just listen to the audio, if they prefer.

Spotify, however, has traditionally been a place to listen — not to watch. That’s not to say there aren’t music videos on Spotify; they’re just not well highlighted by the app, nor are they a core part of the Spotify experience.

The company says it’s now sending artists their invites to join the beta. Those who haven’t received the invite can instead make a request to be added here.

 

Facetune maker Lightricks expands with a trio of apps for small businesses

Fresh off its recent raise of $135 million, Lightricks, the maker of popular selfie editor Facetune and several other top consumer-focused photo editing applications, is branching out. The company this morning announced the launch of a suite of new mobile apps aimed at small businesses that want professional content creation tools to help them with their social media marketing campaigns.

Together, the new suite of apps is known as “BoostApps,” and includes StoryBoost for creating unique stories for Instagram; VideoBoost for making videos using your own clips, stock footage or both; and PosterBoost, which lets you turn photos into engaging posts for your business.


The move to serve the needs of small businesses was informed by how Lightricks saw its users taking advantage of its existing products, like Enlight Videoleap and Enlight Photofox, the company says.

When Lightricks surveyed its Videoleap subscriber base, for example, it found that roughly 30% were already using the app for business purposes.

“We understood then, that our next product had to be a tool specifically for businesses. Businesses are results-driven, and that’s the basis of the BoostApps — empowering and enabling businesses to create social media marketing materials that are not only beautiful, but also effective,” says Zeev Farbman, Lightricks Co-Founder and CEO.

Like its consumer line of apps, the BoostApps are designed to be easy to use — even if you’re not a photo-editing professional and don’t have a social media marketing background.

Instead, they’re for people who consider themselves small business owners, Farbman says. That could be a yoga teacher, startup entrepreneur, influencer, or anyone else.


“The common denominator is that they started their business because they were passionate about something, and then needed to become full-fledged marketers in addition to building their businesses,” he explains. “They know that social media can get them to their marketing goals, but they don’t know how to get results without investing too much time and money.”

Like the rest of the Lightricks line, the apps are subscription-based. StoryBoost and PosterBoost are $7.99 per month, and VideoBoost is $9.99 per month. There are also annual discounts ($44.99 or $59.99, respectively) and the option for a lifetime purchase ($99.99 or $119.99). And you can subscribe to all three in a bundle for $95.99 per year. 
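(For comparison, three separate annual plans would total $44.99 + $44.99 + $59.99 = $149.97, which makes the $95.99 bundle roughly 36% cheaper.)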

Also like other Lightricks apps, the new BoostApps rely heavily on the company’s technology investments and A.I. Lightricks has a dedicated team whose job is to figure out how to integrate its most advanced and innovative research into its apps.

For example, it created a camera motion effect that can be applied to any image, using A.I. to build a depth map that turns photos into engaging posters, says Farbman. It also created a video engine capable of producing a range of composition effects. And the results render in real time on the device, so the editing process feels fast.


While there are plenty of other companies offering creative tools for marketers, Lightricks brings its own knowledge of digital marketing to the table — knowledge it credits, in fact, for its own success.

The launch of the SMB-focused suite doesn’t mean Lightricks is pivoting to pro tools, however, Farbman clarified. It’s more of an expansion.

Case in point: the company today is also partnering with subscription beauty service BoxyCharm on a collaboration with Facetune2, which will allow users to virtually “try on” products in the October box using AR filters.

That said, Lightricks does plan to expand BoostApps further down the road with more tools that will integrate scheduling, smart algorithms, and post optimization features.

The Jerusalem-headquartered company, which recently achieved unicorn status, is growing quickly. It now has over 300 employees, and expects to reach at least 500 by 2020. And despite the sizable funding round, Lightricks says it tries to stay profitable or as close to profitable as possible, even when launching new products.

Combined, its suite of apps has seen nearly 200 million downloads and has 3 million paying subscribers.

To date, Lightricks has raised $205 million.

BoostApps will be available today on iOS devices (iOS 11 or higher).

 

YouTube overhauls its problematic verification program

YouTube’s verification program is getting a massive overhaul, the company announced today, which will likely result in a number of less prominent creators losing their verification status. Previously, YouTube allowed any channel that reached 100,000 subscribers to request verification. That limit is being removed, with a change to the verification program that rolls out in October. Going forward, YouTube will focus its efforts on verifying channels that have more of a need to prove their authenticity — like those belonging to a brand, public figure, artist or another creator who might be subject to impersonation, for example.

YouTube says the earlier verification system was established when the site was smaller, but its ecosystem has since grown and “become more complex.”

Instead of looking at subscriber counts — a metric that can be gamed by bots — the new system will have murkier requirements. YouTube says it’s about “prominence,” which it defines in a number of ways.

For starters, YouTube will determine if the channel represents a “well-known or highly searched for creator, artist, public figure or company.” It will also consider if the channel is widely recognized outside of YouTube and has a strong online presence, or if it’s a channel that has a very similar name to many other channels.

We understand YouTube will use a combination of human curation and algorithmic signals to make these determinations. When asked, the company declined to discuss the specifics, however.


There were several reasons YouTube wanted to change its system, beyond raising the threshold for verification.

The company had run into a problem similar to one Twitter once faced — people mistook the verification badge for an endorsement. On Twitter, that issue reached a tipping point when it was discovered that Twitter had verified the Charlottesville rally organizer. Twitter stopped verifying accounts shortly afterward; its system is still being reworked, but the project has been put on the back burner.

Similarly, YouTube’s research found that over 30% of users misunderstood the verification badge’s meaning, believing the checkmark indicated “endorsement of content,” not “identity.”

This is problematic for YouTube for a number of reasons, but mainly because the company wants to distance itself from the content on its platform — content that is often racist, vile, false, dangerous, conspiracy-filled and extremist. YouTube wants to be an open site, with all the troubles that entails, but doesn’t want to be held accountable for the terrible things posted there — like the 14-year-old girl who grew to online fame by posting racist, anti-Muslim, anti-LGBTQ videos, or the high-profile star who made repeated racist comments, then got honored by YouTube with special creator rewards.

There were other issues with the prior system, as well.

Some creators would fake their verification status, for instance. Before the changes, a verified channel would display a checkmark next to its channel name. This could be easily forged by simply adding a checkmark to the end of a channel name.

Plus, the checkmark itself only really worked when people viewed the channel’s main watch page on desktop or mobile. It didn’t translate as well to interactions in live chats, on community posts or in stories.


By revamping the verification system, YouTube is clarifying that verification isn’t an endorsement — it’s a neutral statement of fact. The new treatment is also harder to forge, and it works everywhere the creator interacts with fans.

The updated verification system drops the checkmark in favor of a gray swipe across the channel name.

This applies to both channels and artists. With regard to the latter, it will replace the music note.

The system will roll out in late October, YouTube said, and the new criteria will apply to all channels.

Those who meet the new requirements won’t need to apply — they’ll automatically receive the new verified treatment. Channels that don’t qualify for re-verification will be notified today and will have the option to appeal the decision before the changes take place.

Information on the appeals process will be available in YouTube’s Help Center.

Update, 9/19/19, 1:26 PM ET: Here’s the letter YouTube creators are receiving. Note it refers to a timeframe of “early” instead of “late” October for the changes.

[Image: the letter sent to YouTube creators]

Here’s the email if you stay verified (thanks @thiojoe): [Image: email to creators who remain verified]

YouTube Music cracks down on rampant chart manipulation with new pay-for-play ban

YouTube will no longer allow paid views and advertising to influence its YouTube Music Charts, the company announced this morning. Instead, it will calculate its rankings based only on view counts coming from organic plays. In addition, it’s changing its methodology for reporting on 24-hour record debuts to also only count views from organic sources, including direct links to the video, search results, Watch Next and Trending — but not video advertising.

The changes come about after multiple reports examined how music labels were spending aggressively on video advertising in order to juice the views of their artists’ newly debuted songs.

One report by Rolling Stone detailed how the practice worked, with regard to YouTube’s TrueView ads. This form of advertising lets the advertiser, like the artist or the label, play a shortened version of a music video as an advertisement in front of other videos. Under some conditions — like if a YouTube user interacts with the video or watches it for a certain amount of time — it would count toward the video’s overall view count.
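To make the distinction concrete, here is a minimal sketch of a chart calculation that counts only organic view sources and drops paid-ad views (purely illustrative, assuming hypothetical event records and source labels rather than YouTube’s actual data pipeline):

```python
from collections import Counter

# Organic sources named in YouTube's announcement; the label strings are hypothetical.
ORGANIC_SOURCES = {"direct_link", "search", "watch_next", "trending"}

def chart_ranking(view_events):
    """Rank videos by organic views only, ignoring views driven by paid advertising."""
    organic_counts = Counter(
        event["video_id"]
        for event in view_events
        if event["source"] in ORGANIC_SOURCES  # e.g. "trueview_ad" views are excluded
    )
    return organic_counts.most_common()

events = [
    {"video_id": "song_a", "source": "trueview_ad"},
    {"video_id": "song_a", "source": "search"},
    {"video_id": "song_b", "source": "direct_link"},
    {"video_id": "song_b", "source": "watch_next"},
]
print(chart_ranking(events))  # [('song_b', 2), ('song_a', 1)]
```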

Bloomberg had also reported on the curious case of Indian rapper Badshah, whose video “Paagal” broke records with 75 million views in a single day — topping a prior record set by Korean boy band BTS. Initially, there were rumors that the label, Sony Music, had used server farms and bots to accomplish this. It later turned out to be paid advertising, which Badshah confessed to on Instagram.

But this was not an uncommon practice — Taylor Swift, Blackpink, and many others had done the same, the report said. Badshah had just taken it much further.

The report also said YouTube was considering revising its system, as a result.

Today, YouTube is officially announcing those changes.

“YouTube Music Charts have become an indispensable source for the industry and the most accurate place for measuring the popularity of music listening behavior happening on the world’s largest music platform,” the company explained in a blog post. “In an effort to provide more transparency to the industry and align with the policies of official charting companies such as Billboard and Nielsen, we are no longer counting paid advertising views on YouTube in the YouTube Music Charts calculation. Artists will now be ranked based on view counts from organic plays,” the post read.

The changes impact the 24-hour debuts, plus all of YouTube Music’s other charts, including those focused on what’s rising, trending and popular, both locally and globally.

Though advertising and non-organic views will no longer contribute to the view count for the purpose of YouTube’s Music Chart rankings, the company says these changes will not impact YouTube’s existing 24-hour record debut holders. That means Badshah and others can continue to tout their “records,” tainted as those claims may now be.

The changes likely won’t mean the end of this sort of music video advertising, however. Ads remain a great way to expose users to new music, which can, in turn, boost organic views as links get clicked, shared, and embedded elsewhere around the web. But the changes could have a dampening impact on the pay-for-play business and the size of the ad spend.

“Staying true to YouTube’s overall mission of giving everyone a voice and showing them the world, we want to celebrate all artist achievements on YouTube as determined by their global fans. It’s the artists and fans that have made YouTube the best and most accurate measure of the world’s listening tastes, and we intend on keeping it that way,” said YouTube.

YouTube to spend $100M on original children’s content

Creators of child-directed content will be financially impacted by the changes required by the FTC settlement, YouTube admitted today. The settlement will end the use of children’s personal data for ad-targeting purposes, the FTC said. To address creators’ concerns over their businesses, YouTube also announced a $100 million fund to invest in original children’s content.

The fund, distributed over three years, will be dedicated to the creation of “thoughtful” original content for YouTube and YouTube Kids globally, the company says.

“We know these changes will have a significant business impact on family and kids creators who have been building both wonderful content and thriving businesses, so we’ve worked to give impacted creators four months to adjust before changes take effect on YouTube,” wrote YouTube CEO Susan Wojcicki in a blog post. “We recognize this won’t be easy for some creators and are committed to working with them through this transition and providing resources to help them better understand these changes.”

YouTube plans to share more information about the fund and its plans in the weeks ahead.

In addition, YouTube said today it’s “rethinking” its overall approach to the YouTube kids and family experience.

This could go toward fixing some of the other problems raised by the consumer advocacy groups who prompted the FTC investigation. The groups weren’t entirely pleased by the settlement, as they believed it only scratched the surface of YouTube’s issues.

“It’s extremely disappointing that the FTC isn’t requiring more substantive changes or doing more to hold Google accountable for harming children through years of illegal data collection,” said Josh Golin, the executive director for the Campaign for a Commercial-Free Childhood (CCFC), a group that spearheaded the push for an investigation. “A plethora of parental concerns about YouTube – from inappropriate content and recommendations to excessive screen time – can all be traced to Google’s business model of using data to maximize watch time and ad revenue,” he added.

Google has already begun to crack down on some of these concerns, independent of any FTC requirement, however.

To tackle the scourge of inappropriate content targeting minors, YouTube in August expanded its child safety policies to remove — instead of only restrict, as it did before — any “misleading family content, including videos that target younger minors and their families, those that contain sexual themes, violence, obscene, or other mature themes not suitable for younger audiences.”

Separately, YouTube aims to address the issues raised around promotional content in videos.

For example, a video with kids playing with toys could be an innocent home movie or it could involve a business agreement between the video creator and a brand to showcase the products in exchange for free merchandise or direct payment.

The latter should be labeled as advertising, as required by YouTube, but that’s often not the case. And even when ads are disclosed, it’s impossible for young children to know the difference between when they’re being entertained and when they’re being marketed to.

There are also increasing concerns over the lack of child labor protections for children performing in YouTube videos, a gap that has allowed some parents to exploit their kids for views or even commit child abuse.

YouTube’s “rethinking” of its kids’ experience should also include whether or not it should continue to incentivize the creation of these “kid influencer” and YouTube family videos, where little girls’ and boys’ childhoods have become the source of parents’ incomes.

YouTube’s re-evaluation of the kids’ experience comes at a time when the FTC is also thinking of how to better police general audience platforms on the web, where some content is viewed by kids. The regulator is hosting an October workshop to discuss this issue, where it hopes to come up with ways to encourage others to develop kid-safe zones, too.

The NFL joins TikTok in multi-year partnership

The NFL and social video app TikTok today announced a multi-year partnership to bring NFL content to fans worldwide, just ahead of the NFL’s 100th season kickoff on September 5. The partnership includes the launch of an official NFL account on the video platform, as well as a series of NFL-themed hashtag challenges and other marketing opportunities for brands around the NFL content.

The first hashtag challenge, #WeReady, starts today and runs through Thursday. It encourages fans to show pride for their favorite NFL team while using the #WeReady hashtag. Several popular TikTok creators and NFL clubs will join the fans in the challenge.

TikTok will also have a presence at Soldier Field in Chicago for the Sept. 5 kickoff, where TikTok fans will be able to create videos and show their love for teams and players in an NFL-themed experience.

At launch, the NFL’s TikTok account already features several videos, ranging from behind-the-scenes action and highlights to funny memes and even inspirational content.

“We’re thrilled to partner with a powerhouse in the sports industry like the NFL to bring new life and a fresh perspective to the sports entertainment experience,” said Mayan Scharf, Global Partnerships, TikTok. “TikTok is a destination where fans can feel like they are a part of the team and we look forward to showcasing content from the NFL that is exciting, authentic and surprising to TikTok community,” he said.

While TikTok is better known for its meme-like, short-form videos featuring lip-syncing, displays of talent like dance, cosplay, comedy, art, and more, the company says that sports content is also a popular category on its service.

The NFL, meanwhile, is not averse to jumping in early with emerging platforms — whether that’s live-streaming video on Twitter, being the first sports league on Snapchat Discover, or launching an Alexa voice app, for example.

In addition, the NFL looks for opportunities that give it the ability to reach international fans, like when it distributed game highlights and recaps on Facebook. This is of particular importance at a time when ratings have become more of a concern for the sports league. Though it finally recovered last season from a multi-year ratings slump, the NFL knows that fans outside the U.S. are also worth courting as they can be just as loyal and engaged.

“Partnering with TikTok is a natural extension of our media strategy,” said Blake Stuchin, Vice President, Digital Media Business Development for the NFL, in a statement. “The platform reaches a fast-growing global audience of NFL fans and future fans. The NFL programming and hashtag challenges are a perfect way to kick off the NFL’s 100th season – with fun, new content that will entertain fans and invites them to celebrate and experience their NFL fandom in a way that’s authentic to the unique experience of TikTok,” he said.

 

Instagram may allow creators to syndicate IGTV videos to Facebook

Following the departure of Instagram’s founders, Facebook is working to more closely integrate the photo-sharing app with its flagship social network. It’s already added its brand name next to Instagram’s, and is working to make both platforms’ messaging products interoperable. Now, Facebook is prototyping a means of syndicating Instagram’s IGTV video to Facebook’s video site, Facebook Watch.

In another find from noted reverse engineer Jane Manchun Wong, Instagram was spotted developing a feature that would allow users to post their IGTV content to Instagram as a preview as well as to Facebook and Watch — the latter by toggling an additional switch labeled “make visible on Facebook.”

Wong says the feature is still in the prototype stage, as the buttons themselves aren’t functional.

This move, should it come to pass, could prompt more video creators to use IGTV, given that it would boost their videos’ distribution by also including Facebook as a destination for their content. The videos could also be part of an ongoing, episodic series, Wong had found.

This, in turn, could help IGTV — an app which hasn’t quite taken off as a standalone video platform. Today, IGTV takes inspiration from TikTok and Snapchat’s vertical video. It’s meant to engage Instagram users with longer-form, portrait mode video content both within Instagram and in a separate IGTV app. But IGTV has often been filled with poorly cropped and imported web video, rather than content designed specifically for the platform.

Meanwhile, the IGTV app has struggled to rise to the top of the App Store’s charts the way its parent, Instagram, has. Today, it’s ranked No. 159 in the Photo & Video category on the App Store, and unranked in the Overall top charts.

To address some of the issues that creators have complained about, Instagram this week rolled out a few changes to the upload experience. This included the new ability to select the 1:1 crop of an IGTV thumbnail for the creator’s Profile Cover as well as the ability to edit which 5:4 section of the IGTV video shows in the Feed.

IGTV will also now auto-populate Instagram handles and tags in IGTV titles and descriptions, and will now support the ability to upload longer videos from mobile. With the latter change, IGTV has raised the minimum length for mobile uploads to one minute and is allowing mobile uploads of up to 15 minutes.

Instagram declined to comment on the possible syndication of IGTV content to Facebook and Facebook Watch.

Ahead of FTC ruling, YouTube Kids is getting its own website

Ahead of the official announcement of an FTC settlement which could force YouTube to direct under-13-year-old users to a separate experience for YouTube’s kid-friendly content, the company has quietly announced plans to launch its YouTube Kids service on the web. Previously, parents would have to download the YouTube Kids app to a mobile device in order to access the filtered version of YouTube.

By bringing YouTube Kids to the web, the company is preparing for the likely outcome of an FTC settlement that would require it to implement an age gate on its site, then redirect under-13-year-olds to a separate kid-friendly experience.

In addition, YouTube Kids is gaining a new filter that will allow parents to limit the content to what’s appropriate for preschoolers.

The announcement, published to the YouTube Help forums, was first spotted by Android Police.

It’s unclear if YouTube was intentionally trying to keep these changes from being picked up on by a larger audience (or the press) by publishing the news to a forum instead of its official YouTube blog. (The company tells us it publishes a lot of news on the forum site. Sure, okay. But with an FTC settlement looming, it seems an odd destination for such a key announcement.)

It’s also worth noting that, around the same time as the news was published, YouTube CEO Susan Wojcicki posted her quarterly update for YouTube creators.

The update is intended to keep creators abreast of what’s in store for YouTube and its community. But this quarter, her missive spoke solely about the value in being an open platform, and didn’t touch on anything related to kids content or the U.S. regulator’s investigation.

However, it’s precisely YouTube’s position on “openness” that concerns parents when it comes to their kids watching YouTube videos. The platform’s (almost) “anything goes” nature means kids can easily stumble upon content that’s too adult, controversial, hateful, fringe, or offensive.

The YouTube Kids app is meant to offer a safer destination, but YouTube isn’t manually reviewing each video that finds its way there. That has led to inappropriate and disturbing content slipping through the cracks on numerous occasions, eroding parents’ trust.


Because many parents don’t believe YouTube Kids’ algorithms can filter content appropriately, the company last fall introduced the ability for parents to whitelist specific videos or channels in the Kids app. It also rolled out a feature that customized the app’s content for its older users, ages 8 through 12. This added gaming content and music videos.

Now, YouTube is further breaking up its “Younger” content level filter, which was previously 8 and under, into two parts. Starting now, “Younger” applies to ages 5 through 7, while the new “Preschool” filter is for the age 4 and under group. The latter will focus on videos that promote “creativity, playfulness, learning, and exploration,” says YouTube.

[Image: the content filter settings before the change]
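As a rough illustration of how the levels break down by age under the new scheme (a sketch based only on the age ranges described above, not YouTube’s actual code or API):

```python
def content_setting_for_age(age: int) -> str:
    """Map a child's age to the YouTube Kids content level described above (illustrative only)."""
    if age <= 4:
        return "Preschool"          # new level: ages 4 and under
    if age <= 7:
        return "Younger"            # now covers ages 5 through 7
    if age <= 12:
        return "Older"              # ages 8 through 12
    return "outside YouTube Kids"   # older viewers use YouTube proper

print(content_setting_for_age(3))   # Preschool
print(content_setting_for_age(6))   # Younger
print(content_setting_for_age(10))  # Older
```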

YouTube confirmed to TechCrunch that its forum announcement is accurate, but the company would not say when the YouTube Kids web version would go live, beyond “this week.”

The YouTube Kids changes are notable because they signal that YouTube is getting things in place before an FTC settlement announcement that will impact how the company handles kids content and its continued use by young children.

It’s possible that YouTube will be fined by the FTC for its violations of COPPA, as Musical.ly (TikTok) was earlier this year. One report, citing unnamed sources, says the FTC’s YouTube settlement has, in fact, already been finalized and includes a multimillion-dollar fine.

YouTube will also likely be required to implement an age-gate on its site and in its apps that will direct under-13-year-olds to the YouTube Kids platform instead of YouTube proper. The settlement may additionally require YouTube to stop targeting ads on videos aimed at children, as has been reported by Bloomberg. 

We probably won’t see the FTC issuing a statement about its ruling ahead of this Labor Day weekend, but it may do so in advance of its October workshop focused on refining the COPPA regulation — an event that has the regulator looking for feedback on how to properly handle sites like YouTube.

 

 

‘Momo’ videos on YouTube cannot be monetized…but that’s not a new policy

Be warned, YouTube creators: making videos about the latest viral hoax, the “Momo challenge,” will not make you money. Over the past couple of days, the Momo challenge has gone viral once again, leading to a sharp increase in news coverage and in the number of YouTube videos discussing the creepy character and the supposed “challenge” that encourages kids to commit acts of self-harm.

The Momo challenge itself isn’t real, to be clear.

As meticulously documented by Taylor Lorenz at The Atlantic, it’s just the latest resurgence of an urban myth that has reared its head repeatedly over the years. In reality, “Momo” was a sculpture created by the artist Keisuke Aisawa. Photographs of its frightening form made their way to Instagram and Reddit after the sculpture was exhibited in Tokyo a couple of years ago. Thus, an urban legend was born, Lorenz explained.

According to one version of the myth, Momo sends kids instructions to harm themselves on WhatsApp. But urban legends take on many variations over time.

For example, my child’s entire 3rd grade class currently believes that Momo will randomly appear in YouTube videos and then come out of your sink drain. (This, also, is not true!)

Over the past few days, a social media post from Kim Kardashian and a lot of irresponsible reporting by local news outlets amplified the hoax, warning parents and schools of the dangerous “self harm” challenge. That, in turn, led to more “Momo” videos on YouTube, and a flood of posts across all other social media sites.

The Verge reported this morning that YouTube had begun demonetizing Momo videos on YouTube.

However, a spokesperson at YouTube clarified to TechCrunch that it wasn’t taking action against Momo videos as some sort of new policy or decision on the company’s part. It was simply enforcing its current policies.

The company’s existing advertiser-friendly guidelines, which govern the kinds of videos it shows ads on, do not allow any videos that discuss a harmful or dangerous act to be monetized. That includes any videos from news outlets referencing the Momo challenge, or those from other YouTube creators. This is the same policy that prevented prior YouTube videos about other dangerous challenges and hoaxes from showing advertising, they also noted. For example, any video about the Tide Pods challenge or the choking challenge could not show ads.

Demonetizing videos, to be clear, is not the same thing as disallowing the videos from showing on YouTube. The site today permits news stories and videos that are intended to raise awareness of and educate against the challenge, the spokesperson explained – like those from news outlets.

However, content that promotes the Momo challenge and is not news, educational, or documentary footage is prohibited on the site.

YouTube additionally reaffirmed that it hadn’t seen any evidence of Momo videos on its platform until widespread media coverage began, nor had it received any flagged links or other reports of videos that either showed or promoted the Momo challenge directly.

“Contrary to press reports, we’ve not received any recent evidence of videos showing or promoting the Momo challenge on YouTube. Content of this kind would be in violation of our policies and removed immediately,” YouTube said, in a statement.

In addition, no Momo videos should be discoverable on YouTube’s kid-friendly app, YouTube Kids, the spokesperson said. And no such content has ever been found in the YouTube Kids app, to date.

Though YouTube hasn’t implemented a new policy here, simply having its name in the press alongside unsafe, scary content targeting children comes at a bad time for the company, which only yesterday turned off comments on videos of children after reports of a pedophile ring operating within the comment sections of those videos. It’s also the latest in a long string of controversies around advertiser-unfriendly content and false information, which has led to other changes to its policies, including, most recently, the demonetization of anti-vaccination videos.

But in the case of Momo, YouTube isn’t the only platform afflicted by the hoax – the topic is being discussed across social media sites, including Facebook, Instagram, and Twitter.

Companies including Nestle, Epic and reportedly Disney suspend YouTube ads over child exploitation concerns

Days after a YouTube creator accused the platform of enabling a “soft-core pedophilia ring,” several companies have suspended advertising on the platform, including Nestle, Epic, and reportedly Disney and McDonald’s.

Nestle told CNBC that all of its companies in the U.S. have paused advertising on YouTube, while a spokesperson for Epic, maker of the massively popular game Fortnite, said it has suspended all pre-roll advertising. Other companies that confirmed publicly they are pausing YouTube advertising include Purina, GNC, Fairlife, Canada Goose, and Vitacost. Bloomberg and the Wall Street Journal report that Walt Disney Co. and McDonald’s, respectively, have pulled advertising, too.

Other advertisers, including Peloton and Grammarly, said they are calling on YouTube to resolve the issue.

The latest scandal over YouTube’s content moderation problems took off on Sunday when YouTube creator Matt Watson posted a video and in-depth Reddit post describing how pedophiles are able to manipulate the platform’s recommendation algorithm to redirect a search for “bikini haul” videos, featuring adult women, to exploitative clips of children. Some otherwise innocuous videos also had inappropriate comments, including some with timestamps that captured children in compromising positions.

A YouTube spokesperson sent a statement to TechCrunch that said “Any content – including comments – that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We took immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.”

The platform has also reported comments to the National Center for Missing and Exploited Children and is taking further steps against child exploitation, including hiring more experts.

Watson’s report, however, highlights that YouTube continues to struggle with content that violates its own policies, even after a series of reports two years ago led to what creators dubbed the “adpocalypse.” In an effort to appease advertisers, YouTube gave them more control over which videos their ads would appear before and also enacted more stringent policies for creators. Many YouTubers, however, have complained that the policies are unevenly enforced and lack transparency, dramatically lowering their revenue while giving them little recourse to fix issues or appeal the platform’s decisions, even as objectionable content remains on the platform.