The RIAA is coming for the YouTube downloaders

In ye olden days of piracy, RIAA takedown notices were a common thing — I received a few myself. That’s mostly fallen off as tracking pirates has gotten more difficult, but the RIAA can still issue nastygrams — to the creators of software that could potentially be used to violate copyright, like YouTube downloaders.

One such popular tool used by many developers, YouTube-DL, has been removed from GitHub for now following an RIAA threat, as noted by Freedom of the Press Foundation’s Parker Higgins earlier today.

This is a different kind of takedown notice than the ones we all remember from the early 2000s, though. Those were the innumerable DMCA notices that said “your website is hosting such-and-such protected content, please take it down.” And they still exist, of course, but lots of that has become automated, with sites like YouTube removing infringing videos before they even go public.

What the RIAA has done here is demand that YouTube-DL be taken down because it violates Section 1201 of U.S. copyright law, which basically bans stuff that gets around DRM: “No person shall circumvent a technological measure that effectively controls access to a work protected under this title.”

That’s so it’s illegal not just to distribute, say, a bootleg Blu-ray disc, but also to break its protections and duplicate it in the first place.

If you stretch that logic a bit, you end up including things like YouTube-DL, which is a command-line tool that takes in a YouTube URL and points the user to the raw video and audio, which of course have to be stored on a server somewhere. With the location of the file that would normally be streamed in the YouTube web player, the user can download a video for offline use or backup.
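
For the curious, here is a minimal sketch of how the tool is typically driven from Python (youtube-dl ships as both a command-line utility and a library). The URL is a placeholder and the options shown are just a common subset, not a prescription:

```python
# Minimal sketch: resolving and saving a video with youtube-dl's Python API.
# The URL below is a placeholder, not a real video ID.
import youtube_dl  # pip install youtube_dl

options = {
    "format": "best",                # best available combined audio/video stream
    "outtmpl": "%(title)s.%(ext)s",  # name the output file after the video title
}

url = "https://www.youtube.com/watch?v=EXAMPLE_ID"

with youtube_dl.YoutubeDL(options) as ydl:
    # extract_info resolves the page to the underlying media URLs;
    # with download=True it also fetches the file for offline use.
    info = ydl.extract_info(url, download=True)
    print(info.get("title"), info.get("ext"))
```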

But what if someone were to use that tool to download the official music video for Taylor Swift’s “Shake It Off”? Shock! Horror! Piracy! YouTube-DL enables this, so it must be taken down, they write.

As usual, it only takes a moment to arrive at analogous (or analog) situations that the RIAA has long given up on. For instance, wouldn’t using a screen and audio capture utility accomplish the same thing? What about a camcorder? Or for that matter, a cassette recorder? They’re all used to “circumvent” the DRM placed on Tay’s video by creating an offline copy without the rights-holder’s permission.

Naturally this takedown will do almost nothing to prevent the software, which was probably downloaded and forked thousands of times already, from being used or updated. There are also dozens of sites and apps that do this — and by the logic in this letter, the RIAA may very well take action against them as well.

Of course, the RIAA is bound by duty to protect against infringement, and one can’t expect it to stand by idly as people scrape official YouTube accounts to get high-quality bootlegs of artists’ entire discographies. But going after the basic tools is like the old, ineffective “Home taping is killing the music industry” line. No one’s buying it. And if we’re going to talk about wholesale theft of artists, perhaps the RIAA should get its own house in order first — streaming services are paying out pennies with the Association’s blessing. (Go buy stuff on Bandcamp instead.)

Tools like YouTube-DL, like cassette tapes, cameras and hammers, are tech that can be used legally or illegally. Fair use doctrines allow tools like these for good-faith efforts like archiving content that might be lost because Google stops caring, or for people who for one reason or another want to have a local copy of some widely available, free piece of media for personal use.

YouTube and other platforms, likewise in good faith, do what they can to make obvious and large-scale infringement difficult. There’s no “download” button next to the latest Top 40 hit, but there are links to buy it, and if I used a copy — even one I’d bought — as background for my own video, I wouldn’t even be able to put it on YouTube in the first place.

Temporarily removing YouTube-DL’s code from GitHub is a short-sighted reaction to a problem that can’t possibly amount to more than a rounding error in the scheme of things. The music industry probably loses more money to people sharing logins. YouTube-DL, or something very much like it, will be back soon, a little smarter and a little better, making the RIAA’s job that much harder, and the cycle will repeat.

Maybe the creators of Whack-a-Mole will sue the RIAA for infringement on their unique IP.

YouTube Premium subscribers get a new perk with launch of testing program

YouTube has long allowed its users to test new features and products before they go live to a wider audience. But in a recent change, YouTube’s latest series of experiments is being limited to those who subscribe to the Premium tier of YouTube’s service. Currently, paid subscribers are the only ones able to test several new product features, including one that allows iOS users to watch YouTube videos directly on the homescreen.

This is not the same thing as the Picture-in-Picture option that’s become available to app developers with iOS 14, to be clear. Instead, YouTube says this feature allows users who are scrolling on their YouTube homepage to watch videos with the sound on while they scroll through their feed.

Two other experiments are related to search. One lets you filter topics you search for by additional languages, including Spanish, French, or Portuguese. The other lets you use voice search to pull up videos when using the Chrome web browser.

Image Credits: YouTube, screenshot via TechCrunch

None of these tests will be very lengthy, however. Two of the three new experiments wrap up on Oct. 20, 2020, for example; the other wraps on Oct. 27. And they’ve only been live for a few weeks.

In years past, YouTube had allowed all users to try out new features in development from a dedicated site dubbed “TestTube.” In more recent years, however, it began to use the website YouTube.com/new to direct interested users to upcoming features before they rolled out publicly. For example, when YouTube introduced its redesign in 2017, users could visit that same website to opt-in to the preview ahead of its launch.

Now, the site is being used to promote other limited-time tests.

YouTube says the option to test the features was highlighted to Premium subscribers a few weeks ago within the YouTube app. It’s also the first time that YouTube has run an experimentation program tied to the Premium service, we’re told.

The company didn’t make a formal public announcement, but the addition was just spotted by several blogs, including XDA Developers and Android Central.

Contrary to some reports, however, it does not appear that YouTube’s intention is to close off all its experiments to anyone except its paid subscribers. The company’s own help documentation, in fact, notes this limitation will only apply to “some” of its tests. 

YouTube also clarified to TechCrunch that the tests featured on the site represent only a “small minority” of those being run across YouTube, and are not at all inclusive of the broader set of product experiments the company runs.

In addition, non-Premium users can sign up to be notified of additional opportunities to participate in other YouTube research studies. This option appears at the bottom of the YouTube.com/new page.

YouTube says the goal with the new experiments is two-fold: it allows product teams to get feedback on different features, and it allows Premium subscribers to act as early testers, if they want to.

Premium users who choose to participate can opt into and out of the new features individually, but can only try out one experiment at a time.

This could serve to draw more YouTube users to the Premium subscription, as there’s a certain amount of clout involved with being able to try out features and products ahead of the general public. Consider it another membership perk then — something extra on top of the baseline Premium tier features like ad-free videos, downloads, background play and more.

YouTube, which today sees over 2 billion monthly users, said earlier this year it has converted at least 20 million users to a paid subscription service (YouTube Premium or YouTube Music). As of Q3 2020, YouTube was the No. 3 largest app by consumer spend worldwide across iOS and Android, per App Annie data.

A quarter of US adults now get news from YouTube, Pew Research study finds

Around a quarter of U.S. adults, or roughly 26%, say they get news by watching YouTube videos, according to a new study from Pew Research Center, which examined the Google-owned video platform’s growing influence over news distribution in the U.S., as well as its consumption. The study, not surprisingly, found that established news organizations no longer have full control over the news Americans watch: just 23% of YouTube news consumers said they “often” get their news from channels affiliated with established news organizations, and the exact same percentage said they “often” get their news from independent channels instead.

Independent channels in this study were defined as those that do not have a clear external affiliation. A news organization channel, meanwhile, would be a channel associated with an external news organization — like CNN or Fox News, for instance.

These two different types of news channels are common, Pew found, as 49% of popular news channels are affiliated with a news organization, while 42% are not.

A small percentage (9%) came from “other” organizations publishing news, including government agencies, research organizations and advocacy organizations.

Image Credits: Pew Research

To determine its findings, Pew Research ran a representative panel survey of 12,638 U.S. adults from January 6-January 20, 2020.

The study found that a majority of YouTube news consumers, or 72%, said YouTube was either an important (59%) or the most important (13%) way they get their news. Most also said they didn’t see any big issues with getting their news from the site, but they did express some moderate concern about misinformation, political bias, YouTube’s demonetization practices and censorship.

Image Credits: Pew Research

Republicans and independents who lean Republican were more likely to say censorship, demonetization and political bias were YouTube’s biggest problems, while Democrats and independents who lean Democrat were more likely to say the biggest problems were misinformation and harassment.

A second part of the research involved a content analysis of the 377 most popular YouTube news channels in November 2019, as well as the videos published in December 2019 by the 100 channels with the highest median view counts. This was performed by a combination of human coders and computational methods, Pew says.

The analysis discovered that more than four-in-ten (44%) popular YouTube channels can be characterized as “personality-driven,” meaning the channel is oriented around an individual. This could be a journalist employed by an established news organization or it could be an independent host.

However, it’s more often true of the latter, as 70% of independent channels are centered around an individual, often a “YouTuber” who has gained a following. Indeed, 57% of independent channels are YouTuber-driven versus the 13% centered around people who were public figures before gaining attention on YouTube.

The study also looked into other aspects of the YouTube news environment and the topics being presented.

According to YouTube news consumers themselves, a clear majority (66%) said watching YouTube news videos helped them to better understand current events; 73% said they believe the videos to be largely accurate, and they tend to watch them closely (68% do) instead of playing them in the background.

Around half (48%) said they’re looking for “straight reporting” on YouTube — meaning, information and facts only. Meanwhile, 51% said they are primarily looking for opinions and commentary.

In response to an open-ended question about why YouTube was a unique place to get the news, the most common responses involved those related to the content of the videos — for instance, that they included news outside the mainstream or that they featured many different opinions and views.

Image Credits: Pew Research

Pew also examined how often news channels mentioned conspiracy theories, like those related to QAnon, Jeffrey Epstein and the anti-vax movement.

An analysis of nearly 3,000 videos by the 100 most viewed YouTube channels in December 2019 found that 21% of videos by independent channels mentioned a conspiracy theory, compared with just 2% of those from established news organizations. QAnon was the most commonly referenced conspiracy theory, as 14% of videos from independent channels had discussed it, compared with 2% of established news organizations.

Independent channels were also about twice as likely as established news organizations to present the news with a negative tone.

Overall, most of the videos (69%) from the top 100 most viewed YouTube news channels assessed in December 2019 were neither negative nor positive in tone. But broken down by type, 37% of videos on the independent channels were negative, compared with 17% for established news organizations. Negative videos were more popular, too: across all channels, negative videos averaged 184,000 views, compared with 172,000 for neutral or mixed-tone videos and 117,000 views for positive videos.

Image Credits: Pew Research

Meanwhile, videos about the Trump administration made up the largest share of views in December 2019: roughly a third (36%) were about the impeachment and 31% were about other domestic issues, like gun control, abortion or immigration. Another 9% were about international affairs. Videos about the Trump administration saw around 250,000 average views, compared with videos on other topics, which averaged 122,000 views. Trump himself was the most common focus, at the center of about a quarter (24%) of the videos studied.

Videos about the 2020 elections, which at the time were centered around the Democratic primary, were the topic of just 12% of news videos, by comparison.

Image Credits: Pew Research

The study also examined how YouTube news channels presented themselves. It found that the vast majority don’t clearly state a political ideology even when the content of their videos makes it clear they have an ideological slant.

Only around 12% of YouTube news channels presented their political ideology in their description. Of those, 8% were right-leaning and 4% were left-leaning. Independent news channels were more likely to present themselves using partisan terms and more likely to say they leaned right.

The demographics of the typical YouTube news consumer were part of the study, too. Pew Research found that news video viewers were more likely to be young and male, and less likely to be White, compared with U.S. adults overall. About a third (34%) are under the age of 30, compared with 21% of all U.S. adults; 71% are under 50, compared with 55% of U.S. adults overall.

YouTube news consumers are also more likely to be male (58%), compared with 48% of U.S. adults overall. Half (50%) are White, 14% are Black and 25% are Hispanic. Among U.S. adults overall, 63% are White, 12% are Black and 16% are Hispanic.

The full study is available via the Pew Research Center website.

YouTube will add mail-in voting info box next to videos that discuss voting by mail

YouTube has been relatively quiet about its strategy to battle the flow of misinformation leading into the 2020 U.S. election, but the platform has a few new measures in store.

The video sharing site will begin attaching a box with vetted facts about mail-in voting to any videos that discuss the topic. The new mail-in voting info boxes will link out to the Bipartisan Policy Center, a bipartisan think tank.

YouTube first rolled these info panels out in 2018 and this year expanded them to address misinformation around COVID-19. The platform’s fact-checking info boxes resemble similarly unobtrusive info labels on Twitter and Facebook. While Twitter in particular has begun taking stronger action on election-related posts that break platform rules, social platforms have opted to broadly serve up contextual facts rather than targeting misinformation with more eye-catching warnings.

Example of YouTube COVID-19 info panel

YouTube is a little late to the party, but it will also add a few features encouraging users to register to vote. Searches about voter registration will soon point users to an info box at the top of the page leading them to state-specific resources like registration deadlines and how to check voter registration status.

Similarly, queries about “how to vote” will point YouTube users to vetted information from non-partisan third-party partners about state voting rules, requirements and deadlines. These searches will surface voting resources in both English and Spanish. The company will also begin surfacing new information in searches for federal candidates for Congress or the presidency.

Like Snapchat, Twitter, and Facebook, YouTube is also launching its own set of original informational election videos that will package facts on voting in the U.S. The YouTube videos take a playful approach, spoofing popular video trends like cooking tutorials. YouTube will also add reminders during “key moments” for the 2020 election reminding users to register and telling them how and where to vote.

YouTube hit with UK class action style suit seeking $3BN+ for ‘unlawful’ use of kids’ data

Another class action style lawsuit has been lodged against a tech giant in the UK alleging violations of privacy and seeking major damages. The latest representative action, filed against Google-owned YouTube, accuses the platform of routinely breaking UK and European data protection laws by unlawfully targeting up to five million under-13-year-olds with addictive programming and harvesting their data for advertisers.

UK and EU law contain specific protections for children’s data, limiting the age at which minors can legally consent to their data being processed — in the case of the UK’s Data Protection Act, to age 13.

The suit is being brought by international law firm Hausfeld and Foxglove, a tech justice non-profit, which say they’re seeking damages from YouTube of more than £2.5BN (~$3.2BN).

Per the firms, it’s the first such representative litigation brought against a tech giant on behalf of children and among the largest such cases to date. (Last month a similar class style action was filed against Oracle in the UK alleging breaches of Europe’s General Data Protection Regulation (GDPR) related to cookie tracking.)

If the case succeeds, they say millions of British households whose kids watch YouTube may be owed “hundreds of pounds” in damages.

Duncan McCann, a researcher on the digital economy and father of three children all under 13 who watch YouTube and have their data collected and ads targeted at them by Google, is serving as representative claimant in the case.

Commenting in a statement, McCann said: “My kids love YouTube, and I want them to be able to use it. But it isn’t ‘free’ — we’re paying for it with our private lives and our kids’ mental health. I try to be relatively conscious of what’s happening with my kids’ data online but even so it’s just impossible to combat Google’s lure and influence, which comes from its surveillance power. There’s a massive power imbalance between us and them, and it needs to be fixed.”

“The [YouTube] website has no practical user age requirements and makes no adequate attempt to limit usage by youngsters,” notes Hausfeld in a press release about the lawsuit.

A Foxglove release about the suit, meanwhile, points to YouTube pitch materials intended for toy makers Mattel and Hasbro (made public via an earlier FTC suit against Google) in which it says the platform described itself as “the new Saturday morning cartoons”, “the number one website visited regularly by kids”, “today’s leader in reaching children age 6-11 against top TV channels”, and “unanimously voted as the favorite website of kids 2-12”.

Reached for comment, a YouTube spokesperson sent us this statement: “We don’t comment on pending litigation. YouTube is not for children under the age of 13. We launched the YouTube Kids app as a dedicated destination for kids and are always working to better protect kids and families on YouTube.”

The tech giant maintains that YouTube is not for under 13s — pointing to the existence of YouTube Kids, a dedicated kids’ app it launched in 2015 to offer what it called a “safer and easier” space for children to discover “family-focused content”, to back up the claim.

The company has never claimed, however, that no children under 13 use YouTube. And last year the FTC agreed to a $170M settlement with Google to end an investigation by the regulator and the New York Attorney General into alleged collection of children’s personal information by YouTube without the consent of their parents.

The rise in class action style lawsuits being filed in the UK seeking damages for breaches of data protection law follows a notable appeals court decision, just under a year ago, also against Google.

In that case the appeals court unblocked a class-action style lawsuit against the tech giant related to bypassing iOS privacy settings to track iPhone users.

In the US, Google paid $22.5M to the FTC back in 2012 to settle the same charge, and later paid a smaller sum to settle a number of US class action lawsuits. The UK case, meanwhile, continues.

While Europe has historically strong data protection laws, there has been — and still is — a lack of robust regulatory enforcement, which leaves a gap that litigation funders are increasingly willing to plug.

In the UK, the challenge for those seeking damages for large scale violations is that there’s no direct equivalent to a US class action. But last year’s appeals court ruling in the Safari bypass case has opened the door to representative actions.

The court also said damages could be sought for a breach of the law without needing to prove pecuniary loss or distress, establishing a route to redress for consumers that’s now being tested by several cases.

Graphic video of suicide spreads from Facebook to TikTok to YouTube as platforms fail moderation test

A graphic video of a man dying by suicide on Facebook Live has spread from there to TikTok, Twitter, Instagram and now YouTube, where his image ran alongside ads and attracted thousands more views. Do what they will, these platforms can’t seem to stop the spread, echoing past failures to block violent acts and disinformation.

The original video was posted to Facebook two weeks ago and has made its way onto all the major video platforms, often beginning with innocuous footage, then cutting to the man’s death. These techniques go back many years in the practice of evading automatic moderation; by the time people have flagged the video manually, the original goal of exposing unwitting viewers to it will have been accomplished.

It’s similar in many ways to the way in which COVID-19 disinformation motherlode Plandemic spread and wreaked havoc despite these platforms deploying their ostensibly significant moderating resources towards preventing that.

For all the platforms’ talk of advanced algorithms and instant removal of rule-violating content, these events seem to show them failing when it counts the most: in extremity.

The video of Ronnie McNutt’s suicide originated on August 31, and it took nearly three hours to be taken down in the first place, by which time it had been seen and downloaded by innumerable people. How could something so graphic, so plainly in violation of the platform’s standards and actively flagged by users, be allowed to stay up for so long?

In a “community standards enforcement report” issued Friday, Facebook admitted that its army of (contractor) human reviewers, whose thankless job it is to review violent and sexual content all day, had been partly disabled due to the pandemic.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram,” Facebook wrote.

“The number of appeals is also much lower in this report because we couldn’t always offer them. We let people know about this and if they felt we made a mistake, we still gave people the option to tell us they disagreed with our decision.”

McNutt’s friend and podcast co-host Josh Steen told TechCrunch that the stream had been flagged long before he killed himself. “I firmly believe, because I knew him and how these interactions worked, had the stream ended it would’ve diverted his attention enough for SOME kind of intervention,” Steen wrote in an email. “It’s pure speculation, but I think if they’d have cut his stream off he wouldn’t have ended his life.”

When I asked Facebook about this, I received the same statement others have: “We are reviewing how we could have taken down the livestream faster.” One certainly hopes so.

But Facebook cannot contain the spread of videos like this — and the various shootings and suicides that have occurred on its Live platform in the past — once they’re out there. At the same time, it’s difficult to imagine how other platforms are caught flat-footed: TikTok had the video queued up in users’ “For You” page, exposing countless people by an act of algorithmic irresponsibility. Surely even if it’s not possible to keep the content off the service entirely, there ought to be something preventing it from being actively recommended to people.

YouTube is another, later offender: Steen and others have captured many cases of McNutt’s video or image being used, sometimes being monetized. He sent screenshots and video showing ads from Squarespace and the Motley Fool running ahead of the video of McNutt.

It’s disappointing that the largest video platforms on the planet, which seem to never cease crowing about their prowess in shutting down this kind of content, don’t seem to have any serious response. TikTok, for instance, bans any account that makes multiple attempts to upload the clip. What’s the point of giving people a second or third chance here?

Facebook couldn’t seem to decide whether the content is in violation or not, as evidenced by several re-uploads of the content in various forms that were not taken down when flagged. Perhaps these are just the ones slipping through the cracks, while thousands more are nipped in the bud, but why should we give a company like Facebook, which commands billions of dollars and tens of thousands of employees, the benefit of the doubt when they fail for the nth time on something so important?

“Facebook went on record in early August saying they were returning back to normal moderation rates, but that their AI tech actually had been improved during the COVID slow downs,” Steen said. “So why’d they totally blow their response to the livestream and the response time after?”

“We know from the Christchurch Live incident that they have the ability to tell us a couple of things that really need to be divulged at this point because of the viral spread: how many people in total viewed the livestream and how many times was it shared, and how many people viewed the video and how many times was it shared? To me these stats are important because it shows the impact that the video had in real time. That data will also confirm, I think, where the viewership spiked in the livestream,” he continued.

On Twitter and Instagram, entire accounts have popped up just to upload the video, or impersonate McNutt using various transformations of his username. Some even add “suicide” or “dead” or the like to the name. These are accounts created with the singular intent of violating the rules. Where are the fake and bot activity precautions?

Videos of the suicide have appeared on YouTube, and others simply use McNutt’s image or the earlier parts of his stream to attract viewers. Steen and others who knew McNutt have been reporting these regularly, with mixed success. YouTube says it removes any copies that appear and demonetizes others that discuss or show parts of it, in keeping with its policy on sensitive content.

One channel I saw had pulled in more than half a million views by leveraging McNutt’s suicide, originally posting the live video (with a preroll ad) and then using his face on other videos, perhaps to attract morbid viewers. When I pointed these out to YouTube, they demonetized them and removed one of them — though Steen and his friends had reported it days ago. I can’t help but feel that the next time this happens — or more likely, elsewhere on these platforms where it is happening right now — there will be less or no accountability because there are no press outlets making a fuss.

The focus from the platforms is on invisible suppression of the content and retention of users and activity; if stringent measures reduce those all-important metrics, they won’t be taken, as we’ve seen on other social media platforms.

But as this situation and others before it demonstrate, there seems to be something fundamentally lacking from the way this service is provided and monitored. Obviously it can be of enormous benefit, as a tool to report current events and so on, but it can be and has been used to stream horrific acts and for other forms of abuse.

“These companies still aren’t fully cooperating and still aren’t really being honest,” said Steen. “This is exactly why I created #ReformForRonnie because we kept seeing over and over and over again that their reporting systems did nothing. Unless something changes it is just going to keep happening.”

Steen is feeling the loss of his friend, of course, but also disappointment and anger at the platforms that allow his image to be abused and mocked with only a perfunctory response. He’s been rallying people around the hashtag to put pressure on the major social platforms to say something, anything substantial about this situation. How could they have prevented this? How can they better handle it when it is already out there? How can they respect the wishes of loved ones? Perhaps none of these things are possible — but if that’s the case, don’t expect them to admit it.

If you or someone you know needs help, please call the National Suicide Prevention Lifeline at 800-273-TALK (8255) or text the Crisis Text Line at 741-741. International resources are available here.

Recorded music revenue is up on streaming growth, as physical sales plummet

With touring ground to a halt for the foreseeable future, 2020 has become the most difficult year for musicians in recent memory. One’s ability to survive on music depends on a variety of factors, of course, including things like audience, reach and how their fans access their output.

The world of recorded music has been a mixed bag throughout the pandemic. New industry figures from the Recording Industry Association of America out this week show that revenue for recorded music is actually up for the first half of 2020, owing, unsurprisingly, to the growth of music streaming.

With vastly more people stuck inside seeking novel methods of entertainment, paid subscriptions (Spotify, Apple Music, et al.) are up 24% year-over-year. Revenues from streaming music are up 12% overall, hitting $2.4 billion for the first half of the year. The figure has been hampered by an overall drop in ad sales that certainly isn’t limited to the music industry. That has had a sizable impact on services like YouTube, Vevo and Spotify’s free tier.

Physical sales of CDs and vinyl took a massive hit to an already rocky foundation, down 23% for that time period. Streaming now makes up 85% of all revenue in the U.S., with physical sales only commanding 7% — just slightly higher than the 6% made by digital downloads. It’s a troubling figure, given the difficulty many more independent artists have faced in monetizing streaming.

Spotify CEO Daniel Ek faced backlash from the industry for comments surrounding streaming revenue. “There is a narrative fallacy here, combined with the fact that, obviously, some artists that used to do well in the past may not do well in this future landscape, where you can’t record music once every three to four years and think that’s going to be enough,” the executive said in a recent interview.

The comments came as many musicians have struggled to keep their heads above water during a sustained touring hiatus. They also come as the streaming service has continued to pump money into acquisitions in an attempt to build out its podcasting presence.

Movies Anywhere officially launches its digital movie-lending feature, ‘Screen Pass’

Digital locker service Movies Anywhere is today officially launching its movie-sharing feature dubbed “Screen Pass,” which lets you lend out one of your purchased movies to a friend or family member. The feature was rushed into beta testing this March, followed by a more open public beta trial in April, thanks to increased demand from consumers stuck at home during coronavirus government lockdowns.

The Movies Anywhere app today allows customers centralized access to their purchased movies from across a number of services, including iTunes, Vudu, Prime Video, YouTube, Xfinity and many others.

Today, the app is jointly operated by Disney, Universal, WB, Sony Pictures and 20th Century Fox. The product itself had evolved from a 2014 version known as Disney Movies Anywhere, but later migrated to a new platform in 2017. The app was also rebuilt to accommodate an expanded group of operating partners, rebranded, and now operates as a different business than it did in years past.

In April, Movies Anywhere reported over 6,000 of the titles in its app were Screen Pass-eligible. Since then, it’s added 500 more movies to the collection, including “Who Framed Roger Rabbit?,” “A Star Is Born Encore,” “The Muppets,” and “National Treasure.” With the additions, over 80% of its library titles can be shared through the new feature.

Image Credits: Movies Anywhere

To share a title, you’ll click the Screen Pass icon on the title and enter the details, like the recipient’s information. Shared movies can be sent out over text, email or message to the recipient, who has a week to accept. The shared movie then works like a digital movie rental, for the most part, as the viewer will have up to 14 days to watch and up to 72 hours to complete viewing after the movie has been started. But unlike rentals, the recipient doesn’t have to pay to watch — it’s free to both share and watch.
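
To make those lending windows concrete, here is a small illustrative sketch that simply restates the timing rules described above (a week to accept, 14 days to watch, 72 hours to finish once playback starts). It is purely hypothetical; Movies Anywhere does not expose a public API like this, and the names are made up for illustration.

```python
# Illustrative sketch only: models the Screen Pass windows described above.
# Not a real Movies Anywhere API; class and field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

ACCEPT_WINDOW = timedelta(days=7)    # recipient has a week to accept the pass
WATCH_WINDOW = timedelta(days=14)    # up to 14 days to watch after accepting
FINISH_WINDOW = timedelta(hours=72)  # 72 hours to finish once playback starts

@dataclass
class ScreenPass:
    sent_at: datetime
    accepted_at: Optional[datetime] = None
    started_at: Optional[datetime] = None

    def accept(self, now: datetime) -> bool:
        """Accept the pass if it hasn't expired yet."""
        if now - self.sent_at <= ACCEPT_WINDOW:
            self.accepted_at = now
            return True
        return False

    def can_watch(self, now: datetime) -> bool:
        """Return True while the recipient is still inside a viewing window."""
        if self.accepted_at is None:
            return False
        if self.started_at is not None:
            return now - self.started_at <= FINISH_WINDOW
        return now - self.accepted_at <= WATCH_WINDOW
```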

The company says early data from Screen Pass beta tests indicated the feature has the potential to drive new acquisitions and purchases.

45% of senders shared a movie using Screen Pass because someone else had first shared a movie with them. 30% of those who received a shared movie were new to the Movies Anywhere platform. A small number of movies drove around 9% of total shares, including “Ready Player One,” “The Prestige,” “Tombstone,” “The Mule,” “Bad Times at the El Royale,” and “Jaws.” This data indicates that Screen Pass shares weren’t limited to newer titles, as one may expect, but also included older classics.

In addition, around half of sharers (53%) chose the movie they were lending, versus 47% who let the recipient choose. But this could be due to how the Screen Pass beta test was structured, as recipients would be opted into the beta test by accepting a share from another member. It’s likely that some portion of the early group was simply inviting their friends by sharing a title with them.

In addition to Screen Pass sharing, Movies Anywhere also recently introduced a co-viewing feature called Watch Together that offers a synced viewing experience with up to nine other people. This product works via Screen Pass and competes with a variety of solutions that emerged or grew in popularity amid the pandemic, including those from Hulu, Amazon and third parties like Scener and Netflix Party, among others.

Screen Pass is launching today to all Movies Anywhere customers in the U.S. The Movies Anywhere app works across a range of devices, including iOS and Android mobile, Apple TV, Roku, Kindle Fire, Amazon Fire TV, Chromecast, and LG and Vizio smart TVs.

TikTok’s rivals in India struggle to cash in on its ban

For years, India has served as the largest open battleground for Silicon Valley and Chinese firms searching for their next billion users.

With more than 400 million WhatsApp users, India is already the largest market for the Facebook-owned service. The social juggernaut’s big blue app also reaches more than 300 million users in the country.

Google is estimated to reach just as many users in India, with YouTube closely rivaling WhatsApp for the most popular smartphone app in the country.

Several major giants from China, like Alibaba and Tencent, also count India as their largest overseas market (China itself shut its doors to most foreign firms a decade ago). At its peak, Alibaba’s UC Web gave Google’s Chrome a run for its money. And then there is TikTok, which also identified India as its biggest market outside of China.

Though the aggressive arrival of foreign firms in India helped accelerate the growth of the local ecosystem, their capital and expertise also created a level of competition that made it too challenging for most Indian firms to claim a slice of their home market.

New Delhi’s ban on 59 Chinese apps on June 30, imposed on the basis of cybersecurity concerns, has changed a lot of this.

Indian apps that rarely made an appearance in the top 20 have now flooded the charts. But are these skyrocketing download figures translating to sustaining users?

An industry executive leaked download, monthly active user, weekly active user and daily active user figures from one of the top mobile insight firms. In this Extra Crunch report, we take a look at the changes New Delhi’s ban has brought to the world’s second largest smartphone market.

TikTok copycats

Scores of startups in India, including news aggregator DailyHunt, on-demand video streamer MX Player and advertising giant InMobi Group, have launched their short-video format apps in recent months.

Google warns users in Australia free services are at risk if it’s forced to share ad revenue with “big media”

Google has fired a lobbying pot-shot at a looming change to the law in Australia that will force it to share ad revenue with local media businesses whose content its platforms monetize — seeking to mobilize its users against “big media”.

Last month Australia’s Competition and Consumer Commission (ACCC) published a draft of a mandatory code that seeks to address what it described as “acute bargaining power imbalances” between local news media and the tech giants Facebook and Google, by requiring the companies to negotiate in good faith, backed by a binding “final offer” arbitration process.

Back in April the country’s government announced it would adopt a mandatory code requiring the two tech giants to share ad revenue with media businesses, after an attempt to negotiate a voluntary arrangement with the companies failed to make progress.

In an open letter addressing users in Australia, which is attributed to Mel Silva, MD for Google Australia, the tech giant warns that their experience of its products will suffer and their data could be at risk as a consequence of the regulation. It also suggests it may no longer be able to offer free services in the country.

The letter is being pushed at users of Google search in the country via a pop-up that warns “the way Aussies use Google is at risk”, according to the Guardian.

“This law wouldn’t just impact the way Google and YouTube work with news media businesses — it would impact all of our Australian users, so we wanted to let you know,” Google writes, adding that it’s “going to do everything we possibly can to get this proposal changed”.

In the blog post, it deploys three scare tactics to try to recruit users to lobby the government on its behalf — claiming the regulation will result in:

  1. a “dramatically worse Google Search and YouTube”: Google says the content users see will be less relevant and “helpful” as it will be forced to give news businesses information that will help them “artificially” inflate their ranking “over everyone else”
  2. risks to users’ search data because Google will have to tell news media businesses “how they can gain access” to data about their use of its products. “There’s no way of knowing if any data handed over would be protected, or how it might be used by news media businesses,” adds the data-mining tech giant
  3. overarching risks to free Google services: giving “big media companies” special treatment, it claims, will encourage them to make “enormous and unreasonable demands that would put our free services at risk”

Google’s open letter instructs users to expect to hear more from it in the coming days — without offering further detail — so it remains to be seen what additional scare tactics the company cooks up.

Consultation on the draft code closes on August 28, with the ACCC saying last month that it intends for it to be finalized “shortly”, so Google’s window to lobby for changes is fast closing.

Google is not the first tech giant to try to repurpose the reach and scale of its platform to mobilize its own users to drum up helpful opposition to government action that threatens its corporate interests.

Over the last half decade or so, similar tactics have been deployed by a variety of gig economy platforms, including Airbnb, Lyft and Uber, to try to politicize and overturn regulations that present a barrier to their continued growth.

Such efforts have, it must be said, only had very fleeting successes vs the scale of the platforms’ regulatory ‘reform’ ambitions. (Gig giants Uber and Lyft are facing a huge fight in their own backyard on key issues like worker reclassification, for example, so in fact regulators and courts have successfully pushed back against BS.)

But it’s interesting to see the tactic moving onto the front page of Google — perhaps signalling the scale of alarm the company feels over the prospect of being forced to share ad revenue with publishers whose content it monetizes, creating a model that other countries and regions might seek to follow.

In a statement responding to Google’s open letter, the ACCC went on the attack — accusing the tech giant of publishing “misinformation” about the draft code.

“Google will not be required to share any additional user data with Australian news businesses unless it chooses to do so,” the regulator writes, further asserting that any move to charge for free Google services like YouTube and search would be the company’s own decision.

“The draft code will allow Australian news businesses to negotiate for fair payment for their journalists’ work that is included on Google services. This will address a significant bargaining power imbalance between Australian news media businesses and Google and Facebook,” it goes on, adding: “A healthy news media sector is essential to a well-functioning democracy.”

Google’s parent entity, Alphabet, reported full year revenue of $161.8BN in 2019 — up from $136.8BN in 2018.