The Chainsmokers, Alexis Ohanian, Amy Schumer, Kevin Hart, Mark Cuban, Marshmello, and Snoop Dogg back Pearpop

Pearpop, the marketplace for social collaborations between the teeming hordes of musicians, craftspeople, chefs, clowns, diarists, dancers, artists, actors, acrobats, aspiring celebrities and actual celebrities, has raised $16 million in funding that includes what seems like half of Hollywood, along with Alexis Ohanian’s Seven Seven Six venture firm and Bessemer Venture Partners.

The funding was actually split between a $6 million seed round co-led by Ashton Kutcher and Guy Oseary’s Sound Ventures and Slow Ventures, with participation from Atelier Ventures and Chapter One Ventures, and a $10 million additional investment led by Ohanian’s Seven Seven Six with participation from Bessemer.

TechCrunch first covered pearpop last year and there’s no denying that the startup is on to something. It basically takes Cameo’s celebrity marketplace for private shout-outs and makes it public, allowing social media personalities to grow their followings by paying more popular personalities to shout out, duet, or comment on their posts.

“I’ve invested in pearpop because it’s been on my mind for a while that the creator economy has resulted in a lot of inequitable outcomes for creators, where I talked about the missing middle class of the creator economy,” said Li Jin, the founder of Atelier Ventures and author of a critical piece on creator economics, “The creator economy needs a middle class.”

“When I saw pearpop, I felt like there was really big potential for it to be one of the creators of the creative middle class. They’ve introduced this mechanism by which larger creators can help smaller creators, and everyone has something of value to offer everyone else in the ecosystem.”

Jin discovered pearpop through the TechCrunch piece, she said. “You wrote that article and then I reached out to the team.”

The idea was so appealing, it brought in a slew of musicians, athletes, actors and entertainers, including: Abel Makkonen (The Weeknd), Amy Schumer, The Chainsmokers, Diddy, Gary Vaynerchuk, Griffin Johnson, Josh Richards, Kevin Durant (Thirty 5 Ventures), Kevin Hart (HartBeat Ventures), Mark Cuban, Marshmello, Moe Shalizi, Michael Gruen (Animal Capital), MrBeast (Night Media Ventures), Rich Miner (Android co-founder) and Snoop Dogg.

“Pearpop has the potential to benefit all social media platforms by delivering new users and engagement, while simultaneously leveling the playing field of opportunity for creators,” said Alexis Ohanian, Founder, Seven Seven Six, in a statement. “The company has created a revolutionary new marketplace model that is set to completely reimagine how we think of social media monetization. As both a social media founder and an investor, I’m excited for what’s to come with pearpop.”

Already Heidi Klum, Loren Gray, Snoop Dogg, and Tony Hawk have gotten paid to appear in social media posts from aspiring auteurs on the social media platform TikTok.

Using the platform is relatively simple. A social media user (for now, that means TikTok only) submits a post from their own feed and requests that another user interact with it in some way — either commenting, posting a video in response, or adding a sound. If the request seems okay, or “on brand”, the person who accepts it performs the prescribed action.

Pearpop takes a 25% cut of all transactions, with the social media user who performs the task getting the other 75%.
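That split is simple arithmetic, but it’s worth making concrete. Below is a minimal sketch of the payout math as described; the function name and the integer rounding are our own illustration, not pearpop’s actual code:

```python
# Illustrative sketch of pearpop's stated 25%/75% split (not pearpop's code).

def split_payment(price_cents: int) -> tuple[int, int]:
    """Return (platform_fee, creator_payout) in cents for a given price."""
    platform_fee = price_cents * 25 // 100       # pearpop's stated 25% cut
    creator_payout = price_cents - platform_fee  # remaining 75% to the creator
    return platform_fee, creator_payout

# Example: a $200 duet request
fee, payout = split_payment(20_000)
print(fee, payout)  # 5000 (platform), 15000 (creator)
```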

The company wouldn’t comment on revenue numbers, except to say that it’s on track to bring in seven figures this year.

Users on the platform set their prices and determine which kinds of services they’re willing to provide to boost the social media posts of their contractors.

Prices range anywhere from $5 to $10,000 depending on the size of a user’s following and the type of request that’s being made. Right now, the most requested personality on the marketplace is TikTok star Anna Banana.

These kinds of transactions do have an impact. The company said personalities on the platform have been able to increase their follower counts with the service. For instance, Leah Svoboda went from 20K to 141K followers after a pearpop duet with Anna Shumate.

If this all makes you feel like you’ve tripped and fallen through a Black Mirror into a dystopian hellscape where everything and every interaction is a commodity to be mined for money, well… that’s life.

“What I appreciate most about pearpop is the control it gives me as a creator,” said Anna Shumate, TikTok influencer @annabananaxdddd. “The platform allows me to post what I want and when I want. My followers still love my content because it’s authentic and true to me, which is what sets pearpop apart from all of the other opportunities on social media.”

Talent agencies, too, see the draw. Early adopters include Talent X, Get Engaged, Next Step Talent and The Fuel Injector, which has added its entire roster of talent, including Kody Antle, Brooke Monk and Harry Raftus, to pearpop, the company said.

“The initial concept came out of an obvious gap within the space: no marketplace existed for creators of all sizes to monetize through simple, authentic collaborations that are mutually beneficial,” said Cole Mason, co-founder & CEO, pearpop. “It soon became clear that this was a product that people had been waiting for, as thousands of people rely on our platform today to gain full control of their social capital for the first time, starting with TikTok.”

Deep fake video app Avatarify, which processes video on-phone, plans digital watermark for videos

Making deep fake videos used to be hard. Now all you need is a smartphone. Avatarify, a startup that allows people to make deep-fake videos directly on their phone rather than in the Cloud, is soaring up the app charts after being used by celebrities such as Victoria Beckham.

However, the problem with many deep fake videos is that there is no digital watermark to show that the video has been manipulated. Avatarify says it will soon launch a digital watermark to address this.
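Avatarify hasn’t said how its watermark will work, so the sketch below is purely conceptual: it shows a textbook least-significant-bit scheme that hides a machine-readable “this is synthetic” flag in pixel data. The function names and the one-bit flag are our assumptions, and a mark like this would not survive re-encoding, which is exactly why production video watermarking is harder than it looks:

```python
# Conceptual LSB watermark sketch; Avatarify has not disclosed its method.
import numpy as np

MARK = 0b1  # one-bit flag meaning "this frame was synthetically generated"

def embed_flag(frame: np.ndarray) -> np.ndarray:
    """Set the least significant bit of every blue-channel value to MARK."""
    out = frame.copy()
    out[..., 2] = (out[..., 2] & 0xFE) | MARK
    return out

def carries_flag(frame: np.ndarray) -> bool:
    """Treat a frame as flagged if nearly all blue-channel LSBs are set."""
    return (frame[..., 2] & 0x1).mean() > 0.9

frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
assert carries_flag(embed_flag(frame)) and not carries_flag(frame)
```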

Run out of Moscow but with a US HQ, Avatarify launched in July 2020 and has since been downloaded millions of times. The founders say 140 million deepfake videos were created with Avatarify this year alone, and videos tagged #avatarify now have 125 million views on TikTok. While its competitors include the well-funded Reface, as well as Snapchat, Wombo.ai, Mug Life and Xpression, Avatarify has yet to raise any money beyond an angel round.

That angel round amounted to just $120,000. The company has yet to accept any venture capital, says it has bootstrapped its way from zero to almost 10 million downloads, and claims a $10 million annual run-rate with a team of fewer than 10 people.

It’s not hard to see why. Avatarify has a freemium subscription model: a 7-day free trial, then a 12-month subscription for $34.99 or a weekly plan for $2.49. Without a subscription, the core features of the app are free, but videos carry a visible watermark.

The founders also say the app protects privacy, because the videos are processed directly on the phone, rather than in the cloud where they could be hacked.

Avatarify processes users’ photos and turns them into short videos by animating the faces with machine learning algorithms and adding sounds. The user picks a picture to animate, chooses effects and music, then taps to animate it. The resulting short video can be posted to Instagram or TikTok.

Avatarify videos are taking off on TikTok because teens no longer need to learn a dance, or do anything more creative than find a photo of a celebrity to animate.

Avatarify says you can’t use its app to impersonate someone, but there is of course no way to police this.

Founders Ali Aliev and Karim Iskakov wrote the app during the COVID-19 lockdown in April 2020. Aliev spent two hours writing a Python program to transfer his facial expressions to another person’s face and use it as a filter in Zoom. The result was a real-time video that could be streamed to Zoom. He joined a call wearing Elon Musk’s face, and everyone on the call was shocked. The team posted the video, which then went viral.
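The article doesn’t include Aliev’s code, but the general shape of such a tool is well understood: read webcam frames, run an expression-transfer model on each one, and write the result to a virtual camera that Zoom can select as its video source. Here’s a rough sketch under those assumptions; transfer_expression is a placeholder for the real face-animation model (for example, a first-order motion model):

```python
# Sketch of a real-time "animated face into Zoom" loop. Not Aliev's actual
# program; transfer_expression() is a stand-in for the animation model.
import cv2
import numpy as np
import pyvirtualcam  # exposes a virtual webcam that Zoom can select

source_face = cv2.cvtColor(cv2.imread("celebrity.jpg"), cv2.COLOR_BGR2RGB)

def transfer_expression(source: np.ndarray, driving: np.ndarray) -> np.ndarray:
    # Placeholder: a real implementation would warp `source` so it mimics
    # the facial expression found in the `driving` frame.
    return driving

cap = cv2.VideoCapture(0)  # the physical webcam drives the expressions
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        driving = cv2.cvtColor(cv2.resize(frame, (640, 480)), cv2.COLOR_BGR2RGB)
        cam.send(transfer_expression(source_face, driving))
        cam.sleep_until_next_frame()
```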

They then published the code on GitHub and immediately saw downloads grow. The repository went up on 6 April 2020 and, as of 19 March 2021, had been downloaded 50,000 times.

Aliev left his job at Samsung AI Centre and devoted himself to the app. After Avatarify’s iOS app was released on 28 June 2020, viral TikTok videos created with the app pushed it into the App Store’s top charts without paid acquisition. In February 2021, Avatarify ranked first among top free apps worldwide, and between February and March 2021 the app generated more than $1M in revenue (source: AppMagic).

However, despite Avatarify’s success, the broader problems with deep-fake videos remain, such as these apps being used to make non-consensual porn featuring the faces of innocent people.

Facebook, Instagram users can now ask ‘oversight’ panel to review decisions not to remove content

Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.

The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).

“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.

“Our decisions will be binding on Facebook,” it adds.

The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.

While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.

That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.

That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook, which devised and structured the FOB — writing the Board’s charter and bylaws, and hand-picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.

The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/man-power mean it can only ever consider a fantastically tiny subset of cases referred to it for review.

For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.

So, in other words, it needs to be a public body, functioning in the public interest.

Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.

Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.

Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former US president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.

The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that stuff down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the Board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file.)

How to ask for a review after content isn’t removed

To request that the FOB review a piece of content that’s been left up, a user of Facebook/Instagram first has to report the content to Facebook/Instagram.

If the company decides to keep the content up, Facebook says, the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.
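Facebook’s only public description of the ID is that ten-character, ‘FB’-prefixed format, which makes a basic sanity check easy to write; note that the alphanumeric character set assumed for the remaining eight characters below is our guess, not something Facebook has specified:

```python
import re

# Facebook describes the ID only as "a ten-character string that begins
# with 'FB'"; the alphanumeric charset for the other eight is assumed.
REFERENCE_ID = re.compile(r"FB[A-Za-z0-9]{8}")

def looks_like_reference_id(s: str) -> bool:
    return REFERENCE_ID.fullmatch(s) is not None

print(looks_like_reference_id("FB12AB34CD"))  # True
print(looks_like_reference_id("FB123"))       # False: too short
```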

There are several hoops to jump through to make an appeal: following on-screen instructions, Facebook says, the user will be taken to the Oversight Board website, where they need to log in with the account to which the reference ID was issued.

They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).

Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.

Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.

A further limitation is time, as Facebook notes there’s a deadline for appealing decisions to the FOB.

“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in. 

Facebook takes down 16,000 groups trading fake reviews after another poke by UK’s CMA

Facebook has removed 16,000 groups that were trading fake reviews on its platform after another intervention by the UK’s Competition and Markets Authority (CMA), the regulator said today.

The CMA has been leaning on tech giants to prevent their platforms being used as thriving marketplaces for selling fake reviews since it began investigating the issue in 2018 — pressuring both eBay and Facebook to act against fake review sellers back in 2019.

The two companies pledged to do more to tackle the insidious trade last year, after coming under further pressure from the regulator — which found that Facebook-owned Instagram was also a thriving hub of fake review trades.

The latest intervention by the CMA looks considerably more substantial than last year’s action — when Facebook removed a mere 188 groups and disabled 24 user accounts. Although it’s not clear how many accounts the tech giant has banned and/or suspended this time, it has removed orders of magnitude more groups. (We’ve asked.)

Facebook was contacted with questions but it did not answer what we asked directly, sending us this statement instead:

“We have engaged extensively with the CMA to address this issue. Fraudulent and deceptive activity is not allowed on our platforms, including offering or trading fake reviews. Our safety and security teams are continually working to help prevent these practices.”

Since the CMA has been raising the issue of fake review trading, Facebook has been repeatedly criticised for not doing enough to clean up its platforms, plural.

Today the regulator said the social media giant has made further changes to the systems it uses for “identifying, removing and preventing the trading of fake and/or misleading reviews on its platforms to ensure it is fulfilling its previous commitments”.

It’s not clear why it’s taken Facebook well over a year — and a number of high profile interventions — to dial up action against the trade in fake reviews. But the company suggested that the resources it has available to tackle the problem had been strained as a result of the COVID-19 pandemic and associated impacts, such as home working. (Facebook’s full year revenue increased in 2020 but so too did its expenses.)

According to the CMA, the changes Facebook has made to its systems for combating traders of fake reviews include:

  • suspending or banning users who are repeatedly creating Facebook groups and Instagram profiles that promote, encourage or facilitate fake and misleading reviews
  • introducing new automated processes that will improve the detection and removal of this content
  • making it harder for people to use Facebook’s search tools to find fake and misleading review groups and profiles on Facebook and Instagram
  • putting in place dedicated processes to make sure that these changes continue to work effectively and stop the problems from reappearing

Again it’s not clear why Facebook would not have already been suspending or banning repeat offenders — at least, not if it was actually taking good faith action to genuinely quash the problem, rather than seeing if it could get away with doing the bare minimum.

Commenting in a statement, Andrea Coscelli, chief executive of the CMA, essentially makes that point, saying: “Facebook has a duty to do all it can to stop the trading of such content on its platforms. After we intervened again, the company made significant changes — but it is disappointing it has taken them over a year to fix these issues.”

“We will continue to keep a close eye on Facebook, including its Instagram business. Should we find it is failing to honour its commitments, we will not hesitate to take further action,” Coscelli added.

A quick search on Facebook’s platform for UK groups trading in fake reviews appears to return fewer obviously dubious results than when we checked in on this problem in 2019 and 2020, although the results included a number of private groups, so it was not immediately possible to verify what content is being solicited from members.

We did also find a number of Facebook groups offering Amazon reviews intended for other European markets, such as France and Spain (and in one public group aimed at Amazon Spain we found someone offering a “fee” via PayPal for a review; see below screengrab) — suggesting Facebook isn’t applying the same level of attention to tackling fake reviews that are being traded by users in markets where it’s faced fewer regulatory pokes than it has in the UK.

Screengrab: TechCrunch

Facebook, Instagram, and WhatsApp are super broken right now (Update: but starting to work again)

Are Facebook, Instagram, and WhatsApp down for you right now? Us too! And lots and lots of other people too, it seems.

We’re getting reports left and right of outages across the three Facebook properties, with no indication so far as to the cause. It’s all down so hard that Facebook’s own server status page won’t even load to explain what’s up. Some of the respective mobile apps appear to load, but are just loading cached data; refresh or try to pull in a new page, and things probably won’t load correctly.

When Facebook on the web does load, it’s largely throwing an error message.

This outage comes just a few weeks after one that took out Instagram and WhatsApp in March.

(Update, 3:19 PM: It appears things are coming back online, about an hour after the outage first began.)

Lawmakers press Instagram for details on its plans for kids

A group of Democratic lawmakers wrote to Mark Zuckerberg this week to press the CEO on his plans to curate a version of Instagram for children. In a hearing last month, Zuckerberg confirmed reporting by BuzzFeed that the company was exploring an age-gated version of its app designed for young users.

Senators Ed Markey (D-MA), Richard Blumenthal (D-CT) and Representatives Lori Trahan (D-MA) and Kathy Castor (D-FL) signed the letter, expressing “serious concerns” about the company’s ability to protect the privacy and well-being of young users.

“Facebook has an obligation to ensure that any new platforms or projects targeting children put those users’ welfare first, and we are skeptical that Facebook is prepared to fulfill this obligation,” the lawmakers wrote.

They cited previous failures with products like Messenger Kids, which had a flaw that allowed kids to chat with people beyond their privacy parameters.

“Although software bugs are common, this episode illustrated the privacy threats to children online and evidenced Facebook’s inability to protect the kids the company actively invited onto this platform,” the lawmakers wrote.

“In light of these and other previous privacy and security issues on Facebook’s platforms, we are not confident that Facebook will be able to adequately protect children’s privacy on a version of Instagram for young users.”

The letter set a deadline of April 26 for the company to provide answers to a comprehensive and helpfully specific set of questions about a future kid-targeted product.

In the letter, lawmakers posed a number of questions about how Facebook will handle young users’ private data and whether that data would be deleted when an account is terminated. They also asked the company to commit to not targeting kids with ads and to not employing push alerts and behavior-shaping features designed to make apps more addictive.

During last month’s big tech hearing in the House, committee members from both political parties grilled Zuckerberg about how Facebook and Instagram adversely affect mental health in young users. Rep. Castor also pressed the chief executive about underage users who circumvent Instagram’s existing age guidelines to use a platform full of posts, videos and ads designed for adults.

“Of course, every parent knows there are kids under the age of 13 on Instagram, and the problem is that you know it,” Zuckerberg said.

Lowkey raises $7 million from a16z to help game streamers capitalize on short-form video

While the growth of game-streaming audiences has continued on desktop platforms, the streaming space has felt surprisingly stagnant at times, particularly due to the missing mobile element and a lack of startup competitors.

Lowkey, a gaming startup that builds software for game streamers, is aiming to build out opportunities in bite-sized clips on mobile. The startup wants to be a hub for both creating and viewing short gaming clips, but it also sees a big opportunity in helping streamers cut down their existing content for distribution on platforms like Instagram and TikTok, where short-form gaming content sees a good deal of engagement.

The startup announced today that it has closed a $7 million Series A led by Andreessen Horowitz, with participation from a host of angel investors, including Figma’s Dylan Field, Loom’s Joe Thomas and Plaid’s Zach Perret and William Hockey.

We last covered Lowkey in early 2020, when the company was looking to build out a games tournament platform for adults. The company had already pivoted once by then: it went through YC as Camelot, which let audiences on Twitch and YouTube pay creators to take on challenges. This latest shift brings Lowkey back to the streaming world, but more focused on becoming a tool for streamers and a hub for viewers.

Twitch and YouTube Gaming have proven pretty uninterested in short-form content, favoring long-form streams that let creators press broadcast and upload lengthy sessions. Lowkey users can easily upload footage captured with Lowkey’s desktop app or directly import a linked stream. This allows content creators to upload and comment on their own footage, or remix and respond to another streamer’s content.

One of the challenges for streamers has been adapting widescreen content for a vertical video form factor, but CEO Jesse Zhang says that it’s not really a problem with most modern games. “Games inherently want to focus your attention on the center of the screen,” Zhang tells TechCrunch. “So, almost all clips extend really cleanly to like a mobile format, which is what we’ve done.”
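Zhang’s point amounts to saying that a 16:9 gameplay frame can simply be center-cropped to a 9:16 portrait without losing the action. A minimal sketch of that crop, as our own illustration rather than Lowkey’s actual pipeline:

```python
# Center-crop a widescreen frame to a vertical 9:16 format.
# Our illustration of the idea Zhang describes; not Lowkey's pipeline.
import numpy as np

def center_crop_vertical(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    target_w = int(h * 9 / 16)   # width of a 9:16 portrait crop
    x0 = (w - target_w) // 2     # keep the center, where the action is
    return frame[:, x0:x0 + target_w]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a 16:9 gameplay frame
print(center_crop_vertical(frame).shape)  # (1080, 607, 3)
```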

Lowkey’s desktop app is available on Windows and their new mobile app is now live for iOS.

“Link-in-bio” company Linktree raises $45M Series B for its social commerce features

If you browse Instagram, you are probably familiar with the term “link in bio.” Links aren’t allowed in post captions, and users are only allowed one URL in their bios, so many create a simple website with multiple links for their followers. Linktree, one of the most popular “link in bio” services with more than 12 million users, announced today it has raised $45 million in Series B funding. The round was co-led by Index Ventures and Coatue, with participation from returning investors AirTree Ventures and Insight Partners.

Coatue chairman Dan Rose will join Linktree’s board of directors. The Sydney, Australia-based startup’s last round was a $10.7 million Series A announced in October 2020. Linktree’s latest funding will be used on tools that make social commerce easier.

Linktree says about a third of its users, or 4 million, signed up within the last three months. This is partly because people have been spending more time on social media and shopping online during the pandemic.

Founded in 2016, Linktree now competes with a roster of “link in bio” services, including Shorby, Linkin.bio and the recently launched Beacons.

“When we launched Linktree, we created an entirely new category. We were first to market and, with over 12 million users globally, still hold 88% of market share,” founder and chief executive officer Alex Zaccaria told TechCrunch. “Inevitably we’ve seen plenty of competitors pop up as a result, but part of the uniqueness of Linktree is its deceptively simple design.”

Zaccaria added that one of Linktree’s differentiators is its adoption by users in a wide range of categories, including health and wellness, real estate, sports, music, politics, publishing and food. It’s used for bio links by Shopify, Facebook, TikTok, YSL, HBO and Major League Baseball, and celebrities like Jonathan Van Ness, Jamie Oliver and Pharrell.

“We might have started as a link-in-bio tool, but over time Linktree has evolved and the platform has become a social identity layer of the internet. Our vision for how the platform will sit at the intersection of digital self-expression and action means we’re thinking boldly when it comes to our roadmap.”

Instagram adds new teen safety tools as competition with TikTok heats up

Earlier this year, TikTok made an update to its privacy settings and defaults to further lock down the app for its teenage users. This morning, Instagram followed suit with teen-focused privacy updates of its own. But the Facebook-owned social app didn’t choose to add more privacy to teen accounts by default, as TikTok did — it largely made it more difficult for adults to interact with the app’s teen users.

The company said it’s rolling out new safety features that restrict adult users from contacting teens who don’t already follow them. The exception to this rule still allows teens to interact with adult family members and other trusted adults they follow on the platform, like family friends. If an adult tries to DM a teen who doesn’t follow them, they’ll receive a notification informing them this isn’t possible.

And if a teen has already connected with an adult and is DM’ing with them, they’ll be notified if that adult exhibits suspicious behavior, like sending a large number of friend requests or messages to users under 18. The tool will then allow the teen to end the conversation, or to block, report or restrict the adult from further contact.

Image Credits: Instagram
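Stated as logic, the rule Instagram describes is roughly that an adult may DM a teen only if the teen already follows them. Here is a hedged sketch of that policy check; the types, field names and the numeric “suspicious behavior” threshold are invented for illustration, not Instagram’s code:

```python
# Sketch of the DM restriction as described; not Instagram's actual code.
from dataclasses import dataclass, field

@dataclass
class User:
    username: str
    age: int
    following: set[str] = field(default_factory=set)  # accounts this user follows

def can_dm(sender: User, recipient: User) -> bool:
    # Adults may only message under-18 users who already follow them.
    if sender.age >= 18 and recipient.age < 18:
        return sender.username in recipient.following
    return True

def flags_suspicious(requests_to_minors: int, threshold: int = 50) -> bool:
    # Instagram flags adults sending "a large number" of requests or
    # messages to under-18 users; this numeric threshold is invented.
    return requests_to_minors > threshold

teen = User("teen_account", 15, following={"aunt_carol"})
print(can_dm(User("aunt_carol", 40), teen))  # True: the teen follows them
print(can_dm(User("stranger", 40), teen))    # False: DM is blocked
```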

In addition, Instagram said it will make it more difficult for adults to find and follow teens in other places within the Instagram app, including Explore, Reels, and more. This will include restricting adults from seeing teen accounts in the “Suggested Users” section of the app, as well as hiding their comments on public posts.

The company also noted it’s developing new AI and machine learning-based technology to detect teens lying about their age on the app. That could allow these protections to be applied even when a teen misstated their birth date at sign-up, but the technology isn’t fully live yet.

Other additions rolling out as part of today’s updates include new safety resources for parents in the app’s Parents Guide and educational material for teens that will better explain what it means to have a public account on the app, and encourage them to choose private options.

Image Credits: Instagram

The launch timing here is notable, as TikTok has recently focused on making its platform safer for teens — not only with the changes to its default settings, but also with the addition of parental controls last year. The company last year took the unusual step of bundling a parental control mechanism directly into its app that lets a parent link to a child’s TikTok account to control their profile’s privacy, what they’re allowed to do on the app, and even which feed they can view. The company has continued to expand these controls following their launch, indicating that it considers these core features. By making privacy and parental controls a key part of the experience, the app is more likely to be blessed by parents who would otherwise restrict their teens’ social media access — and that helps TikTok grow its user base and teens’ time spent in the app, sometimes at Instagram’s expense.