Study finds about half of Americans get news on social media, but the percentage has dropped

A new report from Pew Research finds that around a third of U.S. adults continue to get their news regularly from Facebook, though the percentage has slipped from 36% in 2020 to 31% in 2021. This drop reflects a slight overall decline in the number of Americans who say they get their news from any social media platform — a figure that fell 5 percentage points year-over-year, from 53% in 2020 to 48% in 2021, Pew’s study found.

Here, “regularly” means the survey respondents said they get their news either “often” or “sometimes,” as opposed to “rarely,” “never,” or “don’t get digital news.”

The change comes at a time when tech companies have come under heavy scrutiny for allowing misinformation to spread across their platforms, Pew notes. That criticism has ramped up over the course of the pandemic, leading to vaccine hesitancy and refusal, which in turn has led to worsened health outcomes for many Americans who consumed the misleading information.

Despite these issues, the percentage of Americans who regularly get their news from various social media sites hasn’t changed too much over the past year, demonstrating how much a part of people’s daily news habits these sites have become.

Image Credits: Pew Research

In addition to the one-third of U.S. adults who regularly get their news on Facebook, 22% say they regularly get news on YouTube. Twitter and Instagram are regular news sources for 13% and 11% of Americans, respectively.

However, many of the sites have seen small declines as a regular source of news among their own users, says Pew. This measurement differs from the share of all U.S. adults who use a given site for news, as it speaks to how each site’s own user base perceives it — in effect, a gauge of the shifting news consumption habits of the often younger social media user.

Today, 55% of Twitter users regularly get news from the platform, compared with 59% last year. Meanwhile, the share of Reddit users who get news on the site dropped from 42% to 39% in 2021. YouTube fell from 32% to 30%, and Snapchat fell from 19% to 16%. Instagram held roughly steady, going from 28% in 2020 to 27% in 2021.

Only one social media platform grew as a news source during this time: TikTok.

In 2020, 22% of the short-form video platform’s users said they regularly got their news there, a figure that rose to 29% in 2021.

Overall, though, most of these sites have very little traction with the wider adult population in the U.S. Fewer than 1 in 10 Americans regularly get their news from Reddit (7%), TikTok (6%), LinkedIn (4%), Snapchat (4%), WhatsApp (3%) or Twitch (1%).

Image Credits: Pew Research

There are demographic differences in who uses which sites, as well.

White adults make up the majority of regular news consumers on Facebook and Reddit (60% and 54%, respectively). Black and Hispanic adults make up significant proportions of the regular news consumers on Instagram (20% and 33%, respectively). Younger adults tend to turn to Snapchat and TikTok, while the majority of news consumers on LinkedIn have four-year college degrees.

Of course, Pew’s latest survey, conducted from July 26 to Aug. 8, 2021, is based on self-reported data. That means people’s answers are based on how the users perceive their own usage of these various sites for newsgathering. This can produce different results compared with real-world measurements of how often users visited the sites to read news. Some users may underestimate their usage and others may overestimate it.

People may also not fully understand the ramifications of reading news on social media, where headlines and posts are often molded into inflammatory clickbait to entice engagement in the form of reactions and comments. This, in turn, may encourage strong reactions — but not necessarily from those worth listening to. Recent Pew studies have found that social media news consumers tend to be less knowledgeable about the facts on key news topics, like elections or Covid-19, and are more frequently exposed to fringe conspiracy theories (which is pretty apparent to anyone reading the comments!).

For the current study, the full sample size was 11,178 respondents, and the margin of sampling error was plus or minus 1.4 percentage points.
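For context, a survey’s margin of error follows directly from its sample size. A minimal sketch of the standard formula for a proportion, assuming simple random sampling (Pew’s reported ±1.4 figure is larger than this textbook value because it also accounts for the design effect of survey weighting):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Pew's full sample of 11,178 respondents:
simple = margin_of_error(11_178) * 100  # in percentage points
# ~0.93 points under simple random sampling; the reported +/-1.4
# would reflect additional variance from panel weighting.
```

A useful rule of thumb falls out of the square root: quadrupling the sample size only halves the margin of error.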

 

Facebook knows Instagram harms teens. Now, its plan to open the app to kids looks worse than ever

Facebook is in the hot seat again.

The Wall Street Journal published a powerful multi-part series on the company this week, drawing from internal documents on everything from the company’s secretive practice of whitelisting celebrities to its knowledge that Instagram is taking a serious toll on the mental health of teen girls.

The flurry of investigative pieces makes it clear that what Facebook says in public doesn’t always reflect what the company knows behind the scenes. The revelations still managed to shock, even though Facebook has played dumb about the various social ills it sows for years. (Remember when Mark Zuckerberg dismissed the notion that Facebook influenced the 2016 election as “crazy”?) Facebook’s longstanding PR playbook is to hide its dangers, publicly denying knowledge of its darker impacts on society even as internal research spells them out.

That’s all well and good until someone gets ahold of the internal research.

One of the biggest revelations from the WSJ’s report: The company knows that Instagram poses serious dangers to mental health in teenage girls. An internal research slide from 2019 acknowledged that “We make body image issues worse for one in three teen girls” — a shocking admission for a company charging ahead with plans to expand to even younger and more vulnerable age groups.

As recently as May, Instagram’s Adam Mosseri dismissed concerns around the app’s negative impact on teens as “quite small.”

But internally, the data told a different story. According to the WSJ, from 2019 to 2021 the company conducted a deep dive into teen mental health, including online surveys, diary studies, focus groups and large-scale questionnaires.

According to one internal slide, the findings showed that 32 percent of teenage girls reported that Instagram made them have a worse body image. Of research participants who experienced suicidal thoughts, 13 percent of British teens and 6 percent of American teens directly linked their interest in killing themselves to Instagram.

“Teens blame Instagram for increases in the rate of anxiety and depression,” another internal slide stated. “This reaction was unprompted and consistent across all groups.”

Following the WSJ report, Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) announced a probe into Facebook’s lack of transparency around internal research showing that Instagram poses serious and even lethal danger to teens. The Senate Subcommittee on Consumer Protection, Product Safety, and Data Security will launch the investigation.

“We are in touch with a Facebook whistleblower and will use every resource at our disposal to investigate what Facebook knew and when they knew it – including seeking further documents and pursuing witness testimony,” Senators Blackburn and Blumenthal wrote. “The Wall Street Journal’s blockbuster reporting may only be the tip of the iceberg.”

Blackburn and Blumenthal weren’t the only U.S. lawmakers alarmed by the new report. Sen. Ed Markey (D-MA) and Reps. Kathy Castor (D-FL) and Lori Trahan (D-MA) sent Facebook their own letter demanding that the company walk away from its plan to launch Instagram for kids. “Children and teens are uniquely vulnerable populations online, and these findings paint a clear and devastating picture of Instagram as an app that poses significant threats to young people’s wellbeing,” the lawmakers wrote.

 

In May, a group of 44 state attorneys general wrote to Instagram to encourage the company to abandon its plans to bring Instagram to kids under the age of 13. “It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the group of attorneys general wrote. They warned that an Instagram for kids would be “harmful for myriad reasons.”

In April, a collection of the same Democratic lawmakers expressed “serious concerns” about Instagram’s potential impact on the well-being of young users. That same month, a coalition of consumer advocacy organizations also demanded that the company reconsider launching a version of Instagram for kids.

According to the documents obtained by the WSJ, all of those concerns look extremely valid. In spite of extensive internal research and its deeply troubling findings, Facebook has publicly downplayed what it knows, even as regulators regularly pressed the company for answers.

Instagram’s Mosseri may have made matters worse Thursday when he made a less-than-flattering analogy between social media platforms and vehicles. “We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy,” Mosseri told Peter Kafka on Recode’s media podcast. “And I think social media is similar.”

Mosseri dismissed any comparison between social media and drugs or cigarettes in spite of social media’s well-researched addictive effects, likening social platforms to the auto industry instead. Naturally, the company’s many critics jumped on the car comparison, pointing out that cars are a leading cause of death and that the auto industry is heavily regulated — unlike social media.

Facebook revamps its business tool lineup following threats to its ad targeting business

Facebook today is announcing the launch of new products and features for business owners, following the threat to its ad targeting business posed by Apple’s new privacy features, which allow mobile users to opt out of being tracked across their iOS apps. The social networking giant has repeatedly argued that Apple’s changes would hurt small businesses that rely on Facebook ads to reach their customers, but it was not successful in getting any of Apple’s changes halted. Instead, the market is shifting to a new era focused more on user privacy, where personalization and targeting are opt-in experiences. That has required Facebook to address its business advertiser base in new ways.

As the ability to track consumers declines — very few consumers are opting into tracking, studies find — Facebook is rolling out new features that will allow businesses to better position themselves in front of relevant audiences. This includes updates that will let them reach customers, advertise to customers, chat with customers across Facebook apps, generate leads, acquire customers and more.

The company earlier this year began testing a way for customers to explore businesses from underneath News Feed posts by tapping on topics they’re interested in — like beauty, fitness or clothing — and browse content from other related businesses. The feature lets people come across new businesses they may also like, and it allows Facebook to build its own data set of users who like certain types of content. Over time, Facebook could even turn the feature into an ad unit, where businesses pay for higher placement.

But for the time being, Facebook will expand this feature to more users across the U.S., and launch it in Australia, Canada, Ireland, Malaysia, New Zealand, Philippines, Singapore, South Africa, and the U.K.

Image Credits: Facebook

Facebook is also making it easier for businesses to chat with customers. Businesses are already able to buy ads that encourage people to message them on Facebook’s various chat platforms — Messenger, Instagram Direct, or WhatsApp. Now, they’ll be able to choose all the messaging platforms where they’re available, and Facebook will pick the default chat app shown in the ad based on where the conversation is most likely to happen.

Image Credits: Facebook

The company will tie WhatsApp to Instagram, as well, as part of this effort. Facebook explains that many businesses market themselves or run shops across Instagram, but rely on WhatsApp to communicate with customers and answer questions. So, Facebook will now allow businesses to add a WhatsApp click-to-chat button to their Instagram profiles.

This change, in particular, represents another move that ties Facebook’s separate apps more closely together, at a time when regulators are considering breaking up the company over antitrust concerns. Facebook has already interconnected Messenger and Instagram’s messaging services, which would make such a disassembly more complicated. And more recently, it has begun integrating Messenger directly into Facebook’s platform itself.

Image Credits: Facebook

In a related change, soon businesses will be able to create ads that send users directly to WhatsApp from the Instagram app. (Facebook also already offers ads like this.)

Separately from this news, Facebook announced the launch of a new business directory on WhatsApp, allowing consumers to find shops and services on the chat platform, as well.

Another set of changes involves an update to Facebook Business Suite. Businesses will be able to manage email through Inbox and send remarketing emails; use a new File Manager for creating, managing, and posting content; and access a feature that lets them test different versions of a post to see which one is most effective.

Image Credits: Facebook

Other new products include tests of paid and organic lead generation tools on Instagram; quote requests on Messenger, where customers answer a few questions prior to their conversations; and a way for small businesses to access a bundle of tools to get started with Facebook ads, which includes a Facebook ad coupon along with three months of free access to QuickBooks or Canva Pro.

Image Credits: Facebook

Facebook will also begin testing something called “Work Accounts,” which will allow business owners to access their business products, like Business Manager, separately from their personal Facebook account. They’ll be able to manage these accounts on behalf of employees and use single-sign-on integrations.

Work Accounts will be tested through the remainder of the year with a small group of businesses, and Facebook says it expects to expand availability in 2022.

Other efforts it has in store include plans to incorporate more content from creators and local businesses and new features that let users control the content they see, but these changes were not detailed at this time.

Most of the products being announced are either rolling out today or will begin to show up soon.

The FDA should regulate Instagram’s algorithm as a drug

The Wall Street Journal on Tuesday reported Silicon Valley’s worst-kept secret: Instagram harms teens’ mental health; in fact, its impact is so negative that it introduces suicidal thoughts.

Thirty-two percent of teen girls who feel bad about their bodies report that Instagram makes them feel worse. Of teens with suicidal thoughts, 13% of British and 6% of American users trace those thoughts to Instagram, the WSJ report said. This is Facebook’s internal data. The truth is surely worse.

President Theodore Roosevelt and Congress formed the Food and Drug Administration in 1906 precisely because Big Food and Big Pharma failed to protect the general welfare. As its executives parade at the Met Gala in celebration of the unattainable 0.01% of lifestyles and bodies that we mere mortals will never achieve, Instagram’s unwillingness to do what is right is a clarion call for regulation: The FDA must assert its codified right to regulate the algorithm powering the drug of Instagram.

The FDA should consider algorithms a drug impacting our nation’s mental health: The Federal Food, Drug and Cosmetic Act gives the FDA the right to regulate drugs, defining drugs in part as “articles (other than food) intended to affect the structure or any function of the body of man or other animals.” Instagram’s internal data shows its technology is an article that alters our brains. If this effort fails, Congress and President Joe Biden should create a mental health FDA.

The public needs to understand what Facebook and Instagram’s algorithms prioritize. Our government is equipped to study clinical trials of products that can physically harm the public. Researchers can study what Facebook privileges and the impact those decisions have on our minds. How do we know this? Because Facebook is already doing it — they’re just burying the results.

In November 2020, as Cecilia Kang and Sheera Frenkel report in “An Ugly Truth,” Facebook made an emergency change to its News Feed, putting more emphasis on “News Ecosystem Quality” scores (NEQs). High NEQ sources were trustworthy sources; low were untrustworthy. Facebook altered the algorithm to privilege high NEQ scores. As a result, for five days around the election, users saw a “nicer News Feed” with less fake news and fewer conspiracy theories. But Mark Zuckerberg reversed this change because it led to less engagement and could cause a conservative backlash. The public suffered for it.
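Mechanically, privileging high-NEQ sources amounts to re-ranking the feed by a blend of predicted engagement and source trust. A minimal sketch, assuming a simple linear blend (the `neq` scores, `engagement` values and `neq_weight` below are hypothetical illustrations, not Facebook’s actual signals):

```python
# Hypothetical sketch of score-based re-ranking: boost posts whose
# sources carry high "News Ecosystem Quality" (NEQ) scores.
def rerank(posts, neq_weight=2.0):
    """Sort posts by predicted engagement plus a trust bonus.

    Each post is a dict with an illustrative 'engagement' prediction
    and the 'neq' score (0..1) of its source.
    """
    return sorted(
        posts,
        key=lambda p: p["engagement"] + neq_weight * p["neq"],
        reverse=True,
    )

feed = rerank([
    {"id": "conspiracy", "engagement": 0.9, "neq": 0.1},
    {"id": "wire_story", "engagement": 0.5, "neq": 0.9},
])
# With neq_weight=2.0 the trustworthy wire story outranks the
# higher-engagement conspiracy post; with neq_weight=0 it would not.
```

The business tension the book describes falls out of the weight: turning `neq_weight` back down restores the engagement-maximizing (and conspiracy-friendlier) ordering.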

Facebook likewise has studied what happens when the algorithm privileges content that is “good for the world” over content that is “bad for the world.” Lo and behold, engagement decreases. Facebook knows that its algorithm has a remarkable impact on the minds of the American public. How can the government let one man decide the standard based on his business imperatives, not the general welfare?

Upton Sinclair memorably uncovered dangerous abuses in “The Jungle,” which led to a public outcry. The free market failed. Consumers needed protection. The 1906 Pure Food and Drug Act for the first time promulgated safety standards, regulating consumable goods impacting our physical health. Today, we need to regulate the algorithms that impact our mental health. Teen depression has risen alarmingly since 2007. Likewise, suicide among those aged 10 to 24 rose nearly 60% between 2007 and 2018.

It is of course impossible to prove that social media is solely responsible for this increase, but it is absurd to argue it has not contributed. Filter bubbles distort our views and make them more extreme. Bullying online is easier and constant. Regulators must audit the algorithm and question Facebook’s choices.

When it comes to the biggest issue Facebook poses — what the product does to us — regulators have struggled to articulate the problem. Section 230 is correct in its intent and application; the internet cannot function if platforms are liable for every user utterance. And a private company like Facebook loses the trust of its community if it applies arbitrary rules that target users based on their background or political beliefs. Facebook as a company has no explicit duty to uphold the First Amendment, but public perception of its fairness is essential to the brand.

Thus, Zuckerberg has equivocated over the years before belatedly banning Holocaust deniers, Donald Trump, anti-vaccine activists and other bad actors. Deciding what speech is privileged or allowed on its platform, Facebook will always be too slow to react, overcautious and ineffective. Zuckerberg cares only for engagement and growth. Our hearts and minds are caught in the balance.

The most frightening part of “An Ugly Truth,” the passage that got everyone in Silicon Valley talking, was the memo that inspired the book’s title: Andrew “Boz” Bosworth’s 2016 “The Ugly.”

In the memo, Bosworth, Zuckerberg’s longtime deputy, writes:

“So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”

Zuckerberg and Sheryl Sandberg made Bosworth walk back his statements when employees objected, but to outsiders, the memo represents the unvarnished id of Facebook, the ugly truth. Facebook’s monopoly, its stranglehold on our social and political fabric, its growth at all costs mantra of “connection,” is not de facto good. As Bosworth acknowledges, Facebook causes suicides and allows terrorists to organize. This much power concentrated in the hands of one corporation, run by one man, is a threat to our democracy and way of life.

Critics of FDA regulation of social media will claim this is a Big Brother invasion of our personal liberties. But what is the alternative? Why would it be bad for our government to demand that Facebook account to the public for its internal calculations? Is it safe for the number of sessions, time spent and revenue growth to be the only results that matter? What about the collective mental health of the country and world?

Refusing to study the problem does not mean it does not exist. In the absence of action, we are left with a single man deciding what is right. What is the price we pay for “connection”? This is not up to Zuckerberg. The FDA should decide.

TikTok expands mental health resources, as negative reports of Instagram’s effect on teens leak

TikTok announced this morning that it is implementing new tactics to educate its users about the negative mental health impacts of social media. As part of these changes, TikTok is rolling out a “well-being guide” in its Safety Center, a brief primer on eating disorders, expanded search interventions, and opt-in viewing screens on potentially triggering searches.

Developed in collaboration with the International Association for Suicide Prevention, Crisis Text Line, Live For Tomorrow, Samaritans of Singapore, and Samaritans (UK), the new well-being guide offers advice targeted at people using TikTok, encouraging users to consider how it might impact them to share their mental health stories on a platform where any post has the potential to go viral. TikTok wants users to think about why they’re sharing their experience, whether they’re ready for a wider audience to hear their story, whether sharing could be harmful to them, and whether they’re prepared to hear others’ stories in response.

The platform also added a brief, albeit generic memo about the impact of eating disorders under the “topics” section of the Safety Center, which was developed with the National Eating Disorders Association (NEDA). NEDA has a long track record of collaborating with social media platforms, most recently working with Pinterest to prohibit ads promoting weight loss.

Already, TikTok directs users to local resources when they search for words or phrases like #suicide,* but now, the platform will also share content from creators with the intent of helping someone in need. The platform told TechCrunch that it chose this content following consultation with independent experts. Additionally, if someone enters a search phrase that might be alarming (TikTok offered “scary makeup” as an example), the content will be blurred out, asking users to opt-in to see the search results.

As TikTok unveils these changes, its competitor Instagram is facing scrutiny after The Wall Street Journal published leaked documents that reveal its parent company Facebook’s own research on the harm Instagram poses to teen girls. Similar to the Gen Z-dominated TikTok, more than 40% of Instagram’s users are 22 or younger, and 22 million teens log into Instagram in the U.S. each day. In one anecdote, a 19-year-old interviewed by The Wall Street Journal said that after she searched Instagram for workout ideas, her explore page was flooded with photos about how to lose weight (Instagram has previously fessed up to errors with its search function, which recommended that users search topics like “fasting” and “appetite suppressants”). Angela Guarda, director of the eating-disorders program at Johns Hopkins Hospital, told The Wall Street Journal that her patients often say they learned about dangerous weight loss tactics via social media.

“The question on many people’s minds is if social media is good or bad for people. The research on this is mixed; it can be both,” Instagram wrote in a blog post today.

As TikTok nods to with its advice on sharing mental health stories, social media can often be a positive resource, allowing people who are dealing with certain challenges to learn from others who have gone through similar experiences. So, despite these platforms’ outsized influence, it’s also on real people to think twice about what they post and how it might influence others. Even when Facebook experimented with hiding the number of “likes” on Instagram, employees said that it didn’t improve overall user well-being. These revelations about the negative impact of social media on mental health and body image aren’t groundbreaking, but they put renewed pressure on these powerful platforms to think about how to support their users (or, at the very least, to add some new memos to their safety centers).

*If you or someone you know is struggling with depression or has had thoughts of harming themselves or taking their own life, The National Suicide Prevention Lifeline (1-800-273-8255) provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations.

Instagram is building a ‘Favorites’ feature so you don’t miss important posts

Instagram confirmed it’s developing a new feature called “Favorites,” which would allow users to select certain accounts whose posts they would like to see higher in their feed. A similar feature already exists on Facebook, where it gives users a bit more control over the News Feed algorithm: users can select up to 30 friends or Facebook Pages whose posts get shown higher in the News Feed. It’s unclear what limit an Instagram Favorites feature would have, however.

The Instagram Favorites feature was recently spotted in development by reverse engineer Alessandro Paluzzi, who found a new pushpin icon for Favorites in the Instagram Settings menu, and other details about how the feature may work.

According to screenshots Paluzzi posted on Twitter, users will be able to search across the Instagram accounts they are currently following to create a list of Favorites. This list can be edited at any time, and Instagram notes that users would not be notified when they’re added to someone’s Favorites.

This offers a similar level of privacy to Instagram’s years-old “Close Friends” feature, which instead lets users create a separate list of followers so they can share their more private and personal Instagram Stories with a select group of their own choosing.

Paluzzi tells us he was able to add contacts to the Favorites list, but didn’t yet notice any changes to the Instagram feed after doing so. That implies the feature is still being built and a launch is not imminent.

“This feature is an internal prototype that’s still in development, and not testing externally,” an Instagram spokesperson told TechCrunch. They declined to share any other specifics about the feature.

A Favorites feature could play into Instagram’s larger plans to better establish itself as a home for creator content. In other leaks, Paluzzi had also found the company was building out “Fan Subscriptions,” which would allow users to pay for elevated access to creator content — like exclusive live videos or Stories, for example. Paid subscribers may also be given a special badge that would highlight their name when they commented, DM’ed, or viewed the creator’s Stories.

Given that users who were paying for content would not want to miss a moment, it would make sense to give them tools to designate those creators as “Favorites” whose posts were also more highly ranked in their Feed.

A Favorites feature could also be useful to those who had taken a break from Instagram and would rather see the important photos and videos they missed from favorite accounts upon their return, rather than just the most recent or interesting updates from across all of the accounts they follow.

And while not likely the main goal, the new feature could help to address users’ complaints about the algorithmic feed in general.

Today, there are still a number of people who want to be able to see Instagram posts in chronological order, preferring to not have posts re-ordered by an algorithm they can’t control. Favorites wouldn’t give in to this demand (though Instagram has tested a chronological feed in the past). But it would at least give users the ability to ensure they weren’t missing the posts from those whose updates they wanted to see the most.

Though Instagram did say it’s working on the development of Favorites, it doesn’t necessarily mean such a feature will launch to the public. Companies of Instagram’s size often prototype new ideas, but only some of those tests make it to a general release.

SpotOn raises $300M at a $3.15B valuation and acquires Appetize

Last year at this time, SpotOn was on the brink of announcing a $60 million Series C funding round at a $625 million valuation.

Fast-forward to almost exactly one year later and a lot has changed for the payments and software startup.

Today, SpotOn said it has closed on $300 million in Series E financing that values the company at $3.15 billion — more than 5x its valuation at the time of its Series C round, and significantly higher than its $1.875 billion valuation in May (yes, just three and a half months ago) when it raised $125 million in a Series D funding event.

Andreessen Horowitz (a16z) led both the Series D and E rounds for the company, which says it has seen 100% growth year over year and a tripling in revenue over the past 18 months. Existing investors DST Global, 01 Advisors, Dragoneer Investment Group, Franklin Templeton and Mubadala Investment Company also doubled down on their investments in SpotOn, joining new backers Wellington Management and Coatue Management. Advisors Douglas Merritt, CEO of Splunk, and Mike Scarpelli, CFO of Snowflake, also made individual investments as angels. With the new capital, SpotOn has raised $628 million since its inception.

The latest investment is being used to finance the acquisition of another company in the space — Appetize, a digital and mobile commerce payments platform for enterprises such as sports and entertainment venues, theme parks and zoos. SpotOn is paying $415 million in cash and stock for the Los Angeles-based company.

Since its 2017 inception, SpotOn has been focused on providing software and payments technology to SMBs with an emphasis on restaurants and retail businesses. The acquisition of Appetize extends SpotOn’s reach to the enterprise space in a major way. Appetize will go to market as SpotOn and will work to grow its client base, which already includes an impressive list of companies and organizations, including Live Nation, LSU, Dodger Stadium and Urban Air. 

In fact, Appetize currently covers 65% of all major league sports stadiums, specializing in contactless payments, mobile ordering and menu management. So for example, when you’re ordering food at a game or concert, Appetize’s technology makes it easier to pay in a variety of contactless ways through point of sale (POS) devices, self-service kiosks, handheld devices, online ordering, mobile web and API integrations.

Image Credits: SpotOn

SpotOn is taking on the likes of Square in the payments space. But the company says its offering extends beyond traditional payment processing and point-of-sale software. Its platform aims to give SMBs the ability to run their businesses “from building a brand to taking payments and everything in between.” SpotOn’s goal is to be a “one-stop shop” by incorporating tools such as custom website development, appointment scheduling, marketing, review management, analytics and digital loyalty.

The combined company will have 1,600 employees — 1,300 from SpotOn and 300 from Appetize. SpotOn will now have over 500 employees on its product and technology team, according to co-founder and co-CEO Zach Hyman. It will also have clients in the tens of thousands, a number that SpotOn says is growing by “thousands more every month.”

The acquisition is not the first for SpotOn, which also acquired SeatNinja last year and Emagine in 2018.

But in Appetize it saw a company that was complementary both in its go-to-market and tech stacks, and a “natural fit.”

“SMEs are going to benefit from the scalable tech that can grow with them, including things like kiosks and offline modes, and for the enterprise clients of Appetize, they’re going to be able to leverage products like sophisticated loyalty programs and extended marketing capabilities,” Hyman told TechCrunch. 

SpotOn was not necessarily planning to raise another round so soon, Hyman added, but the opportunity came up to acquire Appetize.

“We spent a lot of time together, and it was too compelling to pass up,” he told TechCrunch.

For its part, Appetize — which has raised over $77 million over its lifetime, according to Crunchbase — also saw the combination as a logical one.

“It was important to us to retain a stake in the business. We were not looking to cash out,” said Appetize CEO Max Roper. “We are deeply invested in growing the business together. It’s a big win for our team and our clients over the long term. This is a rocketship that we are excited to be on.” 

No doubt the COVID-19 pandemic has only emphasized the need for more digital offerings for small businesses and enterprises alike.

“There has been a high demand for our services and now as businesses are faced with a Covid resurgence, no one is closing down,” Hyman said. “So they see a responsibility to install the necessary technology to properly run their business.”

One of the moves SpotOn has made, for example, is launching a vaccination alert system in its reservation management software platform to make it easier for consumers to confirm they are vaccinated in cities and states that have such requirements.

Clearly, a16z General Partner David George was also bullish on the idea of a combined company.

He told TechCrunch that the two companies fit together “extremely nicely.”

“It felt like a no-brainer for us to want to lead the round, and continue to support them,” George said.

Since a16z first invested in SpotOn in May, the startup’s growth has “exceeded” the firm’s expectations, he added.

“When companies are growing as fast as it is organically, they don’t need to rely on acquisitions to fuel growth,” he said. “But the strategic rationale here is so strong, that the acquisition will only turbocharge what is already high growth.”

While the Series E capital is primarily funding the acquisition, SpotOn continues to double down on its product and technology.

“This is our time to shine and invest in the future with forward-thinking technology,” Hyman told TechCrunch. “We’re thinking about things like how are consumers going to be ordering their beer at a Dodgers game in three years? Are they going to be standing in line for 25 minutes or are they going to be interacting and buying merchandise in other unique ways? Those are the things we’re looking to solve for.”

Equity Monday: Women’s employment drops, as Delta’s drama continues

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast where we unpack the numbers behind the headlines.

This is Equity Monday, our weekly kickoff to catch up on weekend news and prep for the days ahead. We’re here on Tuesday this week since those of us in the United States had Monday off for Labor Day. You can follow the show on Twitter here, and while you’re at it, throw me a follow too.

  • Jobs report: Over the weekend, the US government posted the jobs report. It wasn’t ideal, with a sharp drop in the percentage of women rejoining the workforce. I give you the startup angle, and talk about a somewhat poetic unicorn.
  • Instacart, meet Instagram: WSJ reports that new Instacart CEO Fidji Simo is expanding the grocery delivery company’s consumer-product advertising business, with a goal of hitting $1 billion in revenue next year. I riff on why this makes sense and what challenges the business may come up against.
  • Behemoths, beware: The largest Series A within Africa just closed, and it’s not even close. Wave is taking on telecom-led mobile money, now with four big-name backers. It’s not the only startup trying to take on a behemoth. I also gave a shout-out to Glass, which wants to take on Instagram as a new go-to destination for photographers to share their content.

And that’s a wrap. I have a fun edtech piece coming out on Extra Crunch this week, so keep your eyes out for it.

Equity drops every Monday at 7:00 a.m. PST and Wednesday and Friday at 6:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts!

Playbyte’s new app aims to become the ‘TikTok for games’

A startup called Playbyte wants to become the TikTok for games. The company’s newly launched iOS app offers tools that allow users to make and share simple games on their phone, as well as a vertically scrollable, fullscreen feed where you can play the games created by others. Also like TikTok, the feed becomes more personalized over time to serve up more of the kinds of games you like to play.

While game creation typically involves some amount of coding, Playbyte’s games are created using simple building blocks, emoji and even images from your iPhone’s Camera Roll. The idea is to make building games just another form of self-expression, rather than some introductory, educational experience that’s trying to teach users the basics of coding.

At its core, Playbyte’s game creation is powered by its lightweight 2D game engine built on web frameworks, which lets users create games that can be quickly loaded and played even on slow connections and older devices. After you play a game, you can like and comment using buttons on the right side of the screen, which also greatly resembles the TikTok look and feel. Over time, Playbyte’s feed shows you more of the games you enjoy, as the app leverages its understanding of in-game imagery, tags and descriptions, and other engagement analytics to serve up more games it believes you’ll find compelling.

At launch, users have already made a variety of games using Playbyte’s tools — including simulators, tower defense games, combat challenges, obbys, murder mystery games, and more.

According to Playbyte founder and CEO Kyle Russell — previously of Skydio, Andreessen Horowitz, and (disclosure!) TechCrunch — Playbyte is meant to be a social media app, not just a games app.

“We have this model in our minds for what is required to build a new social media platform,” he says.

What Twitter did for text, Instagram did for photos and TikTok did for video was to combine a constraint with a personalized feed, Russell explains. “Typically, [they started] with a focus on making these experiences really brief… So a short, constrained format and dedicated tools that set you up for success to work within that constrained format,” he adds.

Similarly, Playbyte games have their own set of limitations. In addition to their simplistic nature, the games are limited to five scenes. Thanks to this constraint, a format has emerged where people are making games that have an intro screen where you hit “play,” a story intro, a challenging gameplay section, and then a story outro.

In addition to its easy-to-use game building tools, Playbyte also allows game assets to be reused by other game creators. That means if someone with more expertise makes a game asset using custom logic, or by piecing together multiple components, the rest of the user base can benefit from that work.

“Basically, we want to make it really easy for people who aren’t as ambitious to still feel like productive, creative game makers,” says Russell. “The key to that is going to be if you have an idea — like an image of a game in your mind — you should be able to very quickly search for new assets or piece together other ones you’ve previously saved. And then just drop them in and mix-and-match — almost like Legos — and construct something that’s 90% of what you imagined, without any further configuration on your part,” he says.

In time, Playbyte plans to monetize its feed with brand advertising, perhaps by allowing creators to drop sponsored assets into their games, for instance. It also wants to establish some sort of patronage model at a later point. This could involve either subscriptions or even NFTs of the games, but this would be further down the road.

The startup originally began as a web app in 2019, but at the end of last year the team scrapped that plan and rewrote everything as a native iOS app with its own game engine. That app launched on the App Store this week, after previously maxing out TestFlight’s cap of 10,000 users.

Currently, it’s finding traction with younger teenagers who are active on TikTok and in collaborative games like Roblox, Minecraft and Fortnite.

“These are young people who feel inspired to build their own games but have been intimidated by the need to learn to code or use other advanced tools, or who simply don’t have a computer at home that would let them access those tools,” notes Russell.

Playbyte is backed by $4 million in pre-seed and seed funding from investors including FirstMark (Rick Heitzmann), Ludlow Ventures (Jonathon Triest and Blake Robbins), Dream Machine (former Editor-in-Chief at TechCrunch, Alexia Bonatsos), and angels such as Fred Ehrsam, co-founder of Coinbase; Nate Mitchell, co-founder of Oculus; Ashita Achuthan, previously of Twitter; and others.

The app is a free download on the App Store.

UK now expects compliance with children’s privacy design code

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year; however, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance in order to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services where the code applies can include connected toys and games and edtech but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy-hostile defaults).

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under 18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.

A few days later, TikTok also said it would add more privacy protections for teens, though it had already made earlier changes limiting privacy defaults for under 18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt-in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s also being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its-kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls, with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further guidance to organizations in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will support, with Bonner suggesting it could actually involve “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned). 

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction of the data minimization push in the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content or pro-suicide postings, or cyberbullying and CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.