Study finds about half of Americans get news on social media, but the percentage has dropped

A new report from Pew Research finds that around a third of U.S. adults continue to get their news regularly from Facebook, though the exact percentage has slipped from 36% in 2020 to 31% in 2021. This drop reflects an overall slight decline in the number of Americans who say they get their news from any social media platform — a figure that also fell 5 percentage points year-over-year, from 53% in 2020 to 48% in 2021, Pew’s study found.

In the survey, “regularly” means the respondents said they get their news either “often” or “sometimes,” as opposed to “rarely,” “never,” or “don’t get digital news.”

The change comes at a time when tech companies have come under heavy scrutiny for allowing misinformation to spread across their platforms, Pew notes. That criticism has ramped up over the course of the pandemic, as misinformation has fueled vaccine hesitancy and refusal, which in turn has worsened health outcomes for many Americans who consumed it.

Despite these issues, the percentage of Americans who regularly get their news from various social media sites hasn’t changed too much over the past year, demonstrating how much a part of people’s daily news habits these sites have become.

Image Credits: Pew Research

In addition to the one-third of U.S. adults who regularly get their news on Facebook, 22% say they regularly get news on YouTube. Twitter and Instagram are regular news sources for 13% and 11% of Americans, respectively.

However, many of the sites have seen small declines as a regular source of news among their own users, says Pew. This measurement differs from the share of all U.S. adults who use the sites for news; it speaks to how each site’s own user base perceives it as a news destination. In a way, it measures the shifting news consumption behaviors of the often younger social media user, more specifically.

Today, 55% of Twitter users regularly get news from its platform, compared with 59% last year. Meanwhile, Reddit users’ use of the site for news dropped from 42% to 39% in 2021. YouTube fell from 32% to 30%, and Snapchat fell from 19% to 16%. Instagram is roughly flat, going from 28% in 2020 to 27% in 2021.

Only one social media platform grew as a news source during this time: TikTok.

In 2020, 22% of the short-form video platform’s users said they regularly got their news there; in 2021, that figure rose to 29%.

Overall, though, most of these sites have very little traction with the wider adult population in the U.S. Fewer than 1 in 10 Americans regularly get their news from Reddit (7%), TikTok (6%), LinkedIn (4%), Snapchat (4%), WhatsApp (3%) or Twitch (1%).

Image Credits: Pew Research

There are demographic differences between who uses which sites, as well.

White adults tend to turn to Facebook and Reddit for news (60% and 54%, respectively). Black and Hispanic adults make up significant proportions of the regular news consumers on Instagram (20% and 33%, respectively). Younger adults tend to turn to Snapchat and TikTok, while the majority of news consumers on LinkedIn have four-year college degrees.

Of course, Pew’s latest survey, conducted from July 26 to Aug. 8, 2021, is based on self-reported data. That means people’s answers are based on how the users perceive their own usage of these various sites for newsgathering. This can produce different results compared with real-world measurements of how often users visited the sites to read news. Some users may underestimate their usage and others may overestimate it.

People may also not fully understand the ramifications of reading news on social media, where headlines and posts are often molded into inflammatory clickbait in order to entice engagement in the form of reactions and comments. This, in turn, may encourage strong reactions — but not necessarily from those worth listening to. Recent Pew studies found that social media news consumers tended to be less knowledgeable about the facts of key news topics, like elections or COVID-19. And social media consumers were more frequently exposed to fringe conspiracy theories (which is pretty apparent to anyone reading the comments!).

For the current study, the full sample size was 11,178 respondents, and the margin of sampling error was plus or minus 1.4 percentage points.

 

Cameo launches Cameo Calls, a service for fans to video chat with celebs

If you really want to video chat tonight with William Hung of retro American Idol fame… got twenty bucks to spare? Yesterday, Cameo launched its Cameo Calls product, which lets fans video chat one-on-one for up to 15 minutes with their favorite influencers and celebrities. The talent sets the duration, time, and price of their call, which Cameo says averages around $31.

To book a call, users can go to Cameo’s website or app to see a schedule of upcoming Cameo Calls that they can buy. These also appear on individual talent’s Cameo pages. When you purchase a Cameo Call, you get a unique ticket code that you enter on the app to join your call.

In June 2020, Cameo enabled users to book Zoom calls with celebrities as lockdown became a global norm, but Cameo phased out that feature in April. Instead, Cameo Calls now offers a native experience in the app, rather than relying on third-party software. The downside for consumers, though, is that this makes it more difficult to invite your favorite reality star to your office’s Zoom happy hour. But on the bright side, Cameo Calls includes a dedicated photo op at the end of the call, so you can get your celebrity selfie without dealing with the awkwardness of asking to take a photo.

Experiences like Cameo Calls make sense in light of the COVID-19 pandemic, when celebrity meet-and-greets might not be safe in many places. But Cameo also thinks this product can stand in for a typical meet and greet even in “normal” times. Often, celebrity meet-and-greets require waiting in a long line only to get 5 or 10 seconds with the talent. Even though many Cameo Calls sessions are only a few minutes long, you might get a more personal experience than if you were the 100th fan in a long line in person.

“We foresee Cameo Calls replacing meet and greets at music festivals and world tours, fan conventions, sporting events, and more,” said Cameo Co-founder & CEO Steven Galanis.

Cameo says it tested this product with over 3,000 calls — during testing, talent hosted themed meet-and-greets, coffee chats, private concerts, and tarot card readings. Some performers who tested the feature include James and Oliver Phelps, who played the Weasley twins in the Harry Potter movies, and David Henrie, a former Disney Channel star.

Facebook knows Instagram harms teens. Now, its plan to open the app to kids looks worse than ever

Facebook is in the hot seat again.

The Wall Street Journal published a powerful multi-part series on the company this week, drawing from internal documents on everything from the company’s secretive practice of whitelisting celebrities to its knowledge that Instagram is taking a serious toll on the mental health of teen girls.

The flurry of investigative pieces makes it clear that what Facebook says in public doesn’t always reflect the company’s knowledge on known issues behind the scenes. The revelations still managed to shock even though Facebook has been playing dumb about the various social ills it sows for years. (Remember when Mark Zuckerberg dismissed the notion that Facebook influenced the 2016 election as “crazy?”) Facebook’s longstanding PR playbook is to hide its dangers, denying knowledge of its darker impacts on society publicly, even as research spells them out internally.

That’s all well and good until someone gets ahold of the internal research.

One of the biggest revelations from the WSJ’s report: The company knows that Instagram poses serious dangers to mental health in teenage girls. An internal research slide from 2019 acknowledged that “We make body image issues worse for one in three teen girls” — a shocking admission for a company charging ahead with plans to expand to even younger and more vulnerable age groups.

As recently as May, Instagram’s Adam Mosseri dismissed concerns around the app’s negative impact on teens as “quite small.”

But internally, the data told a different story. According to the WSJ, from 2019 to 2021, the company conducted a thorough deep dive into teen mental health including online surveys, diary studies, focus groups and large-scale questionnaires.

According to one internal slide, the findings showed that 32 percent of teenage girls reported that Instagram made them have a worse body image. Of research participants who experienced suicidal thoughts, 13 percent of British teens and 6 percent of American teens directly linked their interest in killing themselves to Instagram.

“Teens blame Instagram for increases in the rate of anxiety and depression,” another internal slide stated. “This reaction was unprompted and consistent across all groups.”

Following the WSJ report, Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) announced a probe into Facebook’s lack of transparency around internal research showing that Instagram poses serious and even lethal danger to teens. The Senate Subcommittee on Consumer Protection, Product Safety, and Data Security will launch the investigation.

“We are in touch with a Facebook whistleblower and will use every resource at our disposal to investigate what Facebook knew and when they knew it – including seeking further documents and pursuing witness testimony,” Senators Blackburn and Blumenthal wrote. “The Wall Street Journal’s blockbuster reporting may only be the tip of the iceberg.”

Blackburn and Blumenthal weren’t the only U.S. lawmakers alarmed by the new report. Sen. Ed Markey (D-MA), Rep. Kathy Castor (D-FL), and Rep. Lori Trahan (D-MA) sent Facebook their own letter demanding that the company walk away from its plan to launch Instagram for kids. “Children and teens are uniquely vulnerable populations online, and these findings paint a clear and devastating picture of Instagram as an app that poses significant threats to young people’s wellbeing,” the lawmakers wrote.

 

In May, a group of 44 state attorneys general wrote to Instagram to encourage the company to abandon its plans to bring Instagram to kids under the age of 13. “It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the group of attorneys general wrote. They warned that an Instagram for kids would be “harmful for myriad reasons.”

In April, a collection of the same Democratic lawmakers expressed “serious concerns” about Instagram’s potential impact on the well-being of young users. That same month, a coalition of consumer advocacy organizations also demanded that the company reconsider launching a version of Instagram for kids.

According to the documents obtained by the WSJ, all of those concerns look extremely valid. In spite of extensive internal research and its deeply troubling findings, Facebook has publicly downplayed what it knows, even as regulators regularly pressed the company on the issue.

Instagram’s Mosseri may have made matters worse Thursday when he made a less than flattering analogy between social media platforms and vehicles. “We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy,” Mosseri told Peter Kafka on Recode’s media podcast. “And I think social media is similar.”

Mosseri dismissed any comparison between social media and drugs or cigarettes in spite of social media’s well-researched addictive effects, likening social platforms to the auto industry instead. Naturally, the company’s many critics jumped on the car comparison, pointing to cars’ widespread lethality and the fact that the auto industry is heavily regulated — unlike social media.

Facebook revamps its business tool lineup following threats to its ad targeting business

Facebook today is announcing the launch of new products and features for business owners, following the threat to its ad targeting business driven by Apple’s new privacy features, which now allow mobile users to opt out of being tracked across their iOS apps. The social networking giant has repeatedly argued that Apple’s changes would impact small businesses that relied on Facebook ads to reach their customers. But it was not successful in getting any of Apple’s changes halted. Instead, the market is shifting to a new era focused more on user privacy, where personalization and targeting are more of an opt-in experience. That’s required Facebook to address its business advertiser base in new ways.

As the ability to track consumers declines — very few consumers are opting into tracking, studies find — Facebook is rolling out new features that will allow businesses to better position themselves in front of relevant audiences. This includes updates that will let them reach customers, advertise to customers, chat with customers across Facebook apps, generate leads, acquire customers and more.

The company earlier this year began testing a way for customers to explore businesses from underneath News Feed posts by tapping on topics they were interested in — like beauty, fitness or clothing — and browse content from other, related businesses. The feature lets people come across new businesses they may also like, and it allows Facebook to build its own data set of users who like certain types of content. Over time, it could possibly even turn the feature into an ad unit, where businesses could pay for higher placement.

But for the time being, Facebook will expand this feature to more users across the U.S., and launch it in Australia, Canada, Ireland, Malaysia, New Zealand, Philippines, Singapore, South Africa, and the U.K.

Image Credits: Facebook

Facebook is also making it easier for businesses to chat with customers. They’re already able to buy ads that encourage people to message them on Facebook’s various chat platforms — Messenger, Instagram Direct, or WhatsApp. Now, they’ll be able to list all the messaging platforms where they’re available, and Facebook will choose which chat app to showcase in the ad by default, based on where the conversation is most likely to happen.

Image Credits: Facebook

The company will tie WhatsApp to Instagram, as well, as part of this effort. Facebook explains that many businesses market themselves or run shops across Instagram, but rely on WhatsApp to communicate with customers and answer questions. So, Facebook will now allow businesses to add a WhatsApp click-to-chat button to their Instagram profiles.

This change, in particular, represents another move that ties Facebook’s separate apps more closely together, at a time when regulators are considering breaking up Facebook over antitrust concerns. Facebook has already interconnected the Messenger and Instagram messaging services, which would make such a disassembly more complicated. And more recently, it’s begun integrating Messenger directly into Facebook’s platform itself.

Image Credits: Facebook

In a related change, soon businesses will be able to create ads that send users directly to WhatsApp from the Instagram app. (Facebook also already offers ads like this.)

Separately from this news, Facebook announced the launch of a new business directory on WhatsApp, allowing consumers to find shops and services on the chat platform, as well.

Another set of changes being introduced involves an update to Facebook Business Suite. Businesses will be able to manage email through Inbox and send remarketing emails; use a new File Manager for creating, managing, and posting content; and access a feature that lets them test different versions of a post to see which one is most effective.

Image Credits: Facebook

Other new products include tests of paid and organic lead generation tools on Instagram; quote requests on Messenger, where customers answer a few questions prior to their conversations; and a way for small businesses to access a bundle of tools to get started with Facebook ads, which includes a Facebook ad coupon along with three months of free access to QuickBooks or Canva Pro.

Image Credits: Facebook

Facebook will also begin testing something called “Work Accounts,” which will allow business owners to access their business products, like Business Manager, separately from their personal Facebook account. They’ll be able to manage these accounts on behalf of employees and use single-sign-on integrations.

Work Accounts will be tested through the remainder of the year with a small group of businesses, and Facebook says it expects to expand availability in 2022.

Other efforts it has in store include plans to incorporate more content from creators and local businesses and new features that let users control the content they see, but these changes were not detailed at this time.

Most of the products being announced are either rolling out today or will begin to show up soon.

TikTok is hiding a viral challenge that has kids stealing their school’s soap dispensers

It’s back to school season and on TikTok, that means students are inexplicably stealing everything that isn’t bolted down.

The latest TikTok trend to wreak social havoc sees students pulling off “devious licks” — small-scale heists of everything from soap dispensers, COVID test kits and hand sanitizer to high-value items like classroom tech.

The videos are set to an edited version of Lil B’s “Ski Ski BasedGod,” which had appeared in around 100,000 videos on TikTok by Monday, according to Mashable, with the tag #deviouslick collecting more than 175 million views.

TikTok cracked down on Wednesday, limiting search results for the viral stunt, removing videos with the tag and encouraging users to “please be kind” to teachers.

“We expect our community to stay safe and create responsibly, and we do not allow content that promotes or enables criminal activities,” a TikTok spokesperson told TechCrunch. “We are removing this content and redirecting hashtags and search results to our Community Guidelines to discourage such behavior.”

While it’s hard to know which videos are staged and which are legitimate, the trend is real enough to have teachers and parents across the country stressed out. At a middle school in Las Vegas, school administrators report students swiping speed limit signs, fire alarms, soap dispensers and classroom projectors. And in Portland, Oregon, at least one high school saw an entire building’s worth of soap dispensers go missing — not a great start to another school year in the throes of a global pandemic.

Clubhouse hires a head of news from NPR to build out publisher relationships

Clubhouse has hired a veteran editor from NPR to lead news publishing for the app. Nina Gregory will serve as Clubhouse’s Head of News and Media Publishers, working as a liaison between news publishers and Clubhouse’s ecosystem of audio-based communities.

Gregory led NPR’s Arts Desk for the last seven years, shaping the news outlet’s culture and entertainment coverage. “As an audio journalist, [Clubhouse] aligned with what I’ve always believed is the best medium for news,” Gregory told CNN. “You don’t need to know how to read to be able to hear radio news. You don’t need to have an expensive subscription. You don’t need cable.”

Helping publishers and other brands get plugged in is one path toward maturation for Clubhouse. Online media properties from USA Today to TechCrunch have built a presence on the app, which exploded in growth as the pandemic limited in-person social interactions. But with competition from more entrenched competitors looming, Clubhouse may need to get creative to stay in the game.

Clubhouse’s quick ascent saw Twitter, Spotify, Facebook and other established tech companies scramble to integrate live audio rooms into their own products. Twitter quickly launched Spaces, while Spotify launched a standalone Clubhouse clone known as Greenroom. Facebook first announced its own live audio rooms in April, opening them to U.S. users two months later.

The kind of viral attention that Clubhouse enjoyed over the last year is almost impossible to maintain, but the company has added features, introduced an Android app and opened its doors to everyone. Clubhouse might not be able to top its February peak, but the app still notched 7.7 million global monthly downloads after expanding to Android this summer, and continues to build out its vision for audio-first social networking.

The FDA should regulate Instagram’s algorithm as a drug

The Wall Street Journal on Tuesday reported Silicon Valley’s worst-kept secret: Instagram harms teens’ mental health; in fact, its impact is so negative that it can induce suicidal thoughts.

Thirty-two percent of teen girls who feel bad about their bodies report that Instagram makes them feel worse. Of teens with suicidal thoughts, 13% of British and 6% of American users trace those thoughts to Instagram, the WSJ report said. This is Facebook’s internal data. The truth is surely worse.

President Theodore Roosevelt and Congress formed the Food and Drug Administration in 1906 precisely because Big Food and Big Pharma failed to protect the general welfare. As its executives parade at the Met Gala in celebration of the unattainable 0.01% of lifestyles and bodies that we mere mortals will never achieve, Instagram’s unwillingness to do what is right is a clarion call for regulation: The FDA must assert its codified right to regulate the algorithm powering the drug of Instagram.

The FDA should consider algorithms a drug impacting our nation’s mental health: The Federal Food, Drug and Cosmetic Act gives the FDA the right to regulate drugs, defining drugs in part as “articles (other than food) intended to affect the structure or any function of the body of man or other animals.” Instagram’s internal data shows its technology is an article that alters our brains. If this effort fails, Congress and President Joe Biden should create a mental health FDA.

The public needs to understand what Facebook and Instagram’s algorithms prioritize. Our government is equipped to study clinical trials of products that can physically harm the public. Researchers can study what Facebook privileges and the impact those decisions have on our minds. How do we know this? Because Facebook is already doing it — they’re just burying the results.

In November 2020, as Cecilia Kang and Sheera Frenkel report in “An Ugly Truth,” Facebook made an emergency change to its News Feed, putting more emphasis on “News Ecosystem Quality” scores (NEQs). High NEQ sources were trustworthy sources; low were untrustworthy. Facebook altered the algorithm to privilege high NEQ scores. As a result, for five days around the election, users saw a “nicer News Feed” with less fake news and fewer conspiracy theories. But Mark Zuckerberg reversed this change because it led to less engagement and could cause a conservative backlash. The public suffered for it.

Facebook likewise has studied what happens when the algorithm privileges content that is “good for the world” over content that is “bad for the world.” Lo and behold, engagement decreases. Facebook knows that its algorithm has a remarkable impact on the minds of the American public. How can the government let one man decide the standard based on his business imperatives, not the general welfare?

Upton Sinclair memorably uncovered dangerous abuses in “The Jungle,” which led to a public outcry. The free market failed. Consumers needed protection. The 1906 Pure Food and Drug Act for the first time promulgated safety standards, regulating consumable goods impacting our physical health. Today, we need to regulate the algorithms that impact our mental health. Teen depression has risen alarmingly since 2007. Likewise, suicide among those ages 10 to 24 rose nearly 60% between 2007 and 2018.

It is of course impossible to prove that social media is solely responsible for this increase, but it is absurd to argue it has not contributed. Filter bubbles distort our views and make them more extreme. Bullying online is easier and constant. Regulators must audit the algorithm and question Facebook’s choices.

When it comes to the biggest issue Facebook poses — what the product does to us — regulators have struggled to articulate the problem. Section 230 is correct in its intent and application; the internet cannot function if platforms are liable for every user utterance. And a private company like Facebook loses the trust of its community if it applies arbitrary rules that target users based on their background or political beliefs. Facebook as a company has no explicit duty to uphold the First Amendment, but public perception of its fairness is essential to the brand.

Thus, Zuckerberg has equivocated over the years before belatedly banning Holocaust deniers, Donald Trump, anti-vaccine activists and other bad actors. Deciding what speech is privileged or allowed on its platform, Facebook will always be too slow to react, overcautious and ineffective. Zuckerberg cares only for engagement and growth. Our hearts and minds are caught in the balance.

The most frightening part of “An Ugly Truth,” the passage that got everyone in Silicon Valley talking, was the eponymous memo: Andrew “Boz” Bosworth’s 2016 “The Ugly.”

In the memo, Bosworth, Zuckerberg’s longtime deputy, writes:

“So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”

Zuckerberg and Sheryl Sandberg made Bosworth walk back his statements when employees objected, but to outsiders, the memo represents the unvarnished id of Facebook, the ugly truth. Facebook’s monopoly, its stranglehold on our social and political fabric, its growth at all costs mantra of “connection,” is not de facto good. As Bosworth acknowledges, Facebook causes suicides and allows terrorists to organize. This much power concentrated in the hands of one corporation, run by one man, is a threat to our democracy and way of life.

Critics of FDA regulation of social media will claim this is a Big Brother invasion of our personal liberties. But what is the alternative? Why would it be bad for our government to demand that Facebook account to the public for its internal calculations? Is it safe for the number of sessions, time spent and revenue growth to be the only results that matter? What about the collective mental health of the country and world?

Refusing to study the problem does not mean it does not exist. In the absence of action, we are left with a single man deciding what is right. What is the price we pay for “connection”? This is not up to Zuckerberg. The FDA should decide.

Beware the hidden bias behind TikTok resumes

Social media has served as a launchpad to success almost as long as it has been around. The stories of going viral from a self-produced YouTube video and then securing a record deal established the mythology of social media platforms. Ever since, social media has steadily moved away from text-based formats and toward visual mediums like video sharing.

For most people, a video on social media won’t be a ticket to stardom, but in recent months, there have been a growing number of stories of people getting hired based on videos posted to TikTok. Even LinkedIn has embraced video assets on user profiles with the recent addition of the “Cover Story” feature, which allows workers to supplement their profiles with a video about themselves.

As technology continues to evolve, is there room for a world where your primary resume is a video on TikTok? And if so, what kinds of unintended consequences and implications might this have on the workforce?

Why is TikTok trending for jobs?

In recent months, U.S. job openings have risen to an all-time high of 10.1 million. For the first time since the pandemic began, available jobs have exceeded available workers. Employers are struggling to attract qualified candidates to fill positions, and in that light, it makes sense that many recruiters are turning to social platforms like TikTok and video resumes to find talent.

But the scarcity of workers does not negate the importance of finding the right employee for a role. Especially important for recruiters is finding candidates with the skills that align with their business’ goals and strategy. For example, as more organizations embrace a data-driven approach to operating their business, they need more people with skills in analytics and machine learning to help them make sense of the data they collect.

Recruiters have proven to be open to innovation where it helps them find these new candidates. Recruiting is no longer the manual process it used to be, with HR teams sorting through stacks of paper resumes and formal cover letters to find the right candidate. Recruiters embraced the power of online connections as LinkedIn rose to prominence and even figured out how to use third-party job sites like Glassdoor to help them draw in promising candidates. On the back end, many recruiters use advanced cloud software to sort through incoming resumes and find the candidates that best match their job descriptions. But all of these methods still rely on the traditional text-based resume or profile as the core of any application.

Videos on social media provide the ability for candidates to demonstrate soft skills that may not be immediately apparent in written documents, such as verbal communication and presentation skills. They are also a way for recruiters to learn more about the personality of the candidate to determine how they’d fit into the culture of the company. While this may be appealing for many, are we ready for the consequences?

We’re not ready for the close-up

While innovation in recruiting is a big part of the future of work, the hype around TikTok and video resumes may actually take us backward. Despite offering a new way for candidates to market themselves for opportunities, it also carries potential pitfalls that candidates, recruiters and business leaders need to be aware of.

The very element that gives video resumes their potential also presents the biggest problems. Video inescapably highlights the person behind the skills and achievements. As recruiters form their first opinions about a candidate, they will be confronted with information they do not usually see until much later in the process, including whether they belong to protected classes because of their race, disability or gender.

Diversity, equity and inclusion (DE&I) concerns have had a major surge in attention over the last couple of years amid heightened awareness and scrutiny around how employers are — or are not — prioritizing diversity in the workplace.

But evaluating candidates through video could erase that progress by introducing more opportunities for unconscious, or even conscious, bias. This could create a dangerous situation for businesses that do not act carefully, opening them up to consequences such as reputational damage or even something as severe as discrimination lawsuits.

A company with a poor track record on diversity could see its review of candidate videos used against it in court. Recruiters reviewing the videos may not even be aware of how a candidate’s race or gender is influencing their decisions. For that reason, many of the businesses I have seen add an optional video step to their recruiting flow do not allow their recruiters to watch the video until late in the recruiting process.

But even if businesses address the most pressing DE&I issues by managing bias against those protected classes, accepting videos still raises diversity concerns around less protected characteristics such as neurodiversity and socioeconomic status. A candidate with exemplary skills and a strong track record may not present themselves well through a video, coming across as awkward to the recruiter watching it. Even if that impression is irrelevant to the job, it could still influence the recruiter’s stance on hiring.

Furthermore, candidates from affluent backgrounds may have access to better equipment and software to record and edit a compelling video resume. Other candidates may not, resulting in videos that may not look as polished or professional in the eyes of the recruiter. This creates yet another barrier to the opportunities they can access.

As we sit at an important crossroads in how we handle DE&I in the workplace, it is important for employers and recruiters to find ways to reduce bias in the processes they use to find and hire employees. While innovation is key to moving our industry forward, we have to ensure top priorities are not being compromised.

Not left on the cutting room floor

Despite all of these concerns, social media platforms — especially those based on video — have created new opportunities for users to expand their personal brands and connect with potential job opportunities. There is potential to use these new systems to benefit both job seekers and employers.

The first step is to ensure that there is always a place for a traditional text-based resume or profile in the recruiting process. Even if recruiters can get all the information they need about a candidate’s capabilities from video, some people will just naturally feel more comfortable staying off camera. Hiring processes need to be about letting people put their best foot forward, whether that is in writing or on video. And that includes accepting that the best foot to put forward may not be your own.

Instead, candidates and businesses should consider using videos as a place for past co-workers or managers to endorse the candidate. An outside endorsement can do a lot more good for an application than simply stating your own strengths because it shows that someone else believes in your capabilities, too.

Video resumes are hot right now because they are easier to make and share than ever and because businesses are in desperate need of strong talent. But before we get caught up in the novelty of this new way of sharing our credentials, we need to make sure that we are setting ourselves up for success.

The goal of any new recruiting technology should be to make it easier for candidates to find opportunities where they can shine without creating new barriers. There are some serious kinks to work out before video resumes can achieve that, and it is important for employers to consider the repercussions before they damage the success of their DE&I efforts.

Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order, removing more than half a million accounts where it could not verify that the users were old enough to be on the platform.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, putting an age cap on children’s ability to consent to their data being used. The age limit varies per EU Member State: the GDPR’s default is 16, and countries may set it lower, but no lower than a floor of 13.
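To make the per-country rule concrete, here is a minimal sketch of a digital-age-of-consent check. The lookup table and function are hypothetical, and the country ages are illustrative examples only; any real implementation would need to be verified against current national law.

```python
# Illustrative sketch of a per-country digital-age-of-consent check.
# The country ages below are examples only -- verify against current
# national law before relying on them.

GDPR_DEFAULT_AGE = 16  # GDPR Article 8 default
GDPR_FLOOR_AGE = 13    # Member states may lower the age, but not below 13

# Hypothetical lookup table (ISO country code -> age of digital consent)
DIGITAL_CONSENT_AGE = {
    "IE": 16,  # Ireland
    "DE": 16,  # Germany
    "FR": 15,  # France
}

def can_consent(age: int, country: str) -> bool:
    """Return True if a user of this age can consent to data
    processing themselves; otherwise parental consent is needed."""
    threshold = DIGITAL_CONSENT_AGE.get(country, GDPR_DEFAULT_AGE)
    return age >= threshold

print(can_consent(14, "FR"))  # False: France sets the bar at 15
print(can_consent(15, "FR"))  # True
```

The fallback to the GDPR default for unlisted countries mirrors how the regulation works: 16 applies unless a member state has legislated a lower age.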

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it’s made around children’s accounts and data, such as making younger teens’ accounts private by default and limiting their exposure to certain features that encourage interaction with other TikTok users until they are over 16.

On international data transfers, TikTok claims to use “approved methods”. However, the picture is rather more complicated than its statement implies: transfers of Europeans’ data to China are complicated by the fact that there is no EU data adequacy agreement in place with China.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark ruling by the CJEU last year which invalidated a flagship data transfer arrangement between the US and the EU and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So while the CJEU did not invalidate mechanisms like SCCs entirely, it essentially said all international transfers to third countries must be assessed on a case-by-case basis and, where a DPA has concerns, it must step in and suspend those non-secure data flows.

The CJEU ruling means that merely using a mechanism like SCCs says nothing on its own about the legality of a particular data transfer. It also amps up the pressure on EU agencies like Ireland’s DPC to be proactive about assessing risky data flows.

Final guidance put out by the European Data Protection Board earlier this year details the so-called ‘supplementary measures’ that a data controller may be able to apply in order to increase the level of protection around a specific transfer so the information can legally be taken to a third country.

But these steps can include technical measures like strong encryption — and it’s not clear how a social media company like TikTok would be able to apply such a fix, given how its platform and algorithms are continuously mining users’ data to customize the content they see and in order to keep them engaged with TikTok’s ad platform.

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to EU enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company: a $267M fine against WhatsApp for breaching GDPR transparency rules, announced years after the first complaints had been lodged.

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year, when it fined Twitter $550k over a data breach dating back to 2018, the year GDPR began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means that the new TikTok probes join the back of a much criticized bottleneck. And a decision on these probes isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data and, as of this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.

TikTok expands mental health resources, as negative reports of Instagram’s effect on teens leak

TikTok announced this morning that it is implementing new tactics to educate its users about the negative mental health impacts of social media. As part of these changes, TikTok is rolling out a “well-being guide” in its Safety Center, a brief primer on eating disorders, expanded search interventions, and opt-in viewing screens on potentially triggering searches.

Developed in collaboration with the International Association for Suicide Prevention, Crisis Text Line, Live For Tomorrow, Samaritans of Singapore, and Samaritans (UK), the new well-being guide offers advice targeted at people using TikTok, encouraging users to consider how it might impact them to share their mental health stories on a platform where any post has the potential to go viral. TikTok wants users to think about why they’re sharing their experience, whether they’re ready for a wider audience to hear their story, whether sharing could be harmful to them, and whether they’re prepared to hear others’ stories in response.

The platform also added a brief, albeit generic memo about the impact of eating disorders under the “topics” section of the Safety Center, which was developed with the National Eating Disorders Association (NEDA). NEDA has a long track record of collaborating with social media platforms, most recently working with Pinterest to prohibit ads promoting weight loss.

Already, TikTok directs users to local resources when they search for words or phrases like #suicide,* but now, the platform will also share content from creators with the intent of helping someone in need. The platform told TechCrunch that it chose this content following consultation with independent experts. Additionally, if someone enters a search phrase that might be alarming (TikTok offered “scary makeup” as an example), the content will be blurred out, asking users to opt-in to see the search results.
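The opt-in viewing behavior described above can be sketched as a simple gate: flagged queries return a blurred state until the user explicitly opts in. The term list, function name, and return values here are invented for illustration and are not TikTok’s actual implementation.

```python
# Hypothetical sketch of an opt-in "blur" gate for sensitive search
# terms, loosely modeling the behavior TikTok describes. The term list
# and function are invented for illustration.

SENSITIVE_TERMS = {"scary makeup"}  # example term from TikTok's announcement

def render_results(query: str, user_opted_in: bool) -> str:
    """Return 'blurred' for flagged queries until the user opts in."""
    if query.lower() in SENSITIVE_TERMS and not user_opted_in:
        return "blurred"   # show a warning screen; require explicit opt-in
    return "visible"

print(render_results("scary makeup", user_opted_in=False))  # blurred
print(render_results("scary makeup", user_opted_in=True))   # visible
```

The key design point is that the content is hidden by default and the friction is on viewing, not on searching, which matches the opt-in framing in TikTok’s announcement.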

As TikTok unveils these changes, its competitor Instagram is facing scrutiny after The Wall Street Journal leaked documents that reveal its parent company Facebook’s own research on the harm Instagram poses for teen girls. Similar to the Gen Z-dominated TikTok, more than 40% of Instagram users are 22 or younger, and 22 million teens log into Instagram in the U.S. each day. In one anecdote, a 19-year-old interviewed by The Wall Street Journal said that after she searched Instagram for workout ideas, her explore page was flooded with photos about how to lose weight (Instagram has previously fessed up to errors with its search function, which recommended that users search topics like “fasting” and “appetite suppressants”). Angela Guarda, director for the eating-disorders program at Johns Hopkins Hospital, told The Wall Street Journal that her patients often say they learned about dangerous weight loss tactics via social media.

“The question on many people’s minds is if social media is good or bad for people. The research on this is mixed; it can be both,” Instagram wrote in a blog post today.

As TikTok nods to with its advice on sharing mental health stories, social media can often be a positive resource, allowing people who are dealing with certain challenges to learn from others who have gone through similar experiences. So, despite these platforms’ outsized influence, it’s also on real people to think twice about what they post and how it might influence others. Even when Facebook experimented with hiding the number of “likes” on Instagram, employees said that it didn’t improve overall user well-being. These revelations about the negative impact of social media on mental health and body image aren’t ground-breaking, but they create renewed pressure on these powerful platforms to think about how to support their users (or, at the very least, to add some new memos to their safety centers).

*If you or someone you know is struggling with depression or has had thoughts of harming themselves or taking their own life, The National Suicide Prevention Lifeline (1-800-273-8255) provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations.