Yo Facebook & Instagram, stop showing Stories reruns

If I watch a Story cross-posted from Instagram to Facebook on either of the apps, it should appear as “watched” at the back of the Stories row on the other app. Why waste my time showing me Stories I already saw?

It’s been over two years since Instagram launched cross-posting to Facebook Stories. Countless hours of the features’ 500 million daily users have been squandered viewing repeats. Facebook and Messenger already synchronize the watched/unwatched state of Stories. It’s long past time that this was expanded to encompass Instagram.

I asked Facebook and Instagram if there were plans for this. A company spokesperson told me that it built cross-posting to make sharing easier for people’s different audiences on Facebook and Instagram, and that it’s continuing to explore ways to simplify and improve Stories. But the company gave no indication that it realizes how annoying this is or that a solution is in the works.

The end result if this gets fixed? Users would spend more time watching new content, more creators would feel seen, and Facebook’s choice to jam Stories into all its apps would feel less redundant and invasive. If I send a reply to a Story on one app, I’m not going to send it again or something different when I see the same Story on the other app a few minutes or hours later. Repeated content leads to more passive viewing and less interactive communication with friends, despite Facebook and Instagram stressing that it’s this zombie consumption that’s unhealthy.

The only possible downside to changing this could be fewer Stories ad impressions, if secondary viewings of people’s best friends’ Stories keep them watching more than new content would. But prioritizing making money over the user experience is exactly what Mark Zuckerberg has emphasized is not Facebook’s strategy.

There’s no need to belabor the point any further. Give us back our time. Stop the reruns.

TechCrunch’s Top 10 investigative reports from 2019

Facebook spying on teens, Twitter accounts hijacked by terrorists, and sexual abuse imagery found on Bing and Giphy were amongst the ugly truths revealed by TechCrunch’s investigative reporting in 2019. The tech industry needs more watchdogs than ever as its size amplifies the impact of safety failures and the abuse of power. Whether through malice, naivety, or greed, there was plenty of wrongdoing to sniff out.

Led by our security expert Zack Whittaker, TechCrunch undertook more long-form investigations this year to tackle these growing issues. Our coverage of fundraises, product launches, and glamorous exits tells only half the story. As perhaps the biggest and longest-running news outlet dedicated to startups (and the giants they become), we’re responsible for keeping these companies honest and pushing for a more ethical and transparent approach to technology.

If you have a tip potentially worthy of an investigation, contact TechCrunch at [email protected] or by using our anonymous tip line’s form.

Image: Bryce Durbin/TechCrunch

Here are our top 10 investigations from 2019, and their impact:

Facebook pays teens to spy on their data

Josh Constine’s landmark investigation discovered that Facebook was paying teens and adults $20 in gift cards per month to install a VPN that sent Facebook all their sensitive mobile data for market research purposes. The laundry list of problems with Facebook Research included not informing 187,000 users that their data would go to Facebook until they signed up for “Project Atlas”, not receiving proper parental consent for over 4,300 minors, and threatening legal action if a user spoke publicly about the program. The program also abused Apple’s enterprise certificate program, which is designed only for distributing employee-only apps within companies, to avoid the App Store review process.

The fallout was enormous. Lawmakers wrote angry letters to Facebook. TechCrunch soon discovered a similar market research program from Google called Screenwise Meter that the company promptly shut down. Apple punished both Google and Facebook by shutting down all their employee-only apps for a day, causing office disruptions since Facebookers couldn’t access their shuttle schedule or lunch menu. Facebook tried to claim the program was above board, but finally succumbed to the backlash and shut down Facebook Research and all paid data collection programs for users under 18. Most importantly, the investigation led Facebook to shut down its Onavo app, which offered a VPN but in reality sucked in tons of mobile usage data to figure out which competitors to copy. Onavo helped Facebook realize it should acquire messaging rival WhatsApp for $19 billion, and it’s now at the center of antitrust investigations into the company. TechCrunch’s reporting weakened Facebook’s exploitative market surveillance, pitted tech giants against each other, and raised the bar for transparency and ethics in data collection.

Protecting The WannaCry Kill Switch

Zack Whittaker’s profile of the heroes who helped save the internet from the fast-spreading WannaCry ransomware reveals the precarious nature of cybersecurity. The gripping tale documenting Marcus Hutchins’ benevolent work establishing the WannaCry kill switch may have contributed to a judge’s decision to sentence him to just one year of supervised release instead of 10 years in prison for an unrelated charge of creating malware as a teenager.

The dangers of Elon Musk’s tunnel

TechCrunch contributor Mark Harris’ investigation discovered inadequate emergency exits and more problems with Elon Musk’s plan for his Boring Company to build a Washington D.C.-to-Baltimore tunnel. Consulting fire safety and tunnel engineering experts, Harris built a strong case for why state and local governments should be suspicious of technology disrupters cutting corners in public infrastructure.

Bing image search is full of child abuse

Josh Constine’s investigation exposed how Bing’s image search results not only showed child sexual abuse imagery, but also suggested search terms to innocent users that would surface this illegal material. A tip led Constine to commission a report by anti-abuse startup AntiToxin (now L1ght), forcing Microsoft to commit to UK regulators that it would make significant changes to stop this from happening. However, a follow-up investigation by the New York Times citing TechCrunch’s report revealed Bing had made little progress.

Expelled despite exculpatory data

Zack Whittaker’s investigation surfaced contradictory evidence in a case of alleged grade tampering by Tufts student Tiffany Filler who was questionably expelled. The article casts significant doubt on the accusations, and that could help the student get a fair shot at future academic or professional endeavors.

Burned by an educational laptop

Natasha Lomas chronicled troubles at educational computer hardware startup pi-top, including a device malfunction that injured a U.S. student. An internal email revealed the student had suffered “a very nasty finger burn” from a pi-top 3 laptop designed to be disassembled. Reliability issues swelled and layoffs ensued. The report highlights how startups operating in the physical world, especially around sensitive populations like students, must make safety a top priority.

Giphy fails to block child abuse imagery

Sarah Perez and Zack Whittaker teamed up with child protection startup L1ght to expose Giphy’s negligence in blocking sexual abuse imagery. The report revealed how criminals used the site to share illegal imagery, which was then accidentally indexed by search engines. TechCrunch’s investigation demonstrated that it’s not just public tech giants who need to be more vigilant about their content.

Airbnb’s weakness on anti-discrimination

Megan Rose Dickey explored a botched case of discrimination policy enforcement by Airbnb when a blind and deaf traveler’s reservation was cancelled because they had a guide dog. Airbnb tried to just “educate” the host who was accused of discrimination instead of levying any real punishment until Dickey’s reporting pushed it to suspend them for a month. The investigation reveals the lengths Airbnb goes to in order to protect its money-generating hosts, and how policy problems could mar its IPO.

Expired emails let terrorists tweet propaganda

Zack Whittaker discovered that Islamic State propaganda was being spread through hijacked Twitter accounts. His investigation revealed that if the email address associated with a Twitter account expired, attackers could re-register it to gain access and then receive password resets sent from Twitter. The article revealed the savvy but not necessarily sophisticated ways terrorist groups are exploiting big tech’s security shortcomings, and identified a dangerous loophole for all sites to close.

Porn & gambling apps slip past Apple

Josh Constine found dozens of pornography and real-money gambling apps had broken Apple’s rules but avoided App Store review by abusing its enterprise certificate program — many based in China. The report revealed the weak and easily defrauded requirements to receive an enterprise certificate. Seven months later, Apple revealed a spike in porn and gambling app takedown requests from China. The investigation could push Apple to tighten its enterprise certificate policies, and proved the company has plenty of its own problems to handle despite CEO Tim Cook’s frequent jabs at the policies of other tech giants.

Bonus: HQ Trivia employees fired for trying to remove CEO

This Game Of Thrones-worthy tale was too intriguing to leave out, even if the impact was more of a warning to all startup executives. Josh Constine’s look inside gaming startup HQ Trivia revealed a saga of employee revolt in response to its CEO’s ineptitude and inaction as the company nose-dived. Employees who organized a petition to the board to remove the CEO were fired, leading to further talent departures and stagnation. The investigation served to remind startup executives that they are responsible to their employees, who can exert power through collective action or their exodus.

If you have a tip for Josh Constine, you can reach him via encrypted Signal or text at (585)750-5674, joshc at TechCrunch dot com, or through Twitter DMs

Instagram drops IGTV button, but only 1% downloaded the app

At most, 7 million of Instagram’s 1 billion-plus users have downloaded its standalone IGTV app in the 18 months since launch. And now, Instagram’s main app is removing the annoying orange IGTV button from its home page in what feels like an admission of lackluster results. For reference, TikTok received 1.15 billion downloads in the same period since IGTV launched in June 2018. In just the US, TikTok received 80.5 million downloads compared to IGTV’s 1.1 million since then, according to research commissioned by TechCrunch from Sensor Tower.

To be fair, TikTok has spent huge sums on install ads. But while long-form mobile video might gain steam as the years progress, Instagram hasn’t seemed to crack the code yet.

“As we’ve continued to work on making it easier for people to create and discover IGTV content, we’ve learned that most people are finding IGTV content through previews in Feed, the IGTV channel in Explore, creators’ profiles and the standalone app. Very few are clicking into the IGTV icon in the top right corner of the home screen in the Instagram app,” a Facebook company spokesperson tells TechCrunch. “We always aim to keep Instagram as simple as possible, so we’re removing this icon based on these learnings and feedback from our community.”

Instagram users don’t need the separate IGTV app to watch longer videos, as the IGTV experience is embedded in the main app and can be accessed via in-feed teasers, a tab of the Explore page, promo stickers in Stories, and profile tabs. Still, the fact that it wasn’t an appealing enough destination to warrant a home page button shows IGTV hasn’t become a staple like past Instagram launches including video, Stories, augmented reality filters, or Close Friends.

One thing still missing is an open way for Instagram creators to earn money directly from their IGTV videos. Users can’t get an ad revenue share like with YouTube or Facebook Watch. They also can’t receive tips or sell exclusive content subscriptions like on Facebook, Twitch, or Patreon.

The only financial support Facebook and Instagram have offered IGTV creators is reimbursement for production costs for a few celebrities. Those contracts also require creators to avoid making content related to politics, social issues, or elections, according to Bloomberg‘s Lucas Shaw and Sarah Frier.

“In the last few years we’ve offset small production costs for video creators on our platforms and have put certain guidelines in place,” a Facebook spokesperson told Bloomberg. “We believe there’s a fundamental difference between allowing political and issue-based content on our platform and funding it ourselves.” That seems somewhat hypocritical given Facebook CEO Mark Zuckerberg’s criticism of Chinese app TikTok over censorship of political content.

Now users need to tap the IGTV tab inside Instagram Explore to view long-form video.

Another thing absent from IGTV? Large view counts. The first 20 IGTV videos I saw today in its Popular feed all had fewer than 200,000 views. BabyAriel, a creator with nearly 10 million Instagram followers whom the company touted as a top IGTV creator, has only posted 20 of the longer videos to date, with only one receiving over 500,000 views.

When the lack of monetization is combined with less than stellar view counts compared to YouTube and TikTok, it’s understandable why some creators might be hesitant to dedicate time to IGTV. Without their content keeping the feature reliably interesting, it’s no surprise users aren’t voluntarily diving in from the home page.

In another sign that Instagram is folding IGTV deeper into its app rather than giving it more breathing room of its own, and that it’s eager for more content, you can now opt to post IGTV videos right from the main Instagram feed post video uploader. AdWeek Social Pro reported this new “long video” upload option yesterday. A Facebook company spokesperson tells me “We want to keep our video upload process as simple as possible” and that “Our goal is to create a central place for video uploads.”


IGTV launched with a zealous devotion to long-form vertical video, despite the fact that little high-quality content of this nature was being produced. Landscape orientation is helpful for longer clips that often require establishing shots and fitting multiple people on screen, while vertical is better for quick selfie monologues.

Yet Instagram co-founder Kevin Systrom described IGTV to me in August 2018, declaring that “What I’m most proud of is that Instagram took a stand and tried a brand new thing that is frankly hard to pull off. Full-screen vertical video that’s mobile only. That doesn’t exist anywhere else.”

Now it doesn’t exist on Instagram at all: in May 2019, IGTV retreated from its orthodoxy and began allowing landscape content. I’d recommended it do that from the beginning, or at least offer a cropping tool to help users turn their landscape videos into coherent vertical ones, but nothing has launched there either.

If Instagram still cares about IGTV, it needs to attract more must-see videos by helping creators get paid for their art. Or it needs to pour investment into buying high-quality programming like Snapchat Discover’s Shows. If Instagram doesn’t care, it should divert development resources to its TikTok clone Reels, which actually looks well made and has a shot at stealing market share in the remixable social entertainment space.

For a company that’s won by betting big and moving fast, IGTV feels half-baked and sluggish. That might have been alright when Snapchat was shrinking and TikTok was still Musically, but Instagram is heading into an era of much stiffer competition. Quibi and more want to own multi-minute spans of video viewing on mobile, and the space could grow as adults become familiar with the format. But offering the platform isn’t enough for Instagram. It needs to actively assist creators with finding what content works, and with earning sustainable wages making it.

Instagram tests Direct Messaging on web where encryption fails

Instagram will finally let you chat from your web browser, but the launch contradicts Facebook’s plan for end-to-end encryption in all its messaging apps. Today Instagram began testing Direct Messages on the web for a small percentage of users around the globe, a year after TechCrunch reported it was testing web DMs.

When fully rolled out, Instagram tells us its website users will be able to see when they’ve received new DMs, view their whole inbox, start new message threads or group chats, send photos (but not capture them), double click to Like and share posts from their feed via Direct so they can gossip or blast friends with memes. You won’t be able to send videos, but can view non-disappearing ones. Instagram’s CEO Adam Mosseri tweeted that he hopes to “bring this to everyone soon” once the kinks are worked out.

Web DMs could help office workers, students and others stuck on a full-size computer all day, or those who don’t have room on their phone for another app, to spend more time and stay better connected on Instagram. Direct is crucial to Instagram’s efforts to stay ahead of Snapchat, which has seen its Stories product mercilessly copied by Facebook but is still growing thanks to its rapid-fire visual messaging feature that’s popular with teens.

But as Facebook’s former Chief Security Officer Alex Stamos tweeted, “This is fascinating, as it cuts directly against the announced goal of E2E encrypted compatibility between FB/IG/WA. Nobody has ever built a trustworthy web-based E2EE messenger, and I was expecting them to drop web support in FB Messenger. Right hand versus left?”

A year ago Facebook announced it planned to eventually unify Facebook Messenger, WhatsApp and Instagram Direct so users could chat with each other across apps. It also said it would extend end-to-end encryption from WhatsApp to include Instagram Direct and all of Facebook Messenger, though it could take years to complete. That security protocol means that only the sender and recipient would be able to view the contents of a message, while Facebook, governments and hackers wouldn’t know what was being shared.

Yet Stamos explains that historically, security researchers haven’t been able to store cryptographic secrets in JavaScript, which is how the Instagram website runs, though he admits this could be solved in the future. More problematically, Stamos points to “the model by which code on the web is distributed, which is directly from the vendor in a customizable fashion. This means that inserting a backdoor for one specific user is much much easier than in the mobile app paradigm,” where attackers would have to compromise both Facebook/Instagram and either Apple or Google’s app stores.

“Fixing this problem is extremely hard and would require fundamental changes to how the WWW [world wide web] works,” says Stamos. At least we know Instagram has been preparing for today’s launch since at least February, when mobile researcher Jane Manchun Wong alerted us. We’ve asked Instagram for more details on how it plans to cover web DMs with end-to-end encryption or whether they’ll be exempt from the plan. [Update: An Instagram spokesperson tells me that as with Instagram Direct on mobile, messages currently are not encrypted. The company is working on making its messaging products end-to-end encrypted, and it continues to consider ways to accomplish this.]

Critics have called the messaging unification a blatant attempt to stave off regulators and prevent Facebook, Instagram and WhatsApp from being broken up. Yet Facebook has stayed the course on the plan while weathering a $5 billion fine plus a slew of privacy and transparency changes mandated by an FTC settlement for its past offenses.

Personally, I’m excited, because it will make DMing sources via Instagram easier, and mean I spend less time opening my phone and potentially being distracted by other apps while working. Almost 10 years after Instagram’s launch and six years since adding Direct, the app seems to finally be embracing its position as a utility, not just entertainment.

Atrium lays off lawyers, explains pivot to legal tech

$75 million-funded legal services startup Atrium doesn’t want to be the next company to implode as the tech industry tightens its belt and businesses chase margins instead of growth via unsustainable economics. That’s why Atrium is laying off most of its in-house lawyers.

Now, Atrium will focus on its software for startups navigating fundraising, hiring, and collaborating with lawyers. Atrium plans to ramp up its startup advising services. And it’s also doubling down on its year-old network of professional service providers that help clients navigate day-to-day legal work. Atrium’s laid off attorneys will be offered spots as preferred providers in that network if they start their own firm or join another.

“It’s a natural evolution for us to create a sustainable model,” Atrium co-founder and CEO Justin Kan tells TechCrunch. “We’ve made the tough decision to restructure the company to accommodate growth into new business services through our existing professional services network,” Kan wrote on Atrium’s blog. He wouldn’t give exact figures but confirmed that more than 10 but fewer than 50 staffers are impacted by the change, with Atrium having a headcount of 150 as of June.

The change could make Atrium more efficient by keeping fewer expensive lawyers on staff. However, it could weaken its $500 per month Atrium membership, which included some services from its in-house lawyers that might be more complicated for clients to attain through its professional network. Atrium will also now have to prove that its client-lawyer collaboration software can survive in the market with firms paying for it, rather than it being bundled with its in-house lawyers’ services.

“We’re making these changes to move Atrium to a sustainable model that provides high quality services to our clients. We’re doing it proactively because we see the writing on the wall that it’s important to have a sustainable business,” Kan says. “That’s what we’re doing now. We don’t anticipate any disruption of services to clients. We’re still here.”

Justin Kan (Atrium) at TechCrunch Disrupt SF 2017

Founded in 2017, Atrium promised to merge software with human lawyers to provide quicker and cheaper legal services. Its technology can help automatically generate fundraising contracts, hiring offers, and cap tables for startups while using machine learning to recommend procedures and clauses based on anonymized data from its clients. It also serves as a Dropbox for legal, organizing all of a startup’s documents to ensure everything’s properly signed and teams are working off the latest versions without digging through email.

The $500 per month Atrium membership offered this technology plus limited access to an in-house startup lawyer for consultation, plus access to guide books and events. Clients could pay extra if they needed special help such as with finalizing an acquisition deal, or access to its Fundraising Concierge service for aid with developing a pitch and lining up investor meetings.

Kan tells me Atrium still has some in-house lawyers on staff, who will help it honor all its existing membership contracts and power its new emphasis on advising services. He wouldn’t say if Atrium is paid any equity for advising, or just cash. The membership plan may change for future clients so lawyer services are provided through its professional network instead.

“What we noticed was that Atrium has done a really good job of building a brand with startups. Often what they wanted from attorneys was…advice on how to set my company up, how to set my sales and marketing team up, how to get great terms in my fundraising process” so Atrium is pursuing advising, Kan tells me. “As we sat down to look at what’s working and what’s not working, our focus has been to help founders with their super-hero story, connect them with the right providers and advisors, and then helping quarterback everything you need with our in-house specialists.”

LawSites first reported Saturday that Atrium was laying off in-house lawyers. A source says that Atrium’s lawyers only found out a week ago about the changes, and they’ve been trying to pitch Atrium clients on working with them when they leave. One Atrium client said they weren’t surprised by the changes since they got so much legal advice for just $500 per month, which they suspected meant Atrium was losing money on the lawyers’ time since it was so much less expensive than competitors. They also said these cheap legal services rather than the software platform were the main draw of Atrium, and they’re unsure if the tech on its own is valuable enough.

One concern is Atrium might not learn as quickly about what services to translate into software if it doesn’t have as many lawyers in-house. But Kan believes third-party lawyers might be more clear and direct about what they need from legal technology. “I feel like having a true market for the software you’re building is better than having an internal market,” he says. “We get feedback from the outside firms we work with. I think in some ways that’s the most valuable feedback. I think there’s a lot of false signals that can happen when you’re both the employer and the supplier.”

It was critical for Atrium to correct course before getting any bigger, given the fundraising problems hitting late-stage startups with poor economics in the wake of the WeWork debacle and SoftBank’s troubles. Atrium had raised a $10.5 million Series A in 2017 led by General Catalyst alongside Kleiner, Founders Fund, Initialized, and Kindred Ventures. Then in September 2018 it scored a huge $65 million Series B led by Andreessen Horowitz.

Raising even bigger rounds might have been impossible if Atrium was offering consultations with lawyers at far below market rate. Now it might be in a better position to attract funding. But the question is whether clients will stick with Atrium if they get less access to a lawyer for the same price, and whether the collaboration platform is useful enough for outside law firms to pay for.

Kan had gone through tough pivots in the past. He had strapped a camera to his head to create content for his livestreaming startup Justin.tv, but wisely recentered on the 3% of users letting people watch them play video games. Justin.tv became Twitch and eventually sold to Amazon for $970 million. His on-demand personal assistant startup Exec had to switch to just cleaning in 2013 before shutting down due to rotten economics.

Rather than deny the inevitable and wait until the last minute, with Atrium Kan tried to make the hard decision early.

Instagram’s Boomerang evolves with SloMo, Echo, & Duo effects

Nearly five years after launching, Instagram’s back-and-forth video loop maker Boomerang is finally getting a big update. Users around the globe can now add SloMo, “Echo” blurring, and “Duo” rapid rewind special effects to their Boomerangs, as well as trim their length. This is the biggest creative upgrade yet for one of mobile’s most popular content creation tools.

The effects could help keep Instagram interesting. After so many years of Boomerangs, many viewers simply skip past them in Stories after the first loop since they’re so consistent. The extra visual flair of the new effects could keep people’s attention for a few more seconds and unlock new forms of comedy. That’s critical as Instagram tries to compete with TikTok, which has tons of special effects that have spawned their own meme formats.

“Starting today, people on Instagram will be able to share new SloMo, Echo and Duo Boomerang modes on Instagram,” a Facebook company spokesperson tells TechCrunch. “Your Instagram camera gives you ways to express yourself and easily share what you’re doing, thinking or feeling with your friends. Boomerang is one of the most beloved camera formats and we’re excited to expand the creative ways that you can use Boomerang to turn everyday moments into something fun and unexpected.”

The new Boomerang tools can be found by swiping right on Instagram to open the Stories composer, and then swiping left at the bottom of the screen’s shutter selector. After shooting a Boomerang, an infinity symbol button atop the screen reveals the alternate effects and video trimmer.

Typically, Boomerang captures one second of silent video, which is then played forward and in reverse three times to create a six-second loop that can be shared or downloaded as a video. Here are the new effects you can add, plus how Instagram described them to me in a statement:

  • SloMo – Reduces Boomerangs to half-speed so they play for two seconds in each direction instead of one. “Slows down your Boomerang to capture each detail.”
  • Echo – Adds a motion blur effect so a translucent trail appears behind anything moving, almost like you’re drunk or tripping. “Creates a double vision effect.”
  • Duo – Rapidly rewinds the clip to the beginning with a glitchy, digitized look. “Both speeds up and slows down your Boomerang, adding a texturized effect.”
  • Trimming – Shorten your Boomerang with similar controls to iPhone’s camera roll or the Instagram feed video composer. “Edit the length of your Boomerang, and when it starts or ends.”

The effects aren’t entirely original. Snapchat has offered slow-motion and fast-forward video effects since just days after the original launch of Boomerang back in 2015. TikTok meanwhile provides several motion blur filters and glitchy digital transitions. But since these are all available with traditional video, unlike on Instagram where they’re confined to Boomerangs, there’s more creative flexibility to use the effects to hide cuts between takes or play with people’s voices.

Hopefully we’ll see these features brought over to Instagram’s main Stories and video composers. Video trimming would be especially helpful since a boring start to a Story can quickly lead viewers to skip it.

Instagram has had years of domination in the social video space. But with Snapchat finally growing again and TikTok becoming a global phenomenon, Instagram must once again fight to maintain its superiority. Now approaching 10 years old, it’s at risk of becoming stale if it can’t keep giving people ways to make hastily shot phone content compelling.

Zuckerberg ditches annual challenges, but needs cynics to fix 2030

Mark Zuckerberg won’t be spending 2020 focused on wearing ties, learning Mandarin, or just fixing Facebook. “Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look like in 2030,” he wrote today on Facebook. As you might have guessed, though, Zuckerberg’s vision for an improved planet involves a lot more of Facebook’s family of apps.

His biggest proclamations in today’s note include:

  • AR – Phones will remain the primary computing platform for most of the decade, but augmented reality could get devices out from between us so we can be present together — Facebook is building AR glasses
  • VR – Better virtual reality technology could address the housing crisis by letting people work from anywhere — Facebook is building Oculus
  • Privacy – The internet has created a global community where people find it hard to establish themselves as unique, so smaller online groups could make people feel special again – Facebook is building more private groups and messaging options
  • Regulation – That the big questions facing technology are too thorny for private companies to address by themselves, and governments must step in around elections, content moderation, data portability, and privacy — Facebook is trying to self-regulate on these and everywhere else to deter overly onerous lawmaking


These are all reasonable predictions and suggestions. However, Zuckerberg’s post does little to address how the broadening of Facebook’s services in the 2010s also contributed to a lot of the problems he presents.

  • Isolation – Constant passive feed scrolling on Facebook and Instagram has created a way to seem like you’re being social without having true back-and-forth interaction with friends
  • Gentrification – Facebook’s shuttle-riding employees have driven up rents in cities around the world, especially the Bay Area
  • Envy – Facebook’s algorithms can make anyone without a glamorous, Instagram-worthy life look less important, while hackers can steal accounts and its moderation systems can accidentally suspend profiles with little recourse for most users
  • Negligence – The growth-first mentality led Facebook’s policies and safety to lag behind its impact, creating the kind of democracy, content, anti-competition, and privacy questions it’s now asking the government to answer for it

Noticeably absent from Zuckerberg’s post are explicit mentions of some of Facebook’s more controversial products and initiatives. He writes about “decentralizing opportunity” by giving small businesses commerce tools, but never mentions cryptocurrency, blockchain, or Libra directly. Instead he seems to suggest that Instagram storefronts, Messenger customer support, and WhatsApp remittance might be sufficient. He also largely leaves out Portal, Facebook’s smart screen that could help distant families stay closer, but that some see as a surveillance and data collection tool.

I’m glad Zuckerberg is taking his role as a public figure and the steward of one of humanity’s fundamental utilities more seriously. His willingness to even think about some of these long-term issues instead of just quarterly profits is important. Optimism is necessary to create what doesn’t exist.

Still, if Zuckerberg wants 2030 to look better for the world, and for the world to look more kindly on Facebook, he may need to hire more skeptics and cynics that see a dystopic future instead. Their foresight on where societal problems could arise from Facebook’s products could help temper Zuckerberg’s team of idealists to create a company that balances the potential of the future with the risks to the present.

Every new year of the last decade I set a personal challenge. My goal was to grow in new ways outside my day-to-day work…

Posted by Mark Zuckerberg on Thursday, January 9, 2020

Twitter’s new reply blockers could let Trump hide critics

What if politicians could only display Twitter replies from their supporters while stopping everyone else from adding their analysis to the conversation? That’s the risk of Twitter’s upcoming Conversation Participants tool, which it’s about to start testing, that lets you choose whether you want replies from everyone, only those you follow or mention, or no one.

For most, the reply limiter could help repel trolls and harassment. Unfortunately, it still puts the burden of safety on the victims rather than the villains. Instead of rooting out abusers, Twitter wants us to retreat and wall off our tweets from everyone we don’t know. That could reduce the spontaneous yet civil reply chains between strangers that are part of what makes Twitter so powerful.

But in the hands of politicians hoping to avoid scrutiny, the tools could make it appear that their tweets and policies are uniformly supported. By only allowing their sycophants to add replies below their posts, anyone reading along will be exposed to a uniformity of opinion that clashes with Twitter’s position as a marketplace of ideas.

We’ve reached out to Twitter for comment on this issue and whether anyone such as politicians would be prevented from using the new reply-limiting tools. Twitter plans to test the reply-selection tool in Q1 and make modifications if necessary before rolling it out.

Here’s how the new Conversation Participants feature works, according to the preview shared by Twitter’s Suzanne Xie at CES today, though it could change during testing. When users go to tweet, they’ll have the option of selecting who can reply, unlike now when everyone can leave replies but authors can hide certain ones that viewers can opt to reveal. Conversation Participants offers four options:

  • Global: Replies from anyone

  • Group: Replies from those you follow or mention in this tweet

  • Panel: Replies from only those you mention in this tweet

  • Statement: No replies allowed

Now imagine President Trump opts to make all of his tweets Group-only. Only those who support him and he therefore follows — like his sons, Fox News’ Sean Hannity and his campaign team — could reply. Gone would be the reels of critics fact-checking his statements or arguing against his policies. His tweets would be safeguarded from reproach, establishing an echo chamber filter bubble for his acolytes.

It’s true that some of these responses from the public might constitute abuse or harassment. But those should be dealt with specifically through strong policy and consistent enforcement of adequate punishments when rules are broken. By instead focusing on stopping replies from huge swaths of the community, the secondary effects have the potential to prop up politicians that consistently lie and undam the flow of misinformation.

There’s also the practical matter that this won’t stop abuse, it will merely move it. Civil discussion will be harder to find for the rest of the public, but harassers will still reach their targets. Users blocked from replying to specific tweets can just tweet directly at the author. They can also continue to mention the author separately or screenshot their tweets and then discuss them.

It’s possible that U.S. law prevents politicians from discriminating against citizens with different viewpoints by restricting their access to the politician’s comments on a public forum. Judges ruled this makes it illegal for Trump to block people on social media. But because anyone could still see the tweets and reply to the author separately under this new tool, and not being followed by the author likely doesn’t count as discrimination the way blocking does, use of the Conversation Participants tool could be permissible. Someone could sue to push the issue to the courts, though, and judges might be wise to deem this unconstitutional.

Again, this is why Twitter needs to refocus on cleaning up its community rather than only letting people build tiny, temporary shelters from the abuse. It could consider blocking replies and mentions from brand new accounts without sufficient engagement or a linked phone number, as I suggested in 2017. It could also create a new mid-point punishment of a “time-out” from sending replies for harassment that it (sometimes questionably) deems below the threshold of an account suspension.

The combination of Twitter’s decade of weakness in the face of trolls with a new political landscape of normalized misinformation threatens to overwhelm its attempts to get a handle on safety.


Finding the right reporter to cover your startup

Pitch the wrong reporter or publication, and your story won’t see the light of day.

Before you start seeking press, you’ll need to look for reporters who have reach, respect and expertise when you choose who to talk to. You’ll also need to be prepared to accept the truth about your business, even if it hurts. It’s critical that you find a writer who’s a good fit for the business you’re building and the audience you’re seeking.

If you don’t use a strategic approach when reaching out to journalists, you’ll get few responses, fewer meetings, and articles that either misrepresent you, shortchange you, or blow up in your face. The goal isn’t just to secure positive coverage, because no one will believe it; startups are tough. There are challenges and setbacks and scary looming questions. But an honest article from a respected voice with a big enough audience can legitimize a business as it tries to turn vision into impact.

Here we’ll discuss how to find the publication and reporter who understands you and can tell the story that aligns with your objectives. In part one of this series, we detailed why you should (or shouldn’t) want press coverage and how to know what’s newsworthy enough to pitch.

In future ExtraCrunch posts, I’ll explore how to hire PR help, formulate a pitch, deliver it to reporters, prepare for interviews and conduct an announcement. If you have more questions or ideas for ExtraCrunch posts, feel free to reach out to me via Twitter or elsewhere.

Why should you believe me? I’m editor-at-large for TechCrunch, where I’ve written 4,000 articles about early-stage startups and tech giants. For 10 years, I’ve reviewed startup pitches via email and Twitter, at demo days for accelerators like Y Combinator and on stage as a judge of startup competitions. From warm introductions to cold calls, I’ve seen what gets reporters’ attention and why stories become enduring narratives supporting companies as they grow.

Deciding which publications to target

Which publications do you currently read and respect?

Starting here ensures that you’re approaching PR from a place of knowledge with personal context rather than going by what someone else tells you. But you also have to consider which publications appeal in that way to your target demographic. For example, if you’re aiming to reach teens, parents, or Chief Information Officers, you’ll have very different target publications.

If you appeal to a niche audience aligned with a specific publication, you can definitely score some leads and installs, priming the pump so that when users hear about you again, they already have a positive association with your brand. Coverage can also improve your SEO, helping you get discovered when people search for keywords related to your business. If you’re looking for user growth or SEO, though, be sure to work with a publication that links to the websites and apps it writes about, as many don’t. And if you’re hoping for ‘the servers are on fire we’ve got so much traffic’ attention, you need to first build network effects and viral loops directly into your product.

Once you identify a realistic objective for gaining press coverage, you can figure out which reporters and outlets will best help you achieve your goals.

Typically, you’ll aim to work with more prestigious publications and writers first, as they can inspire other outlets to write up follow-on coverage. It rarely works the other way around, since top publishers want to be seen as first to a story and forging trends rather than following them with late coverage. These outlets often have greater reach in terms of home page traffic, social following, SEO and shareability.

The exception to this strategy: if there’s a specific writer at a less-prestigious publisher who’s renowned as the expert in your space whose word has more weight, or if that publication better aligns with your overall goals. For example, you might want to work with a transportation expert like Kirsten Korosec if you’re an electric car company, or a publication focused on startups like TechCrunch if you’re trying to stoke fundraising. If you’re a more general mainstream consumer business or are seeking maximum growth, you might instead choose a popular national newspaper with a big circulation.

Who should tell your story?

After you’ve set goals and have an idea regarding the kind of publication or journalist you want to work with, it’s time to build a ranked list of specific reporters. Here, expertise is key.

ByteDance & TikTok have secretly built a deepfakes maker

TikTok parent company ByteDance has built technology to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.

Users scan themselves, pick a video, and have their face overlaid on the body of someone in the clip with ByteDance’s new Face Swap feature

The deepfakes feature, if launched in Douyin and TikTok, could create a more controlled environment where face-swapping technology plus a limited selection of source videos can be used for fun instead of spreading misinformation. It might also raise awareness of the technology so more people are aware that they shouldn’t believe everything they see online. But it’s also likely to heighten fears about what ByteDance could do with such sensitive biometric data — similar to what’s used to set up FaceID on iPhones.

Several other tech companies have recently tried to consumerize watered-down versions of deepfakes. The app Morphin lets you overlay a computerized rendering of your face on actors in GIFs. Snapchat offered a FaceSwap option for years that would switch the visages of two people in frame, or replace one on camera with one from your camera roll, and there are standalone apps that do that too like Face Swap Live. Then last month, TechCrunch spotted Snapchat’s new Cameos for inserting a real selfie into video clips it provides, though the results aren’t meant to look confusingly realistic.

Most problematic has been Chinese deepfakes app Zao, which uses artificial intelligence to blend one person’s face into another’s body as they move and synchronize their expressions. Zao went viral in September despite privacy and security concerns about how users’ facial scans might be abused. Zao was previously blocked by China’s WeChat for presenting “security risks”. [Correction: While “Zao” is mentioned in the discovered code, it refers to the general concept rather than a partnership between ByteDance and Zao.]

But ByteDance could bring convincingly life-like deepfakes to TikTok and Douyin, two of the world’s most popular apps with over 1.5 billion downloads.

Zao in the Chinese iOS App Store

Hidden Inside TikTok and Douyin

TechCrunch received a tip about the news from Israeli in-app market research startup Watchful.ai. The company had discovered code for the deepfakes feature in the latest versions of TikTok’s and Douyin’s Android apps. Watchful.ai was able to activate the code in Douyin to generate screenshots of the feature, though it’s not currently available to the public.

First, users scan their face into TikTok. This also serves as an identity check to make sure you’re only submitting your own face so you can’t make unconsented deepfakes of anyone else using an existing photo or a single shot of their face. By asking you to blink, nod, and open and close your mouth while in focus and proper lighting, Douyin can ensure you’re a live human and create a manipulable scan of your face that it can stretch and move to express different emotions or fill different scenes.

You’ll then be able to pick from videos ByteDance claims to have the rights to use, and it will replace the face of whoever’s in the clip with your own. You can then share or download the deepfake video, though it will include an overlaid watermark the company claims will help distinguish the content as not being real. I received confidential access to videos made by Watchful using the feature, and the face swapping is quite seamless. The motion tracking, expressions, and color blending all look very convincing.

Watchful also discovered unpublished updates to TikTok and Douyin’s terms of service that cover privacy and usage of the deepfakes feature. Inside the US version of TikTok’s Android app, English text in the code explains the feature and some of its terms of use:

Your facial pattern will be used for this feature. Read the Drama Face Terms of Use and Privacy Policy for more details. Make sure you’ve read and agree to the Terms of Use and Privacy Policy before continuing. 1. To make this feature secure for everyone, real identity verification is required to make sure users themselves are using this feature with their own faces. For this reason, uploaded photos can’t be used; 2. Your facial pattern will only be used to generate face-change videos that are only visible to you before you post it. To better protect your personal information, identity verification is required if you use this feature later. 3. This feature complies with Internet Personal Information Protection Regulations for Minors. Underage users won’t be able to access this feature. 4. All video elements related to this feature provided by Douyin have acquired copyright authorization.


A longer terms of use and privacy policy was also found in Chinese within Douyin. Translated into English, some highlights from the text include:

  • “The ‘face-changing’ effect presented by this function is a fictional image generated by the superimposition of our photos based on your photos. In order to show that the original work has been modified and the video generated using this function is not a real video, we will mark the video generated using this function. Do not erase the mark in any way.”

  • “The information collected during the aforementioned detection process and using your photos to generate face-changing videos is only used for live detection and matching during face-changing. It will not be used for other purposes… And matches are deleted immediately and your facial features are not stored.”

  • “When you use this function, you can only use the materials provided by us, you cannot upload the materials yourself. The materials we provide have been authorized by the copyright owner”.

  • “According to the ‘Children’s Internet Personal Information Protection Regulations’ and the relevant provisions of laws and regulations, in order to protect the personal information of children / youths, this function restricts the use of minors”.

We reached out to TikTok and Douyin for comment regarding the deepfakes feature, when it might launch, how the privacy of biometric scans are protected, and the age limit. However, TikTok declined to answer those questions. Instead a spokesperson insisted that “after checking with the teams I can confirm this is definitely not a function in TikTok, nor do we have any intention of introducing it. I think what you may be looking at is something slated for Douyin – your email includes screenshots that would be from Douyin, and a privacy policy that mentions Douyin. That said, we don’t work on Douyin here at TikTok.” They later told TechCrunch that “The inactive code fragments are being removed to eliminate any confusion”, which implicitly confirms that Face Swap code was found in TikTok.

A Douyin spokesperson tells TechCrunch “Douyin follows the laws and regulations of the jurisdictions in which it operates, which is China”. They denied that the Face Swap terms of service appear in TikTok despite TechCrunch reviewing code from the app showing those terms of service and the feature’s functionality.

This is suspicious, and doesn’t explain why code for the deepfakes feature and special terms of service in English for the feature appear in TikTok, and not just Douyin where the feature can already be activated and a longer terms of service was spotted. TikTok’s US entity has previously denied complying with censorship requests from the Chinese government, in contradiction to sources who told the Washington Post that TikTok did censor some political and sexual content at China’s behest.

Consumerizing deepfakes

It’s possible that the deepfakes Face Swap feature never officially launches in China or the US. But it’s fully functional, even if unreleased, and demonstrates ByteDance’s willingness to embrace the controversial technology despite its reputation for misinformation and non-consensual pornography. At least it’s restricting the use of the feature by minors, only letting you face-swap yourself, and preventing users from uploading their own source videos. That avoids it being used to create dangerous misinformation videos like the slowed-down one that made House Speaker Nancy Pelosi seem drunk, or clips of people saying things as if they were President Trump.

“It’s very rare to see a major social networking app restrict a new, advanced feature to their users 18 and over only” Watchful.ai co-founder and CEO Itay Kahana tells TechCrunch. “These deepfake apps might seem like fun on the surface, but they should not be allowed to become trojan horses, compromising IP rights and personal data, especially personal data from minors who are overwhelmingly the heaviest users of TikTok to date.”

TikTok has already been banned by the US Navy, and ByteDance’s acquisition and merger of Musical.ly into TikTok is under investigation by the Committee on Foreign Investment in the United States. Deepfake fears could further heighten scrutiny.

With the proper safeguards, though, face-changing technology could usher in a new era of user generated content where the creator is always at the center of the action. It’s all part of a new trend of personalized media that could be big in 2020. Social media has evolved from selfies to Bitmoji to Animoji to Cameos and now consumerized deepfakes. When there are infinite apps and videos and notifications to distract us, making us the star could be the best way to hold our attention.