Yo Facebook & Instagram, stop showing Stories reruns

If I watch a Story cross-posted from Instagram to Facebook on either of the apps, it should appear as “watched” at the back of the Stories row on the other app. Why waste my time showing me Stories I already saw?

It’s been over two years since Instagram launched cross-posting of Stories to Facebook. Countless hours of each feature’s 500 million daily users have been squandered on repeats. Facebook and Messenger already synchronize the watched/unwatched state of Stories. It’s long past time for that syncing to extend to Instagram.

I asked Facebook and Instagram whether they had plans for this. A company spokesperson told me it built cross-posting to make it easier to share with people’s different audiences on Facebook and Instagram, and that it’s continuing to explore ways to simplify and improve Stories. But the spokesperson gave no indication that Facebook realizes how annoying this is or that a solution is in the works.

The end result if this gets fixed? Users would spend more time watching new content, more creators would feel seen, and Facebook’s choice to jam Stories into all its apps would feel less redundant and invasive. If I send a reply to a Story on one app, I’m not going to send it again, or something different, when I see the same Story on the other app a few minutes or hours later. Repeated content leads to more passive viewing and less interactive communication with friends, despite Facebook and Instagram stressing that it’s this zombie consumption that’s unhealthy.

The only possible downside to changing this could be fewer Stories ad impressions, if secondary viewings of people’s best friends’ Stories keep them watching more than new content would. But prioritizing money over the user experience is exactly what Mark Zuckerberg has emphasized is not Facebook’s strategy.

There’s no need to belabor the point any further. Give us back our time. Stop the reruns.

India likely to force Facebook, WhatsApp to identify the originator of messages

New Delhi is inching closer to recommending regulations that would require social media companies and instant messaging app providers operating in the nation to help law enforcement agencies identify users who have posted content — or sent messages — it deems questionable, two people familiar with the matter told TechCrunch.

India will submit the suggested change to the local intermediary liability rules to the nation’s apex court later this month. The suggested change, the conditions of which may be altered before it is finalized, currently says that law enforcement agencies will have to produce a court order before making such requests, sources who have been briefed on the matter said.

But regardless, asking companies to comply with such a requirement would be “devastating” for international social media companies, a New Delhi-based policy advocate told TechCrunch on the condition of anonymity.

WhatsApp executives have insisted in the past that they would have to compromise end-to-end encryption of every user to meet such a demand — a move they are willing to fight over.

The government did not respond to a request for comment Tuesday evening. Sources spoke on the condition of anonymity as they are not authorized to speak to the media.

Scores of companies and security experts have urged New Delhi in recent months to be transparent about the changes it planned to make to the local intermediary liability guidelines.

The Indian government proposed (PDF) a series of changes to its intermediary liability rules in late December 2018 that, if enforced, would require significant changes to millions of services operated by everyone from small and medium businesses to large corporate giants such as Facebook and Google.

Among the proposed rules, the government said that intermediaries — which it defines as those services that facilitate communication between two or more users and have five million or more users in India — will have to, among other things, be able to trace the originator of questionable content to avoid assuming full liability for their users’ actions.

At the heart of the changes lie the “safe harbor” laws that technology companies have so far enjoyed in many nations. The laws, currently applicable in the U.S. under the Communications Decency Act and in India under its 2000 Information Technology Act, say that tech platforms won’t be held liable for the things their users share on the platform.

Many stakeholders have said in recent months that the Indian government was keeping them in the dark by not sharing the changes it was making to the intermediary liability guidelines.

Nobody outside of a small government circle has seen the proposed changes since January of last year, said Shashi Tharoor, one of India’s suavest and most influential opposition politicians, in a recent interview with TechCrunch.

Software Freedom Law Centre, a New Delhi-based digital advocacy organization, recommended last week that the government consider removing the traceability requirement from the proposed changes to the law, as it was “technically impossible to satisfy for many online intermediaries.”

“No country is demanding such a broad level of traceability as envisaged by the Draft Intermediaries Guidelines,” it added.

TechCrunch could not ascertain other changes the government is recommending.

Facebook speeds up AI training by culling the weak

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots move, you can’t expect them to learn those lessons in the real world in any reasonable amount of time. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Eric Wijmans, the former a professor and the latter a PhD student at Georgia Tech, found a way to speed up this process by an order of magnitude or more. And the result is an AI system that can navigate a 3D environment from a starting point to goal with a 99.9 percent success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one bedroom apartment, it’s much easier to do that than navigate a ten bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory GIF shows how, when one agent gets stuck, it delays the others from learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
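To make the mechanism concrete, here is a hypothetical, heavily simplified sketch of the straggler-preemption idea in Python. It is not Facebook’s DD-PPO code: the Worker class, step counts and preemption threshold are illustrative assumptions, and the real system runs workers in parallel and broadcasts progress rather than stepping them in a loop.

```python
# Illustrative sketch only -- not Facebook's DD-PPO implementation.
# Slow workers are cut off once most of their peers have finished,
# and whatever partial experience they gathered still joins the batch.
import random

NUM_WORKERS = 8
ROLLOUT_STEPS = 128          # target environment steps per worker per batch
PREEMPT_THRESHOLD = 0.6      # preempt laggards once 60% of workers are done


class Worker:
    """Stand-in for a simulator worker; 'speed' models easy vs. hard scenes."""
    def __init__(self, speed):
        self.speed = speed
        self.steps = 0
        self.rollout = []

    def step(self):
        # Pretend to step the environment and record one transition.
        self.steps += self.speed
        self.rollout.append(("obs", "action", "reward"))


def collect_batch(workers):
    finished, experience, active = 0, [], list(workers)
    while active:
        for w in list(active):
            if w.steps >= ROLLOUT_STEPS:
                finished += 1
                experience.extend(w.rollout)
                active.remove(w)
            elif finished / len(workers) >= PREEMPT_THRESHOLD:
                # Laggard gets cut off early; its partial rollout still counts.
                experience.extend(w.rollout)
                active.remove(w)
            else:
                w.step()
    return experience  # would feed the synchronized policy (PPO) update


batch = collect_batch([Worker(speed=random.choice([1, 4])) for _ in range(NUM_WORKERS)])
print(f"collected {len(batch)} transitions this round")
```

The key design point, as the researchers describe it, is that the synchronization barrier is never held hostage by the slowest simulation, while no collected experience is thrown away.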

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization, or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly with the computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. Standard algorithms, on the other hand, scaled poorly: 10x or 100x the computing power produced only a small boost, because of how these sophisticated simulators hamstring themselves.

These efficient methods let the Facebook researchers produce agents that could solve a point-to-point navigation task in a virtual environment within their allotted time with 99.9 percent reliability. They even demonstrated robustness to mistakes, finding a way to quickly recognize they’d taken a wrong turn and go back the other way.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western style houses tend to be laid out,” he said. Just as you wouldn’t expect the kitchen to open directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent had a virtual camera it navigated with that provided it with ordinary and depth imagery, but also an infallible coordinate system to tell it where it had traveled and a compass that always pointed toward the goal. If only it were always so easy! Until this experiment, the success rate was considerably lower even with those resources and far more training time.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large-scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

So now Habitat lets users add objects to rooms, apply forces to those objects, check for collisions, and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.

Fable Studio founder Edward Saatchi on designing virtual beings

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 3 of 3: designing virtual companions

In this discussion, Fable CEO Edward Saatchi addresses the technical and artistic dynamics of virtual companions: AIs created to establish one-to-one relationships with consumers. After mobile, Saatchi says he believes such virtual beings will act as the next paradigm for human-computer interaction.

TechCrunch’s Top 10 investigative reports from 2019

Facebook spying on teens, Twitter accounts hijacked by terrorists, and sexual abuse imagery found on Bing and Giphy were amongst the ugly truths revealed by TechCrunch’s investigative reporting in 2019. The tech industry needs more watchdogs than ever as its size magnifies the impact of safety failures and abuses of power. Whether through malice, naivety, or greed, there was plenty of wrongdoing to sniff out.

Led by our security expert Zack Whittaker, TechCrunch undertook more long-form investigations this year to tackle these growing issues. Our coverage of fundraises, product launches, and glamorous exits tells only half the story. As perhaps the biggest and longest-running news outlet dedicated to startups (and the giants they become), we’re responsible for keeping these companies honest and pushing for a more ethical and transparent approach to technology.

If you have a tip potentially worthy of an investigation, contact TechCrunch at [email protected] or by using our anonymous tip line’s form.

Image: Bryce Durbin/TechCrunch

Here are our top 10 investigations from 2019, and their impact:

Facebook pays teens to spy on their data

Josh Constine’s landmark investigation discovered that Facebook was paying teens and adults $20 in gift cards per month to install a VPN that sent Facebook all their sensitive mobile data for market research purposes. The laundry list of problems with Facebook Research included not informing the 187,000 users that the data would go to Facebook until after they signed up for “Project Atlas,” not obtaining proper parental consent for over 4,300 minors, and threatening legal action if a user spoke publicly about the program. Facebook also abused Apple’s enterprise certificate program, designed only for distributing employee-only apps within companies, to avoid the App Store review process.

The fallout was enormous. Lawmakers wrote angry letters to Facebook. TechCrunch soon discovered a similar market research program from Google called Screenwise Meter, which the company promptly shut down. Apple punished both Google and Facebook by shutting down all their employee-only apps for a day, causing office disruptions since Facebookers couldn’t access their shuttle schedule or lunch menu. Facebook tried to claim the program was above board, but finally succumbed to the backlash and shut down Facebook Research and all paid data collection programs for users under 18. Most importantly, the investigation led Facebook to shut down its Onavo app, which offered a VPN but in reality sucked in tons of mobile usage data to figure out which competitors to copy. Onavo helped Facebook realize it should acquire messaging rival WhatsApp for $19 billion, and it’s now at the center of antitrust investigations into the company. TechCrunch’s reporting weakened Facebook’s exploitative market surveillance, pitted tech giants against each other, and raised the bar for transparency and ethics in data collection.

Protecting The WannaCry Kill Switch

Zack Whittaker’s profile of the heroes who helped save the internet from the fast-spreading WannaCry ransomware reveals the precarious nature of cybersecurity. The gripping tale documenting Marcus Hutchins’ benevolent work establishing the WannaCry kill switch may have contributed to a judge’s decision to sentence him to just one year of supervised release instead of 10 years in prison for an unrelated charge of creating malware as a teenager.

The dangers of Elon Musk’s tunnel

TechCrunch contributor Mark Harris’ investigation discovered inadequate emergency exits and more problems with Elon Musk’s plan for his Boring Company to build a Washington D.C.-to-Baltimore tunnel. Consulting fire safety and tunnel engineering experts, Harris built a strong case for why state and local governments should be suspicious of technology disrupters cutting corners in public infrastructure.

Bing image search is full of child abuse

Josh Constine’s investigation exposed how Bing’s image search not only showed child sexual abuse imagery but also suggested search terms to innocent users that would surface this illegal material. A tip led Constine to commission a report by anti-abuse startup AntiToxin (now L1ght), forcing Microsoft to commit to UK regulators that it would make significant changes to stop this from happening. However, a follow-up investigation by the New York Times citing TechCrunch’s report revealed Bing had made little progress.

Expelled despite exculpatory data

Zack Whittaker’s investigation surfaced contradictory evidence in a case of alleged grade tampering by Tufts student Tiffany Filler, who was questionably expelled. The article casts significant doubt on the accusations, which could help the student get a fair shot at future academic or professional endeavors.

Burned by an educational laptop

Natasha Lomas chronicled troubles at educational computer hardware startup pi-top, including a device malfunction that injured a U.S. student. An internal email revealed the student had suffered a “very nasty finger burn” from a pi-top 3 laptop designed to be disassembled. Reliability issues swelled and layoffs ensued. The report highlights how startups operating in the physical world, especially around sensitive populations like students, must make safety a top priority.

Giphy fails to block child abuse imagery

Sarah Perez and Zack Whittaker teamed up with child protection startup L1ght to expose Giphy’s negligence in blocking sexual abuse imagery. The report revealed how criminals used the site to share illegal imagery, which was then accidentally indexed by search engines. TechCrunch’s investigation demonstrated that it’s not just public tech giants who need to be more vigilant about their content.

Airbnb’s weakness on anti-discrimination

Megan Rose Dickey explored a botched case of discrimination policy enforcement by Airbnb, in which a blind and deaf traveler’s reservation was cancelled because they had a guide dog. Airbnb tried to merely “educate” the host who was accused of discrimination instead of levying any real punishment, until Dickey’s reporting pushed it to suspend the host for a month. The investigation reveals the lengths Airbnb goes to in order to protect its money-generating hosts, and how policy problems could mar its IPO.

Expired emails let terrorists tweet propaganda

Zack Whittaker discovered that Islamic State propaganda was being spread through hijacked Twitter accounts. His investigation revealed that if the email address associated with a Twitter account expired, attackers could re-register it to gain access and then receive password resets sent from Twitter. The article revealed the savvy but not necessarily sophisticated ways terrorist groups are exploiting big tech’s security shortcomings, and identified a dangerous loophole for all sites to close.

Porn & gambling apps slip past Apple

Josh Constine found dozens of pornography and real-money gambling apps had broken Apple’s rules but avoided App Store review by abusing its enterprise certificate program — many based in China. The report revealed the weak and easily defrauded requirements to receive an enterprise certificate. Seven months later, Apple revealed a spike in porn and gambling app takedown requests from China. The investigation could push Apple to tighten its enterprise certificate policies, and proved the company has plenty of its own problems to handle despite CEO Tim Cook’s frequent jabs at the policies of other tech giants.

Bonus: HQ Trivia employees fired for trying to remove CEO

This Game Of Thrones-worthy tale was too intriguing to leave out, even if the impact was more of a warning to all startup executives. Josh Constine’s look inside gaming startup HQ Trivia revealed a saga of employee revolt in response to its CEO’s ineptitude and inaction as the company nose-dived. Employees who organized a petition to the board to remove the CEO were fired, leading to further talent departures and stagnation. The investigation served to remind startup executives that they are responsible to their employees, who can exert power through collective action or their exodus.

If you have a tip for Josh Constine, you can reach him via encrypted Signal or text at (585) 750-5674, joshc at TechCrunch dot com, or through Twitter DMs.

LaunchDarkly CEO Edith Harbaugh explains why her company raised another $54M

This week, LaunchDarkly announced that it has raised another $54 million. Led by Bessemer Venture Partners and backed by the company’s existing investors, it brings the company’s total funding up to $130 million.

For the unfamiliar, LaunchDarkly builds a platform that allows companies to easily roll out new features to only certain customers, providing a dashboard for things like “canary launches” (pushing new stuff to a small group of users to make sure nothing breaks) or launching a feature only in select countries or territories. By productizing an increasingly popular development concept (“feature flagging”) and making it easier to toggle new stuff across different platforms and languages, the company is quickly finding customers in companies that would rather not spend time rolling their own solutions.
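For readers unfamiliar with the concept, here is a minimal, hypothetical sketch of what feature flagging looks like in practice. This is not LaunchDarkly’s SDK or API; the flag structure and function names are invented for illustration, showing a percentage-based canary rollout combined with country targeting.

```python
# Illustrative feature-flag sketch -- not LaunchDarkly's actual SDK.
import hashlib


def in_canary(user_id: str, flag_key: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into 0-99 so canary membership is stable."""
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent


def flag_enabled(flag: dict, user: dict) -> bool:
    if not flag.get("on", False):
        return False                      # flag toggled off for everyone
    countries = flag.get("countries")
    if countries and user.get("country") not in countries:
        return False                      # outside the targeted territories
    return in_canary(user["id"], flag["key"], flag.get("rollout_percent", 100))


# Example: a new checkout flow, on for 5% of users in the US and Canada only.
new_checkout = {"key": "new-checkout", "on": True,
                "rollout_percent": 5, "countries": {"US", "CA"}}

for uid in ("user-1", "user-2", "user-3"):
    variant = "new" if flag_enabled(new_checkout, {"id": uid, "country": "US"}) else "old"
    print(uid, "->", variant, "checkout flow")
```

The hashing step matters for a canary launch: a given user always lands in the same bucket, so they don’t flip between the new and old experience on every request, and the rollout percentage can be dialed up without re-randomizing who sees what.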

I spoke with CEO and co-founder Edith Harbaugh, who filled me in on where the idea for LaunchDarkly came from, how their product is being embraced by product managers and marketing teams and the company’s plans to expand with offices around the world. Here’s our chat, edited lightly for brevity and clarity.

Instagram tests Direct Messaging on web where encryption fails

Instagram will finally let you chat from your web browser, but the launch contradicts Facebook’s plan for end-to-end encryption in all its messaging apps. Today Instagram began testing Direct Messages on the web for a small percentage of users around the globe, a year after TechCrunch reported it was testing web DMs.

When fully rolled out, Instagram tells us its website users will be able to see when they’ve received new DMs, view their whole inbox, start new message threads or group chats, send photos (but not capture them), double click to Like and share posts from their feed via Direct so they can gossip or blast friends with memes. You won’t be able to send videos, but can view non-disappearing ones. Instagram’s CEO Adam Mosseri tweeted that he hopes to “bring this to everyone soon” once the kinks are worked out.

Web DMs could help office workers, students and others stuck on a full-size computer all day or who don’t have room on their phone for another app to spend more time and stay better connected on Instagram. Direct is crucial to Instagram’s efforts to stay ahead of Snapchat, which has seen its Stories product mercilessly copied by Facebook but is still growing thanks to its rapid fire visual messaging feature that’s popular with teens.

But as Facebook’s former Chief Security Officer Alex Stamos tweeted, “This is fascinating, as it cuts directly against the announced goal of E2E encrypted compatibility between FB/IG/WA. Nobody has ever built a trustworthy web-based E2EE messenger, and I was expecting them to drop web support in FB Messenger. Right hand versus left?”

A year ago Facebook announced it planned to eventually unify Facebook Messenger, WhatsApp and Instagram Direct so users could chat with each other across apps. It also said it would extend end-to-end encryption from WhatsApp to include Instagram Direct and all of Facebook Messenger, though it could take years to complete. That security protocol means that only the sender and recipient would be able to view the contents of a message, while Facebook, governments and hackers wouldn’t know what was being shared.

Yet Stamos explains that historically, security researchers haven’t been able to safely store cryptographic secrets in JavaScript, which is how the Instagram website runs, though he admits this could be solved in the future. More problematic, he writes, is “the model by which code on the web is distributed, which is directly from the vendor in a customizable fashion. This means that inserting a backdoor for one specific user is much much easier than in the mobile app paradigm,” where attackers would have to compromise both Facebook/Instagram and either Apple or Google’s app stores.

“Fixing this problem is extremely hard and would require fundamental changes to how the WWW [world wide web] works,” says Stamos. At least we know Instagram has been preparing for today’s launch since at least February, when mobile researcher Jane Manchun Wong alerted us. We’ve asked Instagram for more details on how it plans to cover web DMs with end-to-end encryption or whether they’ll be exempt from the plan. [Update: An Instagram spokesperson tells me that, as with Instagram Direct on mobile, messages currently are not encrypted. The company is working on making its messaging products end-to-end encrypted, and it continues to consider ways to accomplish this.]

Critics have called the messaging unification a blatant attempt to stifle regulators and prevent Facebook, Instagram and WhatsApp from being broken up. Yet Facebook has stayed the course on the plan while weathering a $5 billion fine plus a slew of privacy and transparency changes mandated by an FTC settlement for its past offenses.

Personally, I’m excited, because it will make DMing sources via Instagram easier, and mean I spend less time opening my phone and potentially being distracted by other apps while working. Almost 10 years after Instagram’s launch and six years since adding Direct, the app seems to finally be embracing its position as a utility, not just entertainment.

At CES, companies slowly start to realize that privacy matters

Every year, Consumer Electronics Show attendees receive a branded backpack, but this year’s edition was special: made out of transparent plastic, the bag left its contents visible without the wearer needing to unzip it. It wasn’t just a fashion decision. Over the years, security has become more intense and cumbersome, but attendees with transparent backpacks didn’t have to open their bags when entering.

That cheap backpack is a metaphor for an ongoing debate — how many of us are willing to exchange privacy for convenience?

Privacy was on everyone’s mind at this year’s CES in Las Vegas, from CEOs to policymakers, PR agencies and the people in charge of programming the panels. For the first time in decades, Apple had a formal presence at the event; its senior director of global privacy, Jane Horvath, spoke on a privacy-focused panel alongside other privacy leaders.

Cross-border investments aren’t dead, they’re getting smarter