Oculus Quest and Rift S now shipping

Facebook-owned Oculus is shipping its latest VR headgear from today. Preorders for the PC-free Oculus Quest and the higher-end Oculus Rift S opened up three weeks ago.

In a launch blog Oculus touts the new hardware’s “all-in-one, fully immersive 6DOF VR” — writing: “We’re bringing the magic of presence to more people than ever before — and we’re doing it with the freedom of fully untethered movement”.

For a less varnished view on what it’s like to stick a face-computer on your head you can check out our reviews by clicking on the links below…

Oculus Quest

TC: “The headset may not be the most powerful, but it is doubtlessly the new flagship VR product from Facebook”

Oculus Rift S

TC: “It still doesn’t feel like a proper upgrade to a flagship headset that’s already three years old, but it is a more fine-tuned system that feels more evolved and dependable”

The Oculus blog contains no details on pre-order sales for the headsets — beyond a few fine-sounding words.

Meanwhile Facebook has, for months, been running native ads for Oculus via its eponymous and omnipresent social network — although there’s no explicit mention of the Oculus brand unless you click through to “learn more”.

Instead it’s pushing the generic notion of “all-in-one VR”, shrinking the Oculus brand stamp on the headset to an indecipherable micro-scribble.

Here’s one of Facebook’s ads that targeted me in Europe back in March:

For those wanting to partake of Facebook flavored face gaming (and/or immersive movie watching), the Oculus Quest and Rift S are available to buy via oculus.com and retail partners including Amazon, Best Buy, Newegg, Walmart, and GameStop in the US; Currys PC World, FNAC, MediaMarkt, and more in the EU and UK; and Amazon in Japan.

Just remember to keep your mouth shut.

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often do the same, or open new areas of inquiry, in the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen plenty of interesting papers in this vein.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
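As a toy illustration of that reward-driven loop (our own sketch, not Facebook's code; the gait names and reward numbers are made up), here is an agent that learns which leg-movement pattern carries it forward from nothing but a "move forward" reward:

```python
import random

# Toy sketch: a "walker" that learns, by trial and error, which of its
# candidate leg movements carries it forward. The only training signal
# is a reward proportional to forward progress.
random.seed(0)

ACTIONS = ["gait_a", "gait_b", "gait_c"]  # hypothetical movement patterns
TRUE_STEP = {"gait_a": -0.2, "gait_b": 0.1, "gait_c": 0.5}  # hidden progress per try

q = {a: 0.0 for a in ACTIONS}   # learned estimate of each gait's value
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

for _ in range(2000):
    # Explore occasionally; otherwise exploit the best-known gait.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Reward is forward progress plus a little real-world noise.
    reward = TRUE_STEP[action] + random.gauss(0, 0.05)
    q[action] += alpha * (reward - q[action])

best = max(q, key=q.get)
print(best)  # the agent settles on the gait with the largest forward step
```

The point is the shape of the loop, not the numbers: the agent is never told how to move its legs, only whether a try moved it forward, and the value estimates converge on the useful gait anyway.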

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
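One minimal way to picture that "curiosity" behavior (an illustrative sketch under our own assumptions, not the actual system): an agent keeps taking extra sensor readings only while its uncertainty about a quantity, here a target distance, stays above a budget.

```python
import random
import statistics

# Sketch: "curious" measurements happen only while the agent's
# uncertainty about the scene (distance to a target) is too high.
random.seed(1)

def noisy_distance_reading(true_distance=2.0, noise=0.3):
    """Stand-in for a sensor: the true distance plus measurement noise."""
    return true_distance + random.gauss(0, noise)

readings = [noisy_distance_reading() for _ in range(2)]  # a minimal first look
UNCERTAINTY_BUDGET = 0.05  # act once the estimate's variance drops below this

# Curiosity loop: keep looking while uncertain, then commit to acting.
while statistics.variance(readings) / len(readings) > UNCERTAINTY_BUDGET:
    readings.append(noisy_distance_reading())

estimate = statistics.mean(readings)
print(f"acted after {len(readings)} readings, estimate {estimate:.2f}")
```

The extra readings slow the agent down at first, exactly as described above, but once confidence is high it stops looking and acts, rather than re-checking on every frame or never checking at all.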

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing them all the time, that’s just as bad. If instead the AI agent exerts curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
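The trick is easier to see in code. In this toy sketch of our own, the same gradient-based "edge" measure applies unchanged whether the 2D grid came from a camera or a tactile pad:

```python
# Sketch: a tactile sensor's pressure map is just a 2D grid of numbers,
# so the same filters used on photographs (here, a crude edge detector)
# apply to touch data unchanged.

def edge_strength(grid):
    """Sum of absolute horizontal gradients: a crude 'edge' measure
    that works identically on a grayscale image or a pressure map."""
    return sum(
        abs(row[i + 1] - row[i])
        for row in grid
        for i in range(len(row) - 1)
    )

# A finger pressing on the middle of a 4x4 tactile pad:
pressure_map = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
flat_map = [[0] * 4 for _ in range(4)]  # nothing touching the sensor

print(edge_strength(pressure_map))  # strong edges where the press begins/ends
print(edge_strength(flat_map))      # no structure, no edges
```

Nothing in `edge_strength` knows, or needs to know, that the grid is pressure rather than brightness, which is the whole point of the research above.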

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

Millions of Instagram influencers had their contact data scraped and exposed

A massive database containing contact information of millions of Instagram influencers, celebrities, and brand accounts has been found online.

The database, hosted by Amazon Web Services, was left exposed and without a password, allowing anyone to look inside. At the time of writing, the database had over 49 million records — but was growing by the hour.

From a brief review of the data, each record contained public data scraped from influencer Instagram accounts, including their bio, profile picture, the number of followers they have, if they’re verified, and their location by city and country, but also contained their private contact information, such as the Instagram account owner’s email address and phone number.

Security researcher Anurag Sen discovered the database and alerted TechCrunch in an effort to find the owner and get the database secured. We traced the database back to Mumbai-based social media marketing firm Chtrbox, which pays influencers to post sponsored content on their accounts. Each record also contained a calculated worth for the account, based on the number of followers, engagement, reach, likes and shares it had. This was used as a metric to determine how much the company could pay an Instagram celebrity or influencer to post an ad.
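Chtrbox has not disclosed its formula, so purely as a hypothetical illustration, a worth metric of this kind might weight an account's engagement rate against its reach, something like:

```python
# Purely illustrative; the article does not disclose Chtrbox's actual
# formula. A hypothetical scoring function of the kind a marketing firm
# might use to price a sponsored post: engagement rate scaled by audience.

def account_worth(followers, likes, shares, reach,
                  engagement_weight=0.7, reach_weight=0.3):
    """Hypothetical sponsored-post value in arbitrary units."""
    engagement_rate = (likes + shares) / max(followers, 1)
    reach_rate = reach / max(followers, 1)
    return followers * (engagement_weight * engagement_rate
                        + reach_weight * reach_rate)

# A mid-size account with healthy engagement:
print(round(account_worth(followers=100_000, likes=4_000,
                          shares=500, reach=60_000), 2))
```

The weights and the formula itself are invented for the example; the only thing grounded in the report is that follower count, engagement, reach, likes and shares fed into the score.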

TechCrunch found several high-profile influencers in the exposed database, including prominent food bloggers, celebrities and other social media influencers.

We contacted several people at random whose information was found in the database and provided them their phone numbers. Two of the people responded and confirmed that the email address and phone number found in the database were the ones used to set up their Instagram accounts. Neither had any involvement with Chtrbox, they said.

Shortly after we reached out, Chtrbox pulled the database offline. Pranay Swarup, the company’s founder and chief executive, did not respond to a request for comment and several questions, including how the company obtained private Instagram account email addresses and phone numbers.

The scraping effort comes two years after Instagram admitted a security bug in its developer API allowed hackers to scrape the email addresses and phone numbers of six million Instagram accounts. The hackers later sold the data for bitcoin.

Months later, Instagram — now with more than a billion users — choked its API to limit the number of requests apps and developers can make on the platform.

A spokesperson for Facebook, which owns Instagram, said it was looking into the matter. “Scraping data of any kind is prohibited on Instagram,” said the spokesperson. “We’re investigating how and what data was obtained and will share an update soon.”

On the Internet of Women with Moira Weigel

“Feminism,” the writer and editor Marie Shear famously said in an often-misattributed quote, “is the radical notion that women are people.” The genius of this line, of course, is that it appears to be entirely non-controversial, which reminds us all the more effectively of the past century of fierce debates surrounding women’s equality.

And what about in tech ethics? It would seem equally non-controversial that ethical tech is supposed to be good for “people,” but is the broader tech world and its culture good for the majority of humans who happen to be women? And to the extent it isn’t, what does that say about any of us, and about all of our technology?

I’ve known, since I began planning this TechCrunch series exploring the ethics of tech, that it would need to thoroughly cover issues of gender. Because as we enter an age of AI, with machines learning to be ever more like us, what could be more critical than addressing the issues of sex and sexism often at the heart of the hardest conflicts in human history thus far?

Meanwhile, several months before I began envisioning this series I stumbled across the fourth issue of a new magazine called Logic, a journal on technology, ethics, and culture. Logic publishes primarily on paper — yes, the actual, physical stuff, and a satisfyingly meaty stock of it, at that.

In it, I found a brief essay, “The Internet of Women,” that is a must-read, an instant classic in tech ethics. The piece is by Moira Weigel, one of Logic’s founders and currently a member of Harvard University’s “Society of Fellows” — one of the world’s most elite societies of young academics.

A fast-talking 30-something Brooklynite with a Ph.D. from Yale, Weigel combines her interest in sex, gender, and feminism with a critical and witty analysis of our technology culture.

In this first of a two-part interview, I speak with Moira in depth about some of the issues she covers in her essay and beyond: #MeToo; the internet as a “feminizing” influence on culture; digital media ethics around sexism; and women in political and tech leadership.

Greg E.: How would you summarize the piece in a sentence or so?

Moira W.: It’s an idiosyncratic piece with a couple of different layers. But if I had to summarize it in just a sentence or two I’d say that it’s taking a closer look at the role that platforms like Facebook and Twitter have played in the so-called “#MeToo moment.”

In late 2017 and early 2018, I became interested in the tensions that the moment was exposing between digital media and so-called “legacy media” — print newspapers and magazines like The New York Times and Harper’s and The Atlantic. Digital media were making it possible to see structural sexism in new ways, and for voices and stories to be heard that would have gotten buried, previously.

A lot of the conversation unfolding in legacy media seemed to concern who was allowed to say what where. For me, this subtext was important: The #MeToo moment was not just about the sexualized abuse of power but also about who had authority to talk about what in public — or the semi-public spaces of the Internet.

At the same time, it seemed to me that the ongoing collapse of print media as an industry, and really what people sometimes call the “feminization” of work in general, was an important part of the context.

When people talk about jobs getting “feminized” they can mean many things — jobs becoming lower paid, lower status, flexible or precarious, demanding more emotional management and the cultivation of an “image,” blurring the boundary between “work” and “life.”

The increasing instability or insecurity of media workplaces only makes women more vulnerable to the kinds of sexualized abuses of power the #MeToo hashtag was being used to talk about.

LG developed its own AI chip to make its smart home products even smarter

As its once-strong mobile division continues to slide, LG is picking up its focus on emerging tech. The company has pushed automotive, and particularly its self-driving capabilities, and today it doubled down on its smart home play with the announcement of its own artificial intelligence (AI) chip.

LG said the new chip includes its own neural engine that will improve the deep-learning algorithms used in its future smart home devices, which will include robot vacuum cleaners, washing machines, refrigerators and air conditioners. The chip can operate without an internet connection thanks to on-device processing, and it uses “a separate hardware-implemented security zone” to store personal data.

“The AI Chip incorporates visual intelligence to better recognize and distinguish space, location, objects and users while voice intelligence accurately recognizes voice and noise characteristics while product intelligence enhances the capabilities of the device by detecting physical and chemical changes in the environment,” the company wrote in an announcement.

To date, companies seeking AI or machine learning (ML) smarts at the chipset level have turned to established names like Intel, ARM and Nvidia, with upstarts including Graphcore, Cerebras and Wave Computing providing VC-fueled alternatives.

There is, indeed, a boom in AI and ML challengers. A New York Times report published last year estimated that “at least 45 startups are working on chips that can power tasks like speech and self-driving cars,” but that doesn’t include many under-the-radar projects financed by the Chinese government.

LG isn’t alone in opting to fly solo in AI. Facebook, Amazon and Apple are all reported to be working on AI and ML chipsets for specific purposes. In LG’s case, its solution is customized for smarter home devices.

“Our AI Chip is designed to provide optimized artificial intelligence solutions for future LG products. This will further enhance the three key pillars of our artificial intelligence strategy – evolve, connect and open – and provide customers with an improved experience for a better life,” IP Park, president and CTO of LG Electronics, said in a statement.

The company’s home appliance unit just recorded its highest quarter of sales and profit to date. Despite a sluggish mobile division, LG posted an annual profit of $2.4 billion last year with standout results for its home appliance and home entertainment units — two core areas of focus for AI.

Chat app Line is adding Snap-style disappearing stories

Facebook cloning Snap to death may be old news, but others are only just following suit. Line, the Japanese messaging app that’s popular in Asia, just became the latest to clone Snap’s ephemeral story concept.

The company announced today that it is adding stories that disappear after 24 hours to its timeline feature, a social network-style feed that sits inside its app, and to user profiles. The update is rolling out to users now, and the concept is essentially identical to those embraced by Snap, Instagram and others with time-limited content.

“As posts vanish after 24 hours, there is no need to worry about overposting or having posts remain in the feed,” Line, which is listed in the U.S. and Japan, wrote in an update. “Stories allows friends to discover real-time information on Timeline that is available only for that moment.”

Snap pioneered self-destructing content in its app, and the concept is now present across most of the world’s most popular internet services.

In particular, Facebook added stories across the board: to its core app, Messenger, Instagram and WhatsApp, the world’s most popular chat app with over 1.5 billion monthly users. Indeed, Facebook claims that WhatsApp stories are used by 500 million people, while the company has built Instagram into a service that has long had more users than Snap — currently over one billion.

The approach doesn’t always work, though — Facebook is shuttering its most brazen Snap copy, a camera app built around Instagram direct messages.

Line doesn’t have anything like the reach of Facebook’s constellation of social apps, but it is Japan’s dominant messaging platform and is popular in Thailand, Taiwan and Indonesia.

The Japanese company doesn’t give out global user numbers, but it reported 164 million monthly users in its four key markets as of Q1 2019, down one million year-on-year. Japan accounts for 80 million of that figure, ahead of Thailand (44 million), Taiwan (21 million) and Indonesia (19 million).

While user growth has stagnated, Line has been able to extract increased revenue. In addition to a foray into services — in Japan its range covers ride-hailing, food delivery, music streaming and payments — it has increased advertising in the app’s timeline tab, and that is likely a big reason for the release of stories. The new feature may help timeline get more eyeballs, while the company could follow the lead of Snap and Instagram to monetize stories by allowing businesses in.

In Line’s case, that could work reasonably well — for advertising — since users can already opt to follow business accounts. It would make sense, then, to let companies push stories to users who have opted in to follow their accounts. But that’s a long way in the future and it will depend on how the new feature is received by users.

Part fund, part accelerator, Contrary Capital invests in student entrepreneurs

First Round Capital has both the Dorm Room Fund and the Graduate Fund. General Catalyst has Rough Draft Ventures. And Prototype Capital and a few other micro-funds focus on investing in student founders, but overall, there’s a shortage of capital set aside for entrepreneurs still making their way through school.

Contrary Capital, a soon-to-be San Francisco-based operation led by Eric Tarczynski, is raising $35 million to invest between $50,000 and $200,000 in students and recent college dropouts. The firm, which operates a summer accelerator program for its portfolio companies, closed on $2.2 million for its debut, proof-of-concept fund in 2018.

“We really care about the founders building a great company who don’t have the proverbial rich uncle,” Tarczynski, a former founder and startup employee, told TechCrunch. “We thought, ‘What if there was a fund that could democratize access to both world-class capital and mentorship, and really increase the probability of success for bright university-based founders wherever they are?’ “

Contrary launched in 2016 with backing from Tesla co-founder Martin Eberhard, Reddit co-founder Steve Huffman, SoFi co-founder Dan Macklin, Twitch co-founder Emmett Shear, founding Facebook engineer Jeff Rothschild and MuleSoft founder Ross Mason. The firm has more than 100 “venture partners,” or entrepreneurial students at dozens of college campuses that help fill Contrary’s pipeline of deals.

Contrary Capital celebrating its Demo Day event last year

Last year, Contrary kicked off its summer accelerator, tapping 10 university-started companies to complete a Y Combinator-style program that culminates with a small, GP-only demo day. Admittedly, the roughly $100,000 investment Contrary deploys to its companies wouldn’t get your average Silicon Valley startup very far, but for students based in college towns across the U.S., it’s a game-changing deal.

“It gives you a tremendous amount of time to figure things out,” Tarczynski said, noting his own experience building a company while still in school. “We are trying to push them. This is the first time in many cases that these people are working on their companies full-time. This is the first time they are going all in.”

Contrary invests a good amount of its capital in Berkeley, Stanford, Harvard and MIT students, but has made a concerted effort to provide capital to students at underrepresented universities, too. To date, the team has completed three investments in teams out of Stanford, two out of MIT, two out of University of California San Diego and one each at Berkeley, BYU, University of Texas-Austin, University of Pennsylvania, Columbia University and University of California Santa Cruz.

“We wanted to have more come from the 40 to 50 schools across the U.S. that have comparable if not better tech curriculums but are underserviced,” Tarczynski explained. “The only difference between Stanford and these other universities is just the volume. The caliber is just as high.”

Contrary’s portfolio includes Memora Health, the provider of productivity software for clinics; Arc, which is building metal 3D-printing technologies to deliver rocket engines; and Deal Engine, a platform for facilitating corporate travel.

“We are one giant talent scout with all these different nodes across the country,” Tarczynski added. “I’ve spent every waking moment of my life the last eight years living and breathing university entrepreneurship … it’s pretty clear to me who is an exceptional university-based founder and who is just caught up in the hype.”

Vertex Ventures hits $230M first close on new fund for Southeast Asia and India

Tis the season to be raising in India and Southeast Asia. Hot on the heels of new funds from Strive and Jungle Ventures, Singapore’s Vertex Ventures, a VC backed by sovereign wealth fund Temasek, today announced a first close of $230 million for its newest fund, the firm’s fourth to date.

Vertex raised $210 million for its previous fund two years ago, and this new vehicle is expected to make a final close over the coming few months with more capital expected to roll in. If you care about numbers, this fund may be the largest dedicated to Southeast Asia although pedants would point out that the Vertex allocation also includes a focus on India, echoing the trend of funds bridging the two regions. There are also Singapore-based global funds that have raised more, for example, B Capital from Facebook co-founder Eduardo Saverin.

Back to Vertex, it’s worth recalling that the firm’s third fund was its first to raise from outside investors — having previously taken capital from parent Temasek. Managing partner Chua Kee Lock told Bloomberg that most of those LPs signed on for fund four including Taiwan-based Cathay Life Insurance. Vertex said in a press release that it welcomed some new backers, but it did not provide names.

The firm has offices in Singapore, Jakarta and Bangalore and its most prominent investments include ride-hailing giant Grab, fintech startup InstaRem, IP platform PatSnap and Vision Fund-backed kids e-commerce firm FirstCry. Some of its more recent portfolio additions are Warung Pintar — which is digitizing Indonesia’s street kiosk vendors — Binance — which Vertex backed for its Singapore entity — and Thailand-based digital insurance play Sunday.

One differentiator that Vertex offers in Southeast Asia and India, beyond its ties to Temasek, is that there are connections with five other Vertex funds worldwide. Those include a new global growth fund, and others dedicated to global healthcare as well as startups in Israel and the U.S.

Other VCs operating in Southeast Asia’s Series A/B+ bracket include Jungle Ventures, which just hit first close on a new fund aimed at $220 million, Openspace Ventures, which closed a $135 million fund earlier this year, Sequoia India and Southeast Asia, which raised $695 million last year, Golden Gate Ventures, which has a third fund of $100 million, and Insignia Ventures, which raised $120 million for its maiden fund.

Growth funds are also increasingly sprouting up. Early stage investor East Ventures teamed up with Yahoo Japan and SMDV to launch a $150 million vehicle, while Golden Gate Ventures partnered with anchor LP Hanwha to raise a $200 million growth fund.

Instagram is killing Direct, its standalone Snapchat clone app, in the next several weeks

As Facebook pushes ahead with its strategy to consolidate more of the backend of its various apps on to a single platform, it’s also doing a little simplifying and housekeeping. In the coming month, it will shut down Direct, the standalone Instagram direct messaging app that it was testing to rival Snapchat, on iOS and Android. Instead, Facebook and its Instagram team will channel all developments and activity into the direct messaging feature of the main Instagram app.

We first saw a message about the app closing down by way of a tweet from Direct user Matt Navarra: “In the coming month, we’ll no longer be supporting the Direct app,” Instagram notes in the app itself. “Your conversations will automatically move over to Instagram, so you don’t need to do anything.”

The details were then confirmed to us by Instagram itself:

“We’re rolling back the test of the standalone Direct app,” a spokesperson said in a statement provided to TechCrunch. “We’re focused on continuing to make Instagram Direct the best place for fun conversations with your friends.”

From what we understand, Instagram will continue developing Direct features — they just won’t live in a standalone app. (Tests and rollouts of new features that we’ve reported before include encryption in direct messaging, the ability to watch videos with other people, and a web version of the direct messaging feature.)

Instagram didn’t give any reason for the decision, but in many ways, the writing was on the wall with this one.

The app first appeared in December 2017, when Instagram confirmed it had rolled it out in a select number of markets — Uruguay, Chile, Turkey, Italy, Portugal and Israel — as a test. (Instagram first launched direct messaging within the main app in 2013.)

“We want Instagram to be a place for all of your moments, and private sharing with close friends is a big part of that,” it said at the time. “To make it easier and more fun for people to connect in this way, we are beginning to test Direct – a camera-first app that connects seamlessly back to Instagram.”

But it’s not clear how many markets beyond those ultimately had access to the app, although Instagram did expand it to more. The iOS version currently notes that it is available in a much wider range of languages beyond Spanish, Turkish, Italian and Portuguese. It also includes English, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Indonesian, Japanese, Korean, Malay, Norwegian Bokmål, Polish, Romanian, Russian, Simplified Chinese, Slovak, Swedish, Tagalog, Thai, Traditional Chinese, Ukrainian and Vietnamese.

But with Instagram doing little to actively promote the app or its expansion to more markets, Direct never really found a lot of traction in the markets where it was active.

The only countries that make it on to App Annie’s app rankings for Direct are Uruguay for Android, where it was most recently at number 55 among social networking apps (with no figures for overall rankings, meaning it was too low down to be counted); and Portugal on iOS, where it was number 24 among social apps and a paltry 448 overall.

The Direct app hadn’t been updated on iOS since the end of December, although the Android version was updated as recently as the end of April.

At the time of its original launch as a test, however, Direct looked like an interesting move from Instagram.

The company had already been releasing various other features that cloned popular ones in Snapchat. The explosive growth and traction of one of them, Stories, could have felt like a sign to Facebook that there was more ground to be gained by creating more Snapchat-like experiences for its audience. More generally, the rise of Snapchat and direct messaging apps like WhatsApp has shown that there is market demand for apps built around private conversations among smaller groups, if not one-to-one.

On top of that, building a standalone messaging app takes a page out of Facebook’s own app development book: the company launched, and then aggressively accelerated development of, a standalone Messenger app separate from the Facebook experience on mobile.

The company has not revealed any recent numbers for usage of Direct since 2017, when it said there were 375 million users of the service as it brought together permanent and ephemeral (disappearing) messages within the service.

More recently, Instagram and Facebook itself have been part of the wider scrutiny we have seen over how social platforms police and moderate harmful or offensive content. Facebook itself has faced an additional wave of criticism from some over its plans to bring together its disparate app ecosystem in terms of how they function together, with the issue being that Facebook is not giving apps like WhatsApp and Instagram enough autonomy and becoming an even bigger data monster in the process.

It may have been the depressingly low usage that ultimately killed off Direct, but I’d argue that the optics for promoting an expansion of its app real estate on to another platform weren’t particularly strong, either.