7 investors discuss augmented reality and VR startup opportunities in 2020

For all of the investors preaching that augmented reality technology will likely be the successor to the modern smartphone, most venture capitalists today are still quite wary of backing AR plays.

The reasons are plentiful, but all tend to circle around the idea that it’s too early for software and too expensive to try to take on Apple or Facebook on the hardware front.

Meanwhile, few spaces were frothier in 2016 than virtual reality, but most VCs who gambled on VR following Facebook’s Oculus acquisition failed to strike it rich. In 2020, VR did not get the shelter-in-place usage bump many had hoped for, largely due to supply chain issues at Facebook, but VCs hope the company’s new, cheaper headset will spell good things for the startup ecosystem.

To get a better sense of how VCs are looking at augmented reality and virtual reality in 2020, I reached out to a handful of investors who are keeping a close watch on the industry.

Some investors who are bullish on AR have opted to focus on virtual reality for now, believing that there’s a good amount of crossover between AR and VR software, and that they can make safer bets on VR startups today that will be able to take advantage of AR hardware when it’s introduced.

“Besides Pokémon Go I don’t think we have seen the engagement numbers needed for AR,” Boost VC investor Brayton Williams tells TechCrunch. “We believe VR is still the largest long-term opportunity of the two. AR complements the real world, VR creates endless new worlds.”

Most of the investors I contacted were still fairly active in the AR/VR world, but many disagreed about whether the time was right for VR startups. For Jacob Mullins of Shasta Ventures, “It’s still early, but it’s no longer too early.” And while Gigi Levy-Weiss of NFX says the market is “sadly not happening yet,” Facebook’s Quest headsets have shown promise.

On the hardware side, the ghost of Magic Leap’s formerly hyped glory still looms large. Few investors are interested in making a hardware play in the AR/VR world, noting that startups don’t have the resources to compete with Facebook or Microsoft on a large-scale rollout. “Hardware is so capital intensive and this entire industry is dependent on the big players continuing to invest in hardware innovation,” General Catalyst’s Niko Bonatsos tells us.

Even those who are still bullish on startups making hardware plays for more niche audiences acknowledge that life has gotten harder for ambitious founders in these spaces. “The spectacular flare-outs do make it harder for companies to raise large amounts with long product release horizons,” investor Tipatat Chennavasin notes.

Responses have been edited for length and clarity.


Niko Bonatsos, General Catalyst

What are your general impressions on the health of the AR/VR market today?

We’re seeing some progress in VR and some of that is happening because of the Oculus ecosystem. They continue to improve the hardware and have a growing catalog of content. I think their onboarding and consumption experience is very consumer-friendly and that’s going to continue to help with adoption. On the consumer side, we’re seeing some companies across gaming, fitness and productivity that are earning and retaining their audiences at a respectable rate. That wasn’t happening even a year ago, so it may be partially a COVID lift, but habits are forming.

The VR bets of several years ago have largely struggled to pan out. If you were to make a startup investment in this space today, what would you need to see?

Companies to watch are the ones that are creating cool experiences with mobile as the first entry point. Wave VR, Rec Room and VRChat are making it really easy for consumers to get a taste of VR with devices they already own. They’re not treating VR as just another gaming peripheral but as a way to create very cool, often celebrity-driven, content. These are the kinds of innovations that make me optimistic about the VR category in general.

Most investors I chat with seem to be long-term bullish on AR, but are reticent to invest in an explicitly AR-focused startup today. What do you want to see before you make a play here?

In both AR/VR, a founder needs to be both super ambitious but patient. They’ll need to be flexible in thinking and open to pivoting a few times along the way. Product-market fit is always important but I want to see that they have a plan for customer retention. Fun to try is great, habit-forming is much better. Gaming continues to do pretty well as a category for VC dollars but it’d be interesting to see more founders look at making IRL sports experiences more immersive or figuring out how to enhance remote meeting experiences with VR to fix Zoom fatigue.

There have been a few spectacular flare-outs when it comes to AR/VR hardware investments, is there still a startup opportunity in AR/VR hardware?

Hardware is so capital intensive and this entire industry is dependent on the big players continuing to invest in hardware innovation. Facebook and Microsoft seem to be the main companies willing to spend here while others have backed away. If we expand our thinking for a minute, maybe the first real mainstream breakthrough AR/VR consumer experience isn’t visual. For VR, it might be the mobile experiences. For AR maybe AirPods or AirPod-like devices are the right entry point for consumers. They’re in millions of people’s ears already and who doesn’t want their own special-agent-like earpiece? That’s where founders might find some opportunity.

Tipatat Chennavasin, The Venture Reality Fund

Instagram rolls out fan badges for live videos, expands IGTV ads test

Instagram is today introducing a new way for creators to make money. The company is now rolling out badges in Instagram Live to an initial group of over 50,000 creators, who will be able to offer their fans the ability to purchase badges during their live videos to stand out in the comments and show their support.

The idea to monetize using fan badges is not unique to Instagram. Other live streaming platforms, including Twitch and YouTube, have similar systems. Facebook Live also allows fans to purchase stars on live videos, as a virtual tipping mechanism.

During live videos, Instagram users will see three options to purchase a badge, priced at $0.99, $1.99 and $4.99.

On Instagram Live, badges will not only call attention to the fans’ comments, they also unlock special features, Instagram says. This includes a placement on a creator’s list of badge holders and access to a special heart badge.

The badges and list make it easier for creators to quickly see which fans are supporting their efforts, and give them a shout-out, if desired.

Image Credits: Instagram

To kick off the rollout of badges, Instagram says it will also temporarily match creator earnings from badge purchases during live videos, starting in November. Creators @ronnebrown and @youngezee are among those testing badges.

The company says it’s not taking a revenue share at launch, but as it expands its test of badges it will explore revenue share in the future.

“Creators push culture forward. Many of them dedicate their life to this, and it’s so important to us that they have easy ways to make money from their content,” said Instagram COO Justin Osofsky, in a statement. “These are additional steps in our work to make Instagram the single best place for creators to tell their story, grow their audience, and make a living,” he added.

Additionally, Instagram today is expanding access to its IGTV ads test to more creators. This program, introduced this spring, allows creators to earn money by including ads alongside their videos. Today, creators keep at least 55% of that revenue, Instagram says.

The introduction of badges and IGTV ads were previously announced, with Instagram saying it would test the former with a small group of creators earlier this year.

The changes follow what’s been a period of rapid growth on Instagram’s live video platform, as creators and fans sheltered at home during the coronavirus pandemic, which canceled live events, large meetups, concerts and more.

During the pandemic’s start, for example, Instagram said Live creators saw a 70% increase in video views from February to March 2020. In Q2, Facebook also reported monthly active user growth (from 2.99B in Q1 to 3.14B), which it said reflected increased engagement from consumers who were spending more time at home.

Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance, Facebook, certainly a social media company, received a record $5 billion fine last year for failure to comply with rules set by the FTC. But that wasn’t because the company violated social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

Image Credits: Andrew Harrer/Bloomberg

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that when violated result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

Image: Bryce Durbin/TechCrunch

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are also much more likely to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules, or rather augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex a topic to address here in the detail it deserves, and it also reaches beyond the question of who exactly regulates social media. Europe’s approach to that question (speaking slowly and carrying a big stick, if you will) promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.


Image Credit: Bryce Durbin/TechCrunch

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped is well beyond the scope of this article.)

Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies involving, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we have only the faintest clue of — will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

Instagram’s handling of kids’ data is now being probed in the EU

Facebook’s lead data regulator in Europe has opened another two probes into its business empire — both focused on how the Instagram platform processes children’s information.

The action by Ireland’s Data Protection Commission (DPC), reported earlier by the Telegraph, comes more than a year after a US data scientist reported concerns to Instagram that its platform was leaking the contact information of minors. David Stier went on to publish details of his investigation last year — saying Instagram had failed to make changes to prevent minors’ data being accessible.

He found that children who changed their Instagram account settings to a business account had their contact info (such as an email address and phone number) displayed unmasked via the platform — arguing that “millions” of children had had their contact information exposed as a result of how Instagram functions.

Facebook disputes Stier’s characterization of the issue — saying it’s always made it clear that contact info is displayed if people choose to switch to a business account on Instagram.

It also does now let people opt out of having their contact info displayed if they switch to a business account.

Nonetheless, its lead EU regulator has now said it’s identified “potential concerns” relating to how Instagram processes children’s data.

Per the Telegraph’s report the regulator opened the dual inquiries late last month in response to claims the platform had put children at risk of grooming or hacking by revealing their contact details. 

The Irish DPC did not say that, but it did confirm two new statutory inquiries into Facebook’s processing of children’s data on the wholly owned Instagram platform, in a statement emailed to TechCrunch in which it notes the photo-sharing platform “is used widely by children in Ireland and across Europe”.

“The DPC has been actively monitoring complaints received from individuals in this area and has identified potential concerns in relation to the processing of children’s personal data on Instagram which require further examination,” it writes.

The regulator’s statement specifies that the first inquiry will examine the legal basis Facebook claims for processing children’s data on the Instagram platform, and also whether or not there are adequate safeguards in place.

Europe’s General Data Protection Regulation (GDPR) includes specific provisions related to the processing of children’s information — consent cannot serve as a legal basis for processing a child’s data below a set age (16 by default, which member states may lower to no less than 13). The regulation also creates an expectation of baked-in safeguards for kids’ data.

“The DPC will set out to establish whether Facebook has a legal basis for the ongoing processing of children’s personal data and if it employs adequate protections and or restrictions on the Instagram platform for such children,” it says of the first inquiry, adding: “This Inquiry will also consider whether Facebook meets its obligations as a data controller with regard to transparency requirements in its provision of Instagram to children.”

The DPC says the second inquiry will focus on the Instagram profile and account settings — looking at “the appropriateness of these settings for children”.

“Amongst other matters, this Inquiry will explore Facebook’s adherence with the requirements in the GDPR in respect to Data Protection by Design and Default and specifically in relation to Facebook’s responsibility to protect the data protection rights of children as vulnerable persons,” it adds.

In a statement responding to the regulator’s action, a Facebook company spokesperson told us:

We’ve always been clear that when people choose to set up a business account on Instagram, the contact information they shared would be publicly displayed. That’s very different to exposing people’s information. We’ve also made several updates to business accounts since the time of Mr. Stier’s mischaracterisation in 2019, and people can now opt out of including their contact information entirely. We’re in close contact with the IDPC and we’re cooperating with their inquiries.

Breaches of the GDPR can attract sanctions of as much as 4% of the global annual turnover of a data controller — which, in the case of Facebook, means any future fine for violating the regulation could run to multi-billions of euros.

That said, Ireland’s regulator now has around 25 open investigations related to multinational tech companies (aka cross-border GDPR cases) — a backlog that continues to attract criticism over the plodding progress of decisions. Which means the Instagram inquiries are joining the back of a very long queue.

Earlier this summer the DPC submitted its first draft decision on a cross-border GDPR case — related to a 2018 Twitter breach — sending it on to the other EU DPAs for review.

That step has led to a further delay, as the other EU regulators did not unanimously back the DPC’s decision — triggering a dispute mechanism set out in the GDPR.

In separate news, an investigation of Instagram influencers by the UK’s Competition and Markets Authority found the platform is failing to protect consumers from being misled. The BBC reports that the platform will roll out new tools over the next year including a prompt for influencers to confirm whether they have received incentives to promote a product or service before they are able to publish a post, and new algorithms built to spot potential advertising content.

Facebook and Instagram will pin vote-by-mail explainers to top of feeds

Starting this weekend, everyone of voting age in the U.S. will begin seeing informational videos at the top of their Instagram and Facebook feeds, offering tips and state-specific guidance on how to vote through the mail. The videos will be offered in both English and Spanish.

The vote-by-mail videos will run on Facebook for four straight days in each state, starting between October 10 and October 18 depending on local registration deadlines. On Instagram, the videos will run in all 50 states on October 15 and October 16, followed by other notifications with vote-by-mail information over the next two days.

Facebook vote-by-mail video

Image via Facebook

The videos let voters know when they can return a ballot in person, instruct them to sign carefully on additional envelopes that might be required and encourage returning ballots as soon as possible while being mindful of postmarking deadlines. Facebook will continue providing additional state-specific voting information in a voting information center dedicated to the 2020 election.

Even more than in past years, app makers have taken up the mantle of nudging their users to vote in the U.S. general election. From Snapchat to Credit Karma, it’s hard to open an app without being reminded to register — and that’s a good thing. Snapchat says it registered around 400,000 new voters through its own reminders and Facebook estimates that it helped 2.5 million people register to vote this year.

Voting rights advocates are concerned that 2020’s rapid scale-up of vote-by-mail might lead to many ballots being thrown out — a worry foreshadowed by the half a million ballots that were tossed out in state primaries. Some of those ballots failed to meet deadlines or were deemed invalid due to other mistakes voters made when filling them out.

In Florida, voters who were young, non-white or voting for the first time were twice as likely to have their ballots thrown out compared to white voters in the 2018 election, according to research by the ACLU.

Adding to concerns, state rules vary and they can be specific and confusing for voters new to voting through the mail. In Pennsylvania, the most likely state to decide the results of the 2020 election, new rules against “naked ballots” mean that any ballot not cast in an additional secrecy sleeve will be tossed out. In other states, secrecy sleeves have long been optional.

Facebook gets ready for November

Since 2016, Facebook has faced widespread criticism for rewarding hyper-partisan content, amplifying misinformation and incubating violent extremism. This week, the FBI revealed a plot to kidnap Michigan Governor Gretchen Whitmer that was hatched by militia groups who used the platform to organize.

Whether the public reveal of that months-long domestic terrorism investigation factored into its decisions or not, Facebook has taken a notably more aggressive posture across a handful of recent policy decisions. This week, the company expanded its ban on QAnon, the elaborate web of outlandish pro-Trump conspiracies that have increasingly spilled over into real-world violence, after that content had been allowed to thrive on the platform for years.

Facebook also just broadened its rules prohibiting voter intimidation to ban calls for poll watching that use militaristic language, like the Trump campaign’s own effort to recruit an “Army for Trump” to hold its political enemies to account on election day. The company also announced that it would suspend political advertising after election night, a policy that will likely remain in place until the results of the election are clear.

While President Trump has gone to great lengths to cast doubt on the integrity of vote-by-mail, voting by mail has historically been a very safe practice. States like Oregon and Colorado already conduct their voting through the mail in normal years, and all 50 states have absentee voting in place for people who can’t cast a ballot in person, whether they’re out of town or overseas serving in the military.

Instagram’s Threads app now lets you message everyone, like its Direct app once did

Last year, Instagram announced it was ending support for its standalone mobile messaging app known as Direct, which had allowed users to quickly create and share messages with friends. Shortly thereafter, the company launched Threads, a new messaging app focused on status updates and communication with only those you identified in Instagram as your “Close Friends.” Now, these two messaging concepts are merging. With the latest update to Threads, Instagram is again offering the full inbox experience, it says.

The changes were noted in the latest app update and were soon spotted by social media consultant Matt Navarra and noted reverse engineer Jane Manchun Wong — who both keep a close eye on changes to popular social apps.

In the latest update, Threads will now present a two-tabbed inbox.

In the “Close Friends” section, you can continue to message with your most frequent contacts, as before. The new second tab, “Everyone Else,” allows access to your larger Instagram inbox. The app will continue to prioritize the “Close Friends” tab, and your status will continue to only be visible to Close Friends as well.

Instagram also tells us that, by default, Threads users will continue to only receive notifications for their Close Friends. But this can now be adjusted in the app’s Settings if you want to receive notifications for all messages instead.

What’s interesting is that these changes are rolling out so closely following a major update to Instagram’s messaging platform.

Only last week, Facebook introduced cross-app communication between Messenger and Instagram, alongside other features.

That update allows Instagram users to opt to upgrade to a new messaging experience that includes the ability to change chat colors, react with any emoji, watch videos together, set messages to disappear and more. These “fun” features serve as a way to entice users to agree to the update, which then locks users further inside the Facebook universe as it opens up cross-platform messaging. That means upgraded users can use Instagram to message their Facebook friends.

With the changes to Threads, one has to wonder if Facebook is now envisioning the standalone chat app as another potential entry point into its larger messaging platform.

Instagram says that’s not the case today.

“Cross-app communication is an opt-in update for people using Instagram, and will not be enabled for Threads,” a spokesperson told TechCrunch.

That doesn’t mean Threads won’t be updated to later offer some of the other changes that Instagram users can now take advantage of, if they choose to upgrade their messaging experience.

In fact, we understand that Instagram is considering bringing some of those new features over to Threads in the future. There’s no exact timeframe for this project at this point, though.

Presumably, this would mean connecting the Threads app on the backend to the newly built messaging infrastructure. If that’s true, even if Facebook chose to keep cross-app communication an Instagram-only (and Messenger-only) experience, it would still be tying in another core app, Threads, to the new messaging platform. And this, in turn, could make it harder to unspool the apps in the case that Facebook is forced to break up its business, if regulators were to declare it a monopoly.

It’s not clear, however, if Threads has yet been connected to that infrastructure or if it will further down the road. But it’s worth keeping an eye on.

The Threads update is live now.


Instagram’s 10th birthday release introduces a Stories Map, custom icons and more

Instagram today is celebrating its 10th birthday with the launch of several new features, including a private “Stories Map,” offering a retrospective of the Stories you’ve shared over the last three years, a pair of well-being updates, and the previously announced IGTV Shopping update. There’s even a selection of custom app icons for those who have recently been inspired to redesign their home screen, as is the new trend.

The icons had been spotted earlier in development within Instagram’s code, and it was expected they would be a part of a larger “birthday release.” That turned out to be true.

With the update, Instagram users across both iOS and Android can choose from a range of icons in shades of orange, yellow, green, purple, black, white and more. There’s also a rainbow-colored Pride icon and several versions of classic icons, if you want a more nostalgic feel.

The new Stories Map feature, meanwhile, introduces a private map and calendar of the Instagram Stories you’ve shared over the past three years, so you can look back at favorite moments. Though this may surprise some users who thought Instagram Stories’ ephemeral nature meant they were deleted from Facebook servers over time, it’s not the first time Instagram has pulled up your old Stories to build out a new feature.

Instagram’s “Story Highlights,” for example, first introduced in 2017, allowed users to create a permanent home for some of their formerly ephemeral content.

Image Credits: Instagram

Two other new features also rolling out with the latest release are timed alongside the kickoff of National Bullying Prevention Month. The first, which will begin as a test, will automatically hide comments similar to others that have already been reported. These will still be visible under the label “View Hidden Comments” if you want to see what’s been removed from the main comment feed.

Image Credits: Instagram

This feature is somewhat similar to Twitter’s “Hide Replies” feature that launched globally last year. Like Twitter, the feature will place the inappropriate or abusive remarks behind an extra click, which supposedly helps to disincentivize this sort of content, as it could be hidden from view. Except in Twitter’s case, the original poster had to manually hide the replies. The Instagram feature, however, is attempting to automate this functionality.

Instagram says it’s also expanding its nudge warnings feature to include an additional warning when people repeatedly try to post offensive remarks. Already, Instagram provides an AI-powered feature that notifies people when their comment may be considered offensive by giving them a chance to reflect and make changes before posting. Now this feature will target repeat offenders, suggesting that they take a moment to step back and reflect on their words and the potential consequences.

Image Credits: Instagram

The company also released new data about trends across its platform as well as an editorial look back at Instagram’s major milestones.

Here, it revealed trends across music — like how K-pop is the No. 1 most-discussed genre — along with other trends, like top songs, AR effects, top Story Fonts and more. Instagram said more than a million posts mentioning “meme” are shared on its platform daily, 50% of users see a video on Instagram daily, there are over 900 million emoji reactions sent daily and the average person sends 3x more DMs than comments.

The updated app is available across iOS and Android.

Instagram expands shopping on IGTV, plans test of shopping on Reels

Instagram this morning announced the global expansion of its Instagram Shopping service across IGTV. The product, which lets you watch a video then check out with a few taps, offers creators and influencers a way to more directly monetize their user base on Instagram, while also giving brands a way to sell merchandise to their followers. Instagram said it would also soon begin testing shopping within its newer feature and TikTok rival, Reels.

Image Credits: Instagram

Shopping has become a larger part of the Instagram experience over the past few years.

Instagram’s Explore section in 2018 gained a personalized Shopping channel filled with the things Instagram believed you’d want the most. It also expanded Shopping tags to Stories. Last year, it launched Checkout, a way to transact within the app when you saw something you wanted to buy. And just this summer, Instagram redesigned its dedicated Shop section, now powered by Facebook Pay.

Today, Instagram users can view products and make purchases across IGTV, Instagram Live and Stories.

On IGTV, users can either complete the purchase via the in-app checkout or they can visit the seller’s website to buy. However, the expectation is that many shoppers will choose to pay for their items without leaving the app, for convenience’s sake. This allows Instagram to collect selling fees on those purchases. At scale, this can produce a new revenue stream for the company — particularly now as consumers shop online more than ever, due to the coronavirus pandemic’s acceleration of e-commerce.

In the future, Instagram says its shoppable IGTV videos will be made discoverable on Instagram Shop, as well.

Given its intention to make shopping a core part of the Instagram platform, it’s not surprising that the company intends to make Reels shoppable, too.

“Digital creators and brands help bring emerging culture to Instagram, and people come to Instagram to get inspired by them. By bringing shopping to IGTV and Reels, we’re making it easy to shop directly from videos. And in turn, helping sellers share their story, reach customers, and make a living,” said Instagram COO Justin Osofsky, in a statement.

Instagram isn’t alone in seeing the potential for shopping inspired by short-form video content. Walmart’s decision to try to acquire a stake in TikTok is tied to the growing “social commerce” trend, which mixes together social media and online shopping to create a flurry of demand for new products — like a modern-day QVC aimed at Gen Z and broadcast across smartphones’ small screens.

By comparison, TikTok so far has only dabbled with social commerce. It has run select ad tests, like a partnership with Levi’s during the early days of the pandemic to create influencer-created ads that appeared in users’ feeds and directed users to Levi’s website. It has also experimented with allowing users to add links to e-commerce sites to TikTok profiles and other features.

Instagram didn’t say when Reels would gain shopping features, beyond “later this year.”


The Little Black Door app makes luxury wardrobes shareable, resalable, and sustainable

When Lexi Willetts and Marina Pengilly realized they could make as much as £30,000 a year reselling their luxury clothes and accessories online, they resolved to create a solution for modern women who are already well-versed in the behaviors of Instagram and the sharing economy. Their solution, Little Black Door, has just gone live on the iOS store, and allows women to see, style and share their wardrobes with friends and followers. It also connects them to resale platforms, unlocking a vastly more environmentally friendly and sustainable way to shop for high-quality fashion. And with the COVID-19 pandemic hitting the fashion world, the app is set to benefit as consumers turn to reselling luxury items rather than buying new ones.

As Willetts puts it: “This started as a response to our own bad wardrobe behaviors. Our overbuying often because we forgot what we had, often thinking to buy rather than borrow from friends. Plus, we saw the headache of creating resale listings. Realizing that so much of our interactions were online, thus producing very rich e-receipt data, we set about thinking of how we could make use of that to create better wardrobe engagement and reduce our overbuying of irrelevant, cheap fashion.”

The problem with platforms like this has always been: how to digitize the wardrobe in the first place. Most people can’t be bothered to go to the trouble. But this app takes a fresh approach. It concentrates on using wardrobe purchase data and leveraging social sharing behavior to more easily create a digital wardrobe. It also allows the wardrobe to be connected with retail, making it far easier to start the resale journey of selling unwanted items.

The resulting LBD app appears at first to be a sort of ‘Instagram and Depop’ mashup. Users add items to their virtual wardrobe which then employs image recognition AI and natural language processing to figure out what the item is, and tries to categorize it as well. It checks with the user if something is a t-shirt, black, short sleeve, minimalism, urban casual, etc. before it’s confirmed into the wardrobe.

But perhaps more interestingly, the LBD app will ingest receipts of items purchased via email. This means the wardrobe can be built up from new or existing data the user already has. Once the wardrobe is built inside the app, the user can see the clothes and categories, their total wardrobe spend, and create “lookbooks” which they can share with friends and followers to comment on. Friends can then borrow items or users can send items to resale via the ‘swipe to sell’ feature.

Most other wardrobe apps haven’t created a ‘viral loop’ whereby the user is incentivized to use the app daily. LBD has added social features to create a community-driven platform that is almost like an ‘Instagram for fashion’.

Previous ‘wardrobe apps’ like this have obsessed over whether the app can recognize clothes or not, but most don’t work well. The better use of AI, as LBD has realized, is to use receipts data and purchase histories, plus retail partner links, to add to wardrobes. This means the wardrobe upload feature isn’t the primary focus, as it is trumped by wardrobe item data. It’s on this basis that they can create more useful and – crucially – playful features.

“We’ve designed features to entertain and engage the user relating to their wardrobe. We create ease of sharing with friends, tapping into the sharing economy mindset… Moreover, the app is designed to build a culture of conscious consumption, encouraging users to buy less ‘fast fashion’, invest in quality pieces, and wear and share the contents of their closets,” says Willetts.

So the app is interesting, but what about the business model? Effectively, LBD is creating a data play around women’s wardrobes. They could use the data to create advertising for relevant and sustainable brands; partnerships with retailers; value-added services; a resale platform with commissions; verified sellers; and a premium version for high-end users with high-end wardrobes.

LBD is hitting four key trends: the rise of resale (see The RealReal, Depop); the rise in wardrobe-sharing behaviors (rentals like Rent the Runway, Hurr); the rise in the use of AI in e-commerce; and the rise of e-receipts and online sales.

The fashion market is big. The global clothing and apparel market is worth $758.4bn and is over 50% female. But although that market has been hit by the COVID-19 pandemic — as people needed to dress up less during lockdown — it is recovering, and now with a client base far more aware of the issues of sustainability. So LBD is set to benefit from that general ‘re-set’.

And, in the coming recession, it will be cheaper to shop secondhand from sellers you have insight into (your friends), as well as to send items to resale. Retail partners, for their part, get better data on what consumers really do within the privacy of their wardrobes, allowing them to produce and sell more relevant, more targeted collections, reducing inventory waste and generating a positive environmental impact.

Instagram CEO, ACLU slam TikTok and WeChat app bans for putting US freedoms into the balance

As people begin to process the announcement from the U.S. Department of Commerce detailing how it plans, on grounds of national security, to shut down TikTok and WeChat — starting with app downloads and updates for both, plus all of WeChat’s services, on September 20, with TikTok following with a shut down of servers and services on November 12 — the CEO of Instagram and the ACLU are among those speaking out against the move.

The CEO of Instagram, Adam Mosseri, wasted little time in taking to Twitter to criticize the announcement. His particular beef is the implication the move will have for U.S. companies — like his — that also have built their businesses around operating across national boundaries.

In essence, if the U.S. starts to ban international companies from operating in the U.S., then it opens the door for other countries to take the same approach with U.S. companies.

Meanwhile, the ACLU has been outspoken in criticizing the announcement on the grounds of free speech.

“This order violates the First Amendment rights of people in the United States by restricting their ability to communicate and conduct important transactions on the two social media platforms,” said Hina Shamsi, director of the American Civil Liberties Union’s National Security Project, in a statement today.

Shamsi added that ironically, while the U.S. government might be crying foul over national security, blocking app updates poses a security threat in itself.

“The order also harms the privacy and security of millions of existing TikTok and WeChat users in the United States by blocking software updates, which can fix vulnerabilities and make the apps more secure. In implementing President Trump’s abuse of emergency powers, Secretary Ross is undermining our rights and our security. To truly address privacy concerns raised by social media platforms, Congress should enact comprehensive surveillance reform and strong consumer data privacy legislation.”

Vanessa Pappas, who is the acting CEO of TikTok, also stepped in to endorse Mosseri’s words and publicly asked Facebook to join TikTok’s litigation against the U.S. over its moves.

“We agree that this type of ban would be bad for the industry. We invite Facebook and Instagram to publicly join our challenge and support our litigation,” she said in her own tweet responding to Mosseri, while also retweeting the ACLU. (Interesting how Twitter becomes Switzerland in these stories, huh?) “This is a moment to put aside our competition and focus on core principles like freedom of expression and due process of law.”

The move to shutter these apps has been wrapped in an increasingly complex set of issues, and these two dissenting voices highlight not just some of the conflict between those issues, but the potential consequences and detriment of acting based on one issue over another.

The Trump administration has stated that the main reason it has pinpointed the apps is to “safeguard the national security of the United States” in the face of nefarious activity out of China, where the owners of WeChat and TikTok, respectively Tencent and ByteDance, are based:

“The Chinese Communist Party (CCP) has demonstrated the means and motives to use these apps to threaten the national security, foreign policy, and the economy of the U.S.,” today’s statement from the U.S. Department of Commerce noted. “Today’s announced prohibitions, when combined, protect users in the U.S. by eliminating access to these applications and significantly reducing their functionality.”

In reality, it’s hard to know where the truth actually lies.

In the case of the ACLU and Mosseri’s comments, they are highlighting issues of principles but not necessarily precedent.

It’s not as if the U.S. would be the first country to take a nationalist approach to how it permits the operation of apps. Facebook and its stable of apps, as of right now, are unable to operate in China without a VPN (and even with a VPN, things can get tricky). And free speech is regularly ignored in a range of countries today.

But the U.S. has always positioned itself as a standard-bearer in both of these areas, and so apart from the self-interest that Instagram might have in advocating for more free-market policies, it points to a wider market and business position that’s being eroded.

The issue, of course, is a little like an onion (a stinking onion, I’d say), with well more than just a couple of layers around it, and with the ramifications bigger than TikTok (with 100 million users in the U.S. and huge in pop culture beyond even that) or WeChat (much smaller in the U.S. but huge elsewhere and valued by those who do use it).

The Trump administration has been carefully selecting issues to tackle to give voters reassurance of Trump’s commitment to “Make America Great Again,” building examples of how it’s helping to promote U.S. interests and demote those that stand in its way. China has been a huge part of that image building, positioned as an adversary in industrial, defense and other arenas. Pinpointing specific apps and how they might pose a security threat by sucking up our data fits neatly into that strategy.

But are they really security threats, or are they just doing the same kind of nefarious data ingesting that every social app does in order to work? Will the U.S. banning them really mean that other countries, up to now more in favor of a free market, will fall in line and take a similar approach? Will people really stop being able to express themselves?

Those are the questions that Trump has forced into the balance with his actions, and even if they were not issues before, they have very much become so now.