Snap appoints new execs as it aims to keep 2019 momentum

Snap has another appointment in the apt saga of its ephemeral CFOs.

Four months after losing its CFO Tim Stone following a reported “personality clash” between Stone and CEO Evan Spiegel, Snap has promoted its VP of Finance Derek Andersen to the role, the company said Monday. Andersen is the company’s third CFO since March of 2017 when it went public.

Lara Sweet, who was serving as the company’s interim CFO as well as the Chief Accounting Officer, will be stepping into a new role as Chief People Officer.

Snap has had a less cataclysmic 2019 in the public markets compared with its two previous calendar years. The company has nearly doubled its share price since the year’s start, though the stock still sits just above where it was one year ago.

Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself

Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, while keeping costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding as quickly, the motors can act like virtual springs.
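To make the “virtual springs” idea concrete, here’s a minimal sketch in Python: a fast control loop reads the joint state and commands a spring-damper torque, so a rigid leg behaves compliantly. The single-joint model and every constant are illustrative assumptions, not Doggo’s actual firmware.

```python
# A toy simulation of a "virtual spring": no physical spring exists, but a
# torque command computed 8,000 times per second makes the joint behave
# like a spring-damper. All constants are illustrative assumptions.
SAMPLE_HZ = 8000               # sampling rate cited in the article
DT = 1.0 / SAMPLE_HZ
K_SPRING = 40.0                # virtual stiffness, N*m/rad (assumed)
B_DAMPER = 0.5                 # virtual damping, N*m*s/rad (assumed)
INERTIA = 0.01                 # leg inertia about the joint, kg*m^2 (assumed)

def virtual_spring_torque(theta, theta_dot, theta_rest=0.0):
    """Restoring torque proportional to deflection and velocity."""
    return -K_SPRING * (theta - theta_rest) - B_DAMPER * theta_dot

# Release the leg from a 0.3 rad deflection and watch it settle.
theta, theta_dot = 0.3, 0.0
for _ in range(int(0.5 * SAMPLE_HZ)):          # simulate half a second
    tau = virtual_spring_torque(theta, theta_dot)
    theta_dot += (tau / INERTIA) * DT          # integrate acceleration
    theta += theta_dot * DT                    # integrate velocity
print(f"deflection after 0.5 s: {theta:+.4f} rad")   # decays toward zero
```

Raising the sampling rate is what makes the illusion hold: the faster the loop reacts to measured forces, the more the motor’s behavior converges on that of a real spring.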

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving Doggo’s capabilities in collaboration with the university’s Robotic Exploration Lab, and working on a similar robot at twice the size: Woofer.

U.S. mitigates Huawei ban by offering temporary reprieve

Two steps forward, one step back.

The Trump administration has seemingly been trying to calibrate its strategy around its intensifying trade dispute with China. Last week, it effectively banned Huawei from importing U.S. technology, a decision that forced several American companies including Google to partly sever their relationships with the Chinese handset and telecom provider.

Now, in an unpublished draft of a note in the Federal Register, the Department of Commerce and its Bureau of Industry and Security announced that Huawei would receive a “90-day temporary general license” to continue to use U.S. technology it is already licensed to use. Huawei would still need to apply for licenses covering new technology and new mobile phone models, and those applications are unlikely to be approved, according to Reuters.

Reasons for the pullback are unclear. One answer might be the impact on American jobs. The Information Technology and Innovation Foundation, an industry trade and research group, argued in a new report today that export controls could cost the U.S. economy up to $56.3 billion and up to 74,000 jobs, depending on their scale. Obviously, the tech industry is mostly opposed to new tariffs or export controls, and the Trump administration has made American jobs a centerpiece of its domestic policy agenda.

The other answer might be that China is now fulminating against the actions and threatening access to rare earth materials. President Xi Jinping toured a rare earths facility this weekend, in what political analysts perceived as a subtle reminder of China’s outsized role in rare earths, of which it is the world’s largest exporter.

Regardless, the new temporary reprieve won’t do much to change the underlying trade calculus, but it may afford Huawei a little breathing space to figure out what it should do next without U.S. technology.

How I Podcast: Criminal/This Is Love’s Lauren Spohrer

The beauty of podcasting is that anyone can do it. It’s a rare medium that’s nearly as easy to make as it is to consume. And as such, no two people do it exactly the same way. There is a wealth of hardware and software solutions open to potential podcasters, so setups run the gamut from NPR studios to USB Skype rigs.

We’ve asked some of our favorite podcast hosts and producers to highlight their workflows — the equipment and software they use to get the job done. The list so far includes:

Jeffrey Cranor of Welcome to Night Vale
Jesse Thorn of Bullseye
Ben Lindbergh of Effectively Wild
My own podcast, RiYL

In the crowded world of true-crime podcasts, Criminal has managed to stand out, garnering critical acclaim from outlets ranging from The Atlantic to Entertainment Weekly. Launched in 2014 by Lauren Spohrer, Eric Mennel and host Phoebe Judge, the Chapel Hill-based series takes a deep and complex dive into its subject matter, painting a rich picture of the case files it studies.

On Valentine’s Day 2018, Spohrer, Judge and fellow Criminal producer Nadia Wilson launched the first season of This Is Love, a series focused on “sacrifice, obsession, and the ways in which we bet everything on each other.” Co-creator and co-producer Spohrer joins us to highlight the rig the team uses to gather recordings on the road. 

We’re just returning home from a whirlwind reporting trip for This Is Love. We were in Italy for 10 days and recorded eight stories, moving quickly by train and car through small towns. It was the most ambitious reporting trip we’ve done, and we were lucky to meet such extraordinarily kind people and discover Pocket Coffee.

We like to prepare for the worst-case scenario, so we brought three recording kits with us and each carried one (me, Phoebe, and Senior Producer Nadia Wilson). We stocked up on AA batteries at Costco before we left.

We recorded the interviews with a Marantz PMD 661 recorder and an Audio-Technica AT8035 microphone with a Rode pistol grip and a Rode WS7 Deluxe Wind Shield. We use Sony MDR-7506 headphones both for recording and for mixing. A lot of the interviews were recorded standing up, one in a fish market before the sun came up. Phoebe always records herself and the guest with the same mic. Italy is loud, and we didn’t fight that. We do plenty of interviews in pristinely silent studios. It was a nice change to let the world creep into the sound.

Back when we started Criminal, we recorded Phoebe’s narration with the Audio-Technica AT8035 in my bedroom closet with a bunch of blankets draped over her head. These days, we have a partnership with our local NPR affiliate, North Carolina Public Radio, and we use their studio to record Phoebe’s narration for both This Is Love and Criminal. They’ve got a Neumann TLM 103 microphone. We added a mic preamp (Great River ME-1NV) and a Stedman Proscreen PS101. The studio is set up to record into Adobe Audition. We upload those files to Dropbox, and then download them at home to edit and mix in Pro Tools.

When we’re touring (we’re doing 16 live shows this fall; come see us!), we produce live shows with a program called Soundboard and a Focusrite Scarlett 2i4 USB Audio Interface.

Other tools: we pitch story ideas via email and Slack, use a shared Google Calendar to stay organized and transcribe our raw tape with Trint. Scripts are written and fact-checked collaboratively using Google Docs. We mix in Pro Tools, and when we’re happy with a mix, we send sessions to Rob Byers, Michael Rafael and/or Johnny Vince Evans to be mastered.

Quadric.io raises $15M to build a plug-and-play supercomputer for autonomous systems

Quadric.io, a startup founded by some of the folks behind the once-secretive bitcoin mining operation “21E6,” has raised $15 million in a Series A round that will fund the development of a supercomputer designed for autonomous systems.  

The round was led by automotive Tier 1 supplier DENSO and its semiconductor products arm NSITEXE, which will also be one of Quadric.io’s customers for future electronic systems in all levels of autonomous driving solutions. Leawood VC also participated in the Series A round.

The company says it will use the injection of capital to build out its product, hire more people and expand business development. Quadric’s supercomputer will be assembled by an outside company.

Pear, Uncork Capital, SV Angel, Cota Capital, and Trucks VC are seed investors in Quadric.io.

The roots of Quadric.io grew from a seemingly disconnected mission: an agricultural robot designed to transform the way vineyards are managed. The company was launched in 2016 by CEO Veerbhan Kheterpal, CTO Nigel Drego and CPO Daniel Siru, all co-founders of 21 Inc. The bitcoin startup, once known as 21E6, would later rebrand as Earn.com before being acquired by Coinbase for $100 million.

Quadric’s original plan was stymied by some real-world fundamentals. The power-hungry ag robot was weighed down by batteries that were too unwieldy to move among vineyard rows, and the processing time needed to turn loads of environmental data into algorithm-driven actions was too slow.

Quadric was looking for a chip designed for processing on the edge that supported decision-making in real time, all while crunching data faster and sipping, not slurping, power. That need grew into Quadric’s core product today: a supercomputer that the company says hits the sweet spot of increased computational speed and reduced power consumption.

Kheterpal noted in a recent post on Medium that Intel’s CPUs work “very well for standard computer processing” and Nvidia’s GPUs have “ushered in astounding new graphics processing for gaming and much more.” But, he argued, Quadric needed something neither of those companies could provide: a chip designed for processing on the edge.

The company created a single unified architecture in the supercomputer that enables high performance computing and artificial intelligence. The supercomputer, which is built around the Quadric Processor, is plug-and-play. This means people can plug in their sensor set and build their entire application to support “near-instantaneous” decision making, Quadric says. The company claims that early testing of Quadric’s system has shown up to 100 times lower latency and a 90 percent reduction in power consumption. 

Quadric argues this underlying technology is a prerequisite for companies developing autonomous systems that will be used in the construction, transportation, agriculture and warehousing industries. The underlying tech that supports autonomous machines used in these industries either lacks the performance or solves only a small part of the full application, according to Quadric.

The startup contends that machines with autonomous functions require processing speed and responsiveness “on the edge” — meaning at the machine level, not in the cloud.

Other companies, most recently Tesla, have opted to build their own chips to meet this specific need. But as Kheterpal notes, not all companies have the resources to build the tech from the ground up. 

“Quadric is a plug and play option that eliminates the need for building heterogeneous systems with significant hardware and software integration costs — thereby taking years off of product development roadmaps,” Kheterpal wrote.

With foldable phones in limbo, foldable display laptops are on the horizon

With the Galaxy Fold and Huawei Mate X currently in limbo for very different reasons, PC makers are apparently jumping at the chance to make their own foldable display ambitions known. It’s been clear, of course, for as long as flexible screens have been a viable technology, that hardware manufacturers would be experimenting with any and all form factors. In just the past week, two key players have talked up their plans for how it might be utilized on the PC front.

Last week, Lenovo showed off a prototype ThinkPad X1. The company’s been no stranger to experimental convertibles, and utilizing a foldable display could further blur the line between tablets and PCs. The technology allows for a large screen in a compact form factor; here, a 13.3-inch display collapses to half that size, making it a lot easier to take with you.

It’s a slick prototype, and obviously folding form factors are already the standard in the laptop world. But like Lenovo’s past attempts at dual-screen devices, the design trades the physical keyboard for an on-screen one, and losing tactile typing is one of the biggest pain points in moving consumers away from more traditional laptops. Perhaps that’s something that could be addressed with the sorts of overlays provided by companies like Sensel.

Dell, too, recently told Gizmodo that it’s experimenting with a similar form factor. No surprise on that front, really. One expects that any PC maker worth its weight in netbooks is, at the very least, playing around with the concept as we speak.

All of this is complicated by the fact that the foldable phone category has been plagued with issues — though not necessarily the ones most people predicted. Samsung indefinitely pushed back the launch date of the Galaxy Fold after several reviewers ran into issues with their units.

We’re still waiting for official news on that front. Huawei, meanwhile, had a wrench thrown into its stratospheric ascendancy when the company was blacklisted by the Trump White House, leaving aspects of its future in jeopardy.

Neither of these is a direct indictment of the concept — though Samsung’s model certainly failed in real-world testing. For that reason, it’s probably safe to say the jury’s still out on consumer demand, though many of the major concerns, including pricing, would likely carry over to the PC category.

Maisie Williams’ talent discovery startup Daisie raises $2.5 million, hits 100K members

Maisie Williams’ time on Game of Thrones may have come to an end, but her talent discovery app Daisie is just getting started. Co-founded by film producer Dom Santry, Daisie aims to make it easier for creators to showcase their work, discover projects and collaborate with one another through a social networking-style platform. Only 11 days after Daisie officially launched to the public, the app hit an early milestone of 100,000 members. It also recently closed on $2.5 million in seed funding, the company tells TechCrunch.

The round was led by Founders Fund, which contributed $1.5 million. Other investors included 8VC, Kleiner Perkins, and newer VC firm Shrug Capital, from AngelList’s former head of marketing Niv Dror, who also separately invested. To date — including friends and family money and the founders’ own investment — Daisie has raised roughly $3 million.

It will later move toward raising a larger Series A, Santry says.

On Daisie, creators establish a profile as you would on a social network, find and follow other users, then seek out projects based on location, activity, or other factors.

“Whether it’s film, music, photography, art — everything is optimized around looking for collaborators,” explains Santry. “So the projects that are actively open and looking for people to get involved, are the ones we’re really pushing for people to discover and hopefully get involved with,” he says.

The company’s goal to offer an alternative path to talent discovery is a timely one. Today, the creative industry is waking up — as are many others — to the ramifications of the #MeToo and #TimesUp movements. As power-hungry abusers lose their jobs, new ways of working, networking and sourcing talent are taking hold.

As Williams said when she first introduced the app last year, Daisie’s focus is on giving the power back to the creator.

“Instead of [creators] having to market themselves to fit someone else’s idea of what their job would be, they can let their art speak for themselves,” she said at the time.

The app was launched into an invite-only beta on iOS last summer, and quickly saw a surge of users. After 37,000 downloads in week one, it crashed.

“We realized that the community was a lot larger than the product we had built, and that scale was something we needed to do properly,” Santry tells TechCrunch.

The team realized there was another problem, too: once collaborators found each other on Daisie, there wasn’t a clear-cut way for them to get in touch with one another, as the app had no built-in communication tools or ways to share files.

“That journey from concept to production was pretty muddy and quite muddled…so we realized, if we were bringing teams together, we actually wanted to give them a place to work — give them this creative hub…and take their project from concept all the way to production on Daisie,” Santry notes.

With this broader concept in mind, Daisie began fundraising in San Francisco shortly after the beta launch. The round initially closed in October 2018, but was more recently reopened to allow Dror’s investment.

With the additional funding in tow, Daisie has been able to grow its team from five to eighteen, including new hires from Monzo, Deliveroo, BBC, Microsoft and others — specifically engineers familiar with designing apps for scale. Tasked with developing better infrastructure and a more expansive feature set, the team set to work on bringing Daisie to the web.

Nine months later, the new version launched to the public and is stable enough to handle the load. Today, it topped 100,000 users, most of whom are in London. Going forward, Daisie plans to take its app to other cities, including Berlin, New York and L.A.

The company has monetization ideas in mind, but the app does not currently generate revenue. However, it’s already fielding inquiries from companies who want Daisie to find them the right talent for their projects.

“We want the best for the creators on the platform, so if that means bringing clients on — and hopefully giving those connectivity opportunities — then we’ll absolutely [go] down those roads,” Santry says.

The app may also serve as a talent pipeline for Maisie Williams’ own Daisy Chain Productions. In fact, Daisie recently ran a campaign called London Creates which connected young, emerging creators with project teams, two of which were headed by Santry’s Daisy Chain Productions co-founders, Williams and Bill Milner.

As a result, Daisy Chain Productions will now produce a film that grew out of that Daisie collaboration.

While celebs sometimes do little more than lend their name to projects, Williams was hands-on in terms of getting Daisie off the ground, Santry says. During the first quarter of 2019, she worked on Daisie 9-to-5, he notes. But she has since started another film project and plans to continue to work as an actress, which will limit her day-to-day involvement. Her role now and in the future may be more high-level.

“I think her role is going to become one of, culturally, like: where does Daisie stand? What do we stand for? Who do we work with? What do we represent?” he says. “How do we help creators everywhere? That’s mainly what Maisie wants to make sure Daisie does.”

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers like this one.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
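As a rough illustration of that reward-driven trial and error, here’s a toy random-search sketch in Python. The one-line “simulator” and the gait parameterization are stand-ins; the post doesn’t detail Facebook’s actual method or reward shaping.

```python
# Toy trial-and-error gait learning: reward forward distance, perturb the
# gait parameters, keep changes that improve the reward. The "simulator"
# is a stand-in with an arbitrary optimum, not a physics engine.
import random

def simulate_gait(params):
    """Pretend rollout: forward distance, maximized when params are 0.5."""
    return -sum((p - 0.5) ** 2 for p in params) + random.gauss(0, 0.01)

params = [random.random() for _ in range(8)]   # e.g. per-leg phases/amplitudes
best = simulate_gait(params)

for episode in range(500):
    candidate = [p + random.gauss(0, 0.05) for p in params]
    reward = simulate_gait(candidate)
    if reward > best:                          # keep only improvements
        params, best = candidate, reward

print(f"best forward reward after 500 episodes: {best:.4f}")
```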

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
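One common way to formalize this (an assumption here; the post doesn’t spell out Facebook’s exact formulation) is to pay the agent an intrinsic bonus whenever an observation reduces its model’s prediction error:

```python
# Sketch of curiosity as an intrinsic reward: on top of whatever the task
# pays, the agent earns a bonus for how much an observation reduces its
# model's uncertainty (prediction error). Purely illustrative.
import random

CURIOSITY_WEIGHT = 0.1

def intrinsic_bonus(estimate, observation, lr=0.2):
    """Bonus = reduction in prediction error after a model update."""
    err_before = abs(observation - estimate)
    new_estimate = estimate + lr * (observation - estimate)
    err_after = abs(observation - new_estimate)
    return CURIOSITY_WEIGHT * (err_before - err_after), new_estimate

estimate = 0.0                                 # the agent's current belief
for step in range(10):
    observation = random.gauss(1.0, 0.1)       # what the arm's camera senses
    bonus, estimate = intrinsic_bonus(estimate, observation)
    total_reward = 0.0 + bonus                 # task payoff omitted here
    print(f"step {step}: estimate={estimate:.3f} bonus={bonus:.4f}")
```

Note how the bonus shrinks as the estimate converges: the “curious” acts pay off early and fade once uncertainty is resolved, which matches the slow-start, faster-later dynamic described above.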

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
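As a concrete sketch of that trick (sensor shape and pressure range are assumed; the researchers’ actual pipeline isn’t specified), a tactile frame is just a 2-D grid of pressures, so normalizing it into an image tensor lets any vision model consume it unchanged:

```python
# Sketch: treat a high-resolution tactile reading as an image. A (H, W)
# grid of pressures becomes a single-channel float "photo" that standard
# vision pipelines can analyze for patterns. Sensor size/range assumed.
import numpy as np

def tactile_to_image(pressure_kpa, p_max=100.0):
    """Normalize a (H, W) pressure grid into a (H, W, 1) image in [0, 1]."""
    return np.clip(pressure_kpa / p_max, 0.0, 1.0)[..., np.newaxis]

# Fake a 64x64 tactile frame with a circular contact patch in the middle.
yy, xx = np.mgrid[0:64, 0:64]
pressure = 80.0 * (((yy - 32) ** 2 + (xx - 32) ** 2) < 100)

frame = tactile_to_image(pressure)
print(frame.shape, frame.min(), frame.max())   # (64, 64, 1) 0.0 0.8
# From here, `frame` can feed any image model -- a CNN sees the contact
# patch the same way it would see a bright blob in a photograph.
```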

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

Talk key takeaways from KubeCon 2019 with TechCrunch writers

The Linux Foundation’s annual KubeCon conference is going down at the Fira Gran Via exhibition center in Barcelona, Spain this week and TechCrunch is on the scene covering all the latest announcements.

The KubeCon/CloudNativeCon conference is the world’s largest gathering on the topics of Kubernetes, DevOps and cloud-native applications. TechCrunch’s Frederic Lardinois and Ron Miller will be on the ground at the event. On Wednesday at 9:00 am PT, Frederic and Ron will share what they saw and what it all means with Extra Crunch members via conference call.

Tune in to dig into what happened onstage and off and ask Frederic and Ron any and all things Kubernetes, open-source development or dev tools.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.

Instagram’s IGTV copies TikTok’s AI, Snapchat’s design

Instagram conquered Stories, but it’s losing the battle for the next video formats. TikTok is blowing up with an algorithmically suggested vertical one-at-a-time feed featuring videos of users remixing each other’s clips. Snapchat Discover’s 2 x infinity grid has grown into a canvas for multi-media magazines, themed video collections, and premium mobile TV shows.

Instagram’s IGTV…feels like a flop in comparison. Launched a year ago, it’s full of crudely cropped & imported viral trash from around the web. The long-form video hub that lives inside both a homescreen button in Instagram and a standalone app has failed to host lengthier must-see original vertical content. Sensor Tower estimates that the IGTV app has just 4.2 million installs worldwide, with only 7,700 new ones per day — implying that less than half a percent of Instagram’s billion-plus users have downloaded it. IGTV doesn’t rank on the overall charts and hangs low at No. 191 on the U.S. Photo & Video app charts, according to App Annie.

Now Instagram has quietly overhauled the design of IGTV’s space inside its main app to crib what’s working from its two top competitors. The new design showed up in last week’s announcements for Instagram Explore’s new Shopping and IGTV discovery experiences. At the time, Instagram’s product lead on Explore, Will Ruben, told us that with the redesign, “the idea is this is more immersive and helps you to see the breadth of videos in IGTV rather than the horizontal scrolling interface that used to exist,” but the company declined to answer follow-up questions about it.

IGTV has ditched its category-based navigation system’s tabs like “For You”, “Following”, “Popular”, and “Continue Watching” for just one central feed of algorithmically suggested videos — much like TikTok. This affords a more lean-back, ‘just show me something fun’ experience that relies on Instagram’s AI to analyze your behavior and recommend content instead of putting the burden of choice on the viewer.

IGTV has also ditched the awkward horizontal scrolling design that always kept a clip playing in the top half of the screen. Now you’ll scroll vertically through a 2 x infinity grid of recommended clips in what looks just like a Snapchat Discover feed. Once you get past a first video that auto-plays up top, you’ll find a full-screen grid of things to watch. You’ll only see the horizontal scroller in the standalone IGTV app, or if you tap into an IGTV video and then tap the Browse button to find the next clip while the last one plays up top.

Instagram seems to be trying to straddle the designs of its two competitors. The problem is that TikTok’s one-at-a-time feed works great for punchy, short videos that get right to the point. If you’re bored after 5 seconds, you swipe to the next. IGTV’s focus on long-form means its videos might start too slowly to grab your attention if they were auto-played full-screen in the feed rather than being chosen by a viewer. But Snapchat makes the most of the two-previews-per-row design IGTV has adopted because professional publishers take the time to make compelling cover thumbnail images promoting their content. IGTV’s focus on independent creators means fewer have labored to make great cover images, so viewers have to rely on a screenshot and caption.

Instagram is prototyping a number of other features to boost engagement across its app, as discovered by reverse engineering specialist and frequent TechCrunch tipster Jane Manchun Wong. Those include options to blast a direct message to all your Close Friends at once but in individual message threads, see a divider between notifications and likes you have or haven’t seen, or post a Chat sticker to Stories that lets friends join a group message thread about that content. And to better compete with TikTok, it may let you add lyrics stickers to Stories that appear word-by-word in sync with Instagram’s licensed music soundtrack feature, and share Music Stories to Facebook. What we haven’t seen is any cropping tool for IGTV that would help users reformat landscape videos. The vertical-only restriction keeps lots of great content stuck outside IGTV, or letterboxed with black, color-matched backgrounds, or meme-style captions with the video as just a tiny slice in the middle.

When I spoke with Instagram co-founder and ex-CEO Kevin Systrom last year, a few months after IGTV’s launch, he told me, “It’s a new format. It’s different. We have to wait for people to adopt it and that takes time… Everything that is great starts small.”

But to grow large, IGTV needs to demonstrate how long-form portrait mode video can give us a deeper look at the nuances of the influencers and topics we care about. The company has rightfully prioritized other drives like safety and well-being with features that hide bullies and deter overuse. But my advice from August still stands despite all the ground Instagram has lost in the meantime. “Concentrate on teaching creators how to find what works on the format and incentivizing them with cash and traffic. Develop some must-see IGTV and stoke a viral blockbuster. Prove the gravity of extended, personality-driven vertical video.” Until the content is right, it won’t matter how IGTV surfaces it.