Apple’s iPadOS 15 breaks the app barrier

The announcement of new iPad software at this year’s WWDC conference had an abnormally large expectation hung on it. The iPad lineup, especially the larger iPad Pro, has kept up an impressively frantic pace of hardware innovation over the past few years. In that same time frame, the software of the iPad, especially its ability to allow users to use multiple apps at once and in its onramps for professional software makers, has come under scrutiny for an apparently slower pace. 

This year’s announcements about iOS 15 and iPadOS 15 seemed designed to counter that narrative with a broad set of quality-of-life improvements to multitasking, as well as a suite of system-wide features that nearly all come complete with their own developer-facing APIs to build on. I had the chance to speak with Bob Borchers, Apple’s VP of Worldwide Product Marketing, and Sebastien (Seb) Marineau-Mes, VP of Intelligent System Experience at Apple, about the release of iPadOS 15 and a variety of these improvements.

Marineau-Mes works on the team of Apple software SVP Craig Federighi and was pivotal in the development of this new version.

The iPad has a bunch of new core features, including SharePlay, Live Text, Focus, Universal Control, on-device Siri processing and a new edition of Swift Playgrounds designed to double as a prototyping tool. Among the most hotly anticipated for iPad Pro users, however, are improvements to Apple’s multitasking system.

If you’ve been following along, you’ll know that the gesture-focused multitasking interface of iPadOS has had its share of critics, including me. Though it can be useful in the right circumstances, the un-discoverable gesture system and the confusing hierarchy of app combinations made it an awkward affair to use correctly for an adept user, much less a beginner.

Since the iPad stands alone as pretty much the only successful tablet on the market, Apple has a unique position in the industry to determine what kinds of paradigms are established as standard. It’s a rare opportunity to say: hey, this is what working on a device like this feels like, looks like, should be.

So I ask Borchers and Marineau-Mes to talk a little bit about multitasking: specifically, Apple’s philosophy in the design of multitasking on iPadOS 15 and the update from the old version, which required a lot of finger acrobatics and a strong sense of spatial awareness of objects hovering off the edges of the screen.

“I think you’ve got it,” Borchers says when I mention the spatial gymnastics, “but the way that we think about this is that the step forward in multitasking makes it easier to discover, easier to use and even more powerful. And, while pros I think were the ones who were using multitasking in the past, we really want to take it more broadly because we think there’s applicability to many, many folks. And that’s why the discovery and the ease of use I think were critical.”

“You had a great point there when you talked about the spatial model, and one of our goals was to actually make the spatial model more explicit in the experience,” says Marineau-Mes, “where, for example, if you’ve got a split view and you’re replacing one of the windows, we kind of open the curtain and tuck the other app to the side. You can see it — it’s not a hidden mental model, it’s one that’s very explicit.

Another great example of it is when you go into the app switcher to reconfigure your windows: you’re actually doing drag and drop as you rearrange your new split views, or you dismiss apps and so on. So it’s not a hidden model, it’s one where we really try to reinforce a spatial model with an explicit one for the user through all of the animations and all of the kinds of affordances.”

Apple’s goal this time around, he says, was to add affordances that help the user understand that multitasking is even an option — like the small series of dots at the top of every app and window that now lets you explicitly choose an available configuration, rather than the app-and-dock-juggling method of the past. He goes on to say that consistency was a key metric for them on this version of the OS: Slide Over apps now appear in the same switcher view as all other apps, for instance, and you can choose configurations of apps via the button or by drag and drop in the switcher and get the same results.

In the dashboard, Marineau-Mes says, “you get an at-a-glance view of all of the apps that you’re running and a full model of how you’re navigating that through the iPad’s interface.”

This ‘at a glance’ map of the system should be very welcome to advanced users. Even as a very aggressive pro user myself, I found that Slide Over apps became more of a nuisance than anything because I couldn’t keep track of how many were open and when to use them. The ability to manage them in the switcher itself is one of those things that Apple has wanted to get into the OS for years but is only now making its way onto iPads. Persistence of organization, really, was the critical problem to tackle.

“I think we believe strongly in building a mental model where people know where things are [on iPad],” says Marineau-Mes. “And I think you’re right. When it comes to persistence, I think it also applies to, for example, the home screen. People have a very strong mental model of where things are on the home screen, as well as all of the apps that they’ve configured. And so we try to maintain that mental model, and also allow people to reorganize again in the switcher.”

He goes on to explain the new ‘shelf’ feature that displays every instance or window that an app has open within itself. They implemented this as a per-app feature rather than a system-wide one, he says, because the association of that shelf with a particular app fit the overall mental model that they’re trying to build. The value of the shelf may come into sharper relief later this year, when professional apps ship that might have a dozen documents or windows open at once and active during a project.

Another nod to advanced users in iPadOS 15 is the rich set of keyboard shortcuts offered across the system. The interface can now be navigated by arrow keys, many advanced commands have shortcuts and you can even move around an iPad using a game controller.

“One of the key goals this year was to make basically everything in the system navigable from the keyboard,” says Marineau-Mes, “so that if you don’t want to, you don’t have to take your hands off the keyboard. All of the new multitasking affordances and features you can do through keyboard shortcuts. You’ve got the new keyboard shortcut menu bar where you can see all the shortcuts that are available. It’s great for discoverability. You can search them and, you know, this is a subtle point, but we even made a very conscious effort to rationalize the shortcuts across Mac and iPadOS. So that if you’re using Universal Control, for example, you’re going to go from one environment to the other seamlessly. You want to ensure that consistency as you go across.”
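
For developers, most of this keyboard support rides on existing UIKit plumbing. Below is a minimal sketch of how an app might surface its own discoverable shortcuts with UIKeyCommand; the command titles and actions here are illustrative, not anything Apple prescribes.

```swift
import UIKit

final class CanvasViewController: UIViewController {
    // Commands returned here surface in iPadOS's hold-Command shortcut
    // overlay, where they can be browsed and searched. Titles, inputs and
    // selectors are illustrative.
    override var keyCommands: [UIKeyCommand]? {
        [
            UIKeyCommand(title: "New Window",
                         action: #selector(openNewWindow),
                         input: "n",
                         modifierFlags: [.command, .shift]),
            UIKeyCommand(title: "Focus Sidebar",
                         action: #selector(focusSidebar),
                         input: UIKeyCommand.inputLeftArrow,
                         modifierFlags: [.command])
        ]
    }

    @objc private func openNewWindow() { /* request a new scene here */ }
    @objc private func focusSidebar() { /* move keyboard focus to the sidebar */ }
}
```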

The gestures, however, are staying as a nod to consistency for existing users who may be used to them.

To me, one of the more interesting and potentially powerful developments is the introduction of the Center Window and its accompanying API. A handful of Apple apps like Mail, Notes and Messages now allow items to pop out into an overlapping window.

“It was a very deliberate decision on our part,” says Marineau-Mes about adding this new element. “This really brings a new level of productivity where you can have, you know, this floating window. You can have content behind it. You can seamlessly cut and paste. And that’s something that’s just not possible with the traditional [iPadOS] model. And we also really strive to make it consistent with the rest of multitasking, where that center window can also become one of the windows in your split view, or full size, and then go back to being a center window. We think it’s a cool addition to the model and we really look forward to third parties embracing it.”
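
For third-party apps, this maps to the scene-activation options UIKit gained in iPadOS 15. The sketch below shows roughly how an app might request a new scene with the prominent (center window) presentation style; the activity type, userInfo keys and scene-handling details are assumptions on my part, and the exact option names are from memory rather than a definitive reference.

```swift
import UIKit

// Ask UIKit for a new scene presented as a center window (iPadOS 15's
// "prominent" style). The activity type and userInfo key are hypothetical
// and would need to match what the app's scene delegate handles.
func openNoteInCenterWindow(noteID: String) {
    let activity = NSUserActivity(activityType: "com.example.notes.openNote")
    activity.userInfo = ["noteID": noteID]

    let options = UIWindowScene.ActivationRequestOptions()
    options.preferredPresentationStyle = .prominent   // center window rather than split/slide over

    UIApplication.shared.requestSceneSessionActivation(
        nil,                      // nil lets the system create a new scene session
        userActivity: activity,
        options: options,
        errorHandler: { error in
            print("Scene activation failed: \(error)")
        }
    )
}
```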

Early reception of the look Apple gave at iPadOS 15 still carries an element of reservation, given that many of the most powerful creative apps are made by third parties that must adopt these technologies in order for them to be truly useful. But Apple, Borchers says, is working hard to make sure that pro apps adopt as many of these new paradigms and technologies as possible, so that come fall, the iPad will feel like a more hospitable host for the kinds of advanced work pros want to do there.

One of the nods to this multi-modal universe that the iPad exists in is Universal Control. This new feature uses Bluetooth beaconing, peer-to-peer WiFi and the iPad’s trackpad support to allow you to place your devices close to one another and — in a clever use of reading user intent — slide your pointer to the edge of a screen and onto your Mac or iPad seamlessly.

CUPERTINO, CALIFORNIA – June 7, 2021: Apple’s senior vice president of Software Engineering Craig Federighi showcases the ease of Universal Control, as seen in this still image from the keynote video of Apple’s Worldwide Developers Conference at Apple Park. (Photo Credit: Apple Inc.)

“I think what we have seen and observed from our users, both pro and otherwise, is that we have lots of people who have Macs and they have iPads and they have iPhones, and we believe in making these things work together in ways that are powerful,” says Borchers. “And it just felt like a natural place to be able to go and extend our Continuity model so that you could make use of this incredible platform that is iPadOS while working with your Mac, right next to it. And I think the big challenge was, how do you do that in kind of a magical, simple way. And that’s what Seb and his team have been able to accomplish.”

“It really builds on the foundation we made with Continuity and Sidecar,” adds Marineau-Mes. “We really thought a lot about how do you make the experience — the setup experience — as seamless as possible. How do you discover that you’ve got devices side by side?

The other thing we thought about was what are the workflows that people want to have and what capabilities will be essential for them. That’s where things like the ability to seamlessly drag content across the platforms, or cut and paste, were, we felt, really, really important. Because I think that’s really what brings the magic to the experience.”

Borchers adds that it makes all of the Continuity features that much more discoverable. Continuity’s shared clipboard, for instance, is an always-on but invisible presence. Expanding that to visual and mouse-driven models made natural sense.

“It’s just like, oh, of course, I can drag that all the way across here,” he says.

“Bob, you say, ‘of course,’” Marineau-Mes laughs. “And yet for those of us working in platforms for a long time, the ‘of course’ is technically very, very challenging. Totally non-obvious.”

Another area where iPadOS 15 is showing some promising expansionary behavior is in system-wide activities that allow you to break out of the box of in-app thinking. These include embedded recommendations that seed themselves into various apps; SharePlay, which makes an appearance wherever video calls are found; and Live Text, which turns all of your photos into indexed archives searchable with a keyboard.

Another is Quick Note, a system extension that lets you swipe up from the bottom corner of your screen to jot down a note wherever you are in the system.

“There are, I think, a few interesting things that we did with Quick Note,” says Marineau-Mes. “One is this idea of linking. So, that if I’m working in Safari or Yelp or another app, I can quickly insert a link to whatever content I’m viewing. I don’t know about you, but it’s something that I certainly do a lot when I do research.

“The old way was, like, cut and paste and maybe take a screenshot, create a note and jot down some notes. And now we’ve made that very, very seamless and fluid across the whole system. It even works the other way where, if I’m now in Safari and I have a note that refers to that page in Safari, you’ll see it revealed as a thumbnail at the bottom right-hand side of the screen. So, we’ve really tried to bring the notes experience to be something that just permeates the system and is easily accessible from everywhere.”
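
The app-linking side of Quick Note leans on NSUserActivity, the same object apps already publish for Handoff and Spotlight. Here is a rough sketch of the kind of adoption involved; the activity type string and identifiers are hypothetical, and the specific properties Quick Note looks at are my recollection rather than a confirmed list.

```swift
import UIKit

final class RecipeViewController: UIViewController {
    // Publish an NSUserActivity describing the content on screen so system
    // features such as Quick Note (and Handoff/Spotlight) can link back to it.
    // The activity type and identifiers are illustrative.
    func publishCurrentActivity(recipeID: String, title: String) {
        let activity = NSUserActivity(activityType: "com.example.recipes.view")
        activity.title = title
        activity.targetContentIdentifier = recipeID   // stable identifier for this content
        activity.persistentIdentifier = recipeID
        activity.isEligibleForHandoff = true

        userActivity = activity    // attach to the responder chain
        activity.becomeCurrent()
    }
}
```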

Many of the system-wide capabilities that Apple is introducing in iPadOS 15 and iOS 15 have an API that developers can tap into. That is not always the case with Apple’s newest toys, which in years past have often been left to linger in the private section of its list of frameworks rather than be offered to developers as a way to enhance their apps. Borchers says that this is an intentional move that offers a ‘broader foundation of intelligence’ across the entire system. 

This broader intelligence includes Siri moving a ton of commands to local, on-device processing, which meant moving a big chunk of Apple’s speech recognition on device in the new OS as well. The results, says Borchers, are a vastly improved day-to-day Siri experience, with many common commands executing immediately upon request — something that was a bit of a dice roll in days of Siri past. The removal of the reputational hit that Siri was taking from commands that went up to the cloud never to return could be the beginning of a turnaround in the public perception of Siri’s usefulness.
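
Siri’s own pipeline isn’t something developers can call into, but the Speech framework exposes the same on-device idea to third-party apps. A small sketch, assuming speech-recognition permission has already been granted; this is the public framework, not Siri’s internal stack.

```swift
import Speech

final class OnDeviceTranscriber {
    private var task: SFSpeechRecognitionTask?

    // Transcribe an audio file without sending it to the network, when the
    // current locale's recognizer supports on-device recognition.
    func transcribe(url: URL, completion: @escaping (String?) -> Void) {
        guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
            completion(nil)
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: url)
        if recognizer.supportsOnDeviceRecognition {
            request.requiresOnDeviceRecognition = true  // keep the audio local
        }

        task = recognizer.recognitionTask(with: request) { result, error in
            if let result = result, result.isFinal {
                completion(result.bestTranscription.formattedString)
            } else if error != nil {
                completion(nil)
            }
        }
    }
}
```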

The on-device weaving of the intelligence provided by the Apple Neural Engine (ANE) also includes the indexing of text across photos in the entire system, past, present and in-the-moment.

“We could have done Live Text only in Camera and Photos, but we wanted it to apply anywhere we’ve got images, whether it be in Safari or Quick Look or wherever,” says Marineau-Mes. “One of my favorite demos of Live Text is actually when you’ve got that long, complicated field for a password for a Wi-Fi network. You can just actually bring it up within the keyboard and take a picture of it, get the text in it and copy and paste it into the field. It’s one of those things that’s just kind of magical.”
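
At launch, Live Text itself is a system feature rather than a developer API, but the Vision framework’s on-device text recognition is the closest public analogue for apps that want similar behavior. A minimal sketch:

```swift
import UIKit
import Vision

// Recognize text in a photo on device with the Vision framework, the
// closest public analogue to what Live Text does system-wide.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Take the best candidate string for each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```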

On the developer front of iPadOS 15, I ask specifically about Swift Playgrounds, which adds the ability to write, compile and ship apps to the App Store for the first time entirely on iPad. It’s not the native Xcode some developers were hoping for but, Borchers says, Playgrounds has moved beyond just ‘teaching people how to code’ and into being a real part of many developer pipelines.

“I think one of the big insights here was that we also saw a number of pro developers using it as a prototyping platform, and a way to be able to be on the bus, or in the park, or wherever. If you wanted to get in and give something a try, this was a super accessible and easy way to get there, and could be a nice adjunct to, hey, I want to learn to code.”

“If you’re a developer,” adds Marineau-Mes, “it’s actually more productive to be able to run that app on the device that you’re working on because you really get great fidelity. And with the open project format, you can go back and forth between Xcode and Playgrounds. So, as Bob said, we can really envision people using this for a lot of rapid prototyping on the go without having to bring along the rest of their development environment, so we think it’s a really, really powerful addition to our development tools this year.”

Way back in 2018 I profiled a new team at Apple that was building out a testing apparatus to help make sure it was addressing real-world use cases and workflows involving machines like the (at the time un-revealed) new Mac Pro, iMacs, MacBooks and iPads. One of the demos that stood out at the time was a deep integration with music apps like Logic that would allow the input models of iPad to complement the core app: tapping out a rhythm on a pad, or brightening and adjusting sound more intuitively with the touch interface. More of Apple’s work these days seems to be aimed at allowing users to move seamlessly back and forth between its various computing platforms, taking advantage of the strengths of each (raw power, portability, touch, etc.) to complement a workflow. A lot of iPadOS 15 appears to be geared this way.

Whether it will be enough to turn the corner on the perception of the iPad as a work device held back by its software, I’ll reserve judgment until it ships later this year. But in the near term, I am cautiously optimistic: this set of enhancements that break out of the ‘app box’, the clearer affordances for multitasking both in and out of single apps and the dedication to API support all point toward an expansionist mentality on the iPad software team. A good sign in general.

Apple’s new ShazamKit brings audio recognition to apps, including those on Android

Apple in 2018 closed its $400 million acquisition of music recognition app Shazam. Now, it’s bringing Shazam’s audio recognition capabilities to app developers in the form of the new ShazamKit. The new framework will allow app developers — including those on both Apple platforms and Android — to build apps that can identify music from Shazam’s huge database of songs, or even from their own custom catalog of pre-recorded audio.

Many consumers are already familiar with the mobile app Shazam, which lets you push a button to identify what song you’re hearing, and then take other actions — like viewing the lyrics, adding the song to a playlist, exploring music trends, and more. Having first launched in 2008, Shazam was already one of the oldest apps on the App Store when Apple snatched it up.

Now the company is putting Shazam to better use than being just a music identification utility. With the new ShazamKit, developers will now be able to leverage Shazam’s audio recognition capabilities to create their own app experiences.

There are three parts to the new framework: Shazam catalog recognition, which lets developers add song recognition to their apps; custom catalog recognition, which performs on-device matching against arbitrary audio; and library management.

Shazam catalog recognition is what you probably think of when you think of the Shazam experience today. The technology can recognize the song that’s playing in the environment and then fetch the song’s metadata, like the title and artist. The ShazamKit API will also be able to return other metadata like genre or album art, for example. And it can identify where in the audio the match occurred.

When matching music, Shazam doesn’t actually match the audio itself, to be clear. Instead, it creates a lossy representation of it, called a signature, and matches against that. This method greatly reduces the amount of data that needs to be sent over the network. Signatures also cannot be used to reconstruct the original audio, which protects user privacy.
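
In code, the catalog-matching flow centers on an SHSession fed with audio buffers; ShazamKit generates the signature and sends only that to the service. The sketch below assumes the audio capture (for example, an AVAudioEngine tap) is set up elsewhere, and the delegate method names reflect the beta API as I understand it.

```swift
import AVFoundation
import ShazamKit

final class SongMatcher: NSObject, SHSessionDelegate {
    private let session = SHSession()   // matches against the hosted Shazam catalog

    override init() {
        super.init()
        session.delegate = self
    }

    // Feed PCM buffers from an AVAudioEngine tap (capture code omitted).
    func append(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        session.matchStreamingBuffer(buffer, at: time)
    }

    // ShazamKit builds a signature from the buffered audio and reports a match.
    func session(_ session: SHSession, didFind match: SHMatch) {
        if let item = match.mediaItems.first {
            print("Matched: \(item.title ?? "?") by \(item.artist ?? "?")")
        }
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        print("No match: \(error?.localizedDescription ?? "no error info")")
    }
}
```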

The Shazam catalog comprises millions of songs, is hosted in the cloud and is maintained by Apple. It’s regularly updated with new tracks as they become available.

When a customer uses a developer’s third-party app for music recognition via ShazamKit, they may want to save the song in their Shazam library. This is found in the Shazam app, if the user has it installed, or it can be accessed by long pressing on the music recognition Control Center module. The library is also synced across devices.

Apple suggests that apps make their users aware that recognized songs will be saved to this library, as there’s no special permission required to write to the library.

ShazamKit’s custom catalog recognition feature, meanwhile, could be used to create synced activities or other second-screen experiences in apps by recognizing the developer’s own audio rather than tracks from the Shazam music catalog.

This could allow for educational apps where students follow along with a video lesson, and some portion of the lesson’s audio prompts an activity to begin in the student’s companion app. It could also be used to enable mobile shopping experiences that pop up as you watch a favorite TV show.
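
A custom-catalog session looks much the same, except the app supplies its own reference signatures and metadata and matching happens on device. A rough sketch, assuming a pre-computed signature file shipped with the app; the file name and metadata are placeholders, and the API names are from the beta as I understand it.

```swift
import Foundation
import ShazamKit

// Build a custom catalog from a pre-computed signature of our own audio
// (say, a lesson's soundtrack) and create a session that matches against
// it instead of the Shazam music catalog.
func makeLessonSession() throws -> SHSession {
    let catalog = SHCustomCatalog()

    let signatureData = try Data(contentsOf: URL(fileURLWithPath: "lesson1.shazamsignature"))
    let signature = try SHSignature(dataRepresentation: signatureData)

    // Our own metadata, so a match tells the app which activity to start.
    let mediaItem = SHMediaItem(properties: [.title: "Lesson 1, Chapter 2"])
    try catalog.addReferenceSignature(signature, representing: [mediaItem])

    return SHSession(catalog: catalog)   // matching against a custom catalog runs on device
}
```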

ShazamKit is currently in beta on iOS 15.0+, macOS 12.0+, Mac Catalyst 15.0+, tvOS 15.0+ and watchOS 8.0+. On Android, ShazamKit comes in the form of an Android Archive (AAR) file and supports music and custom audio as well.

Apple finally launches a Screen Time API for app developers

Just after the release of iOS 12 in 2018, Apple introduced its own built-in screen time tracking tools and controls. It then began cracking down on third-party apps that had implemented their own screen time systems, saying they had done so through technologies that risked user privacy. What wasn’t available at the time? A Screen Time API that would have allowed developers to tap into Apple’s own Screen Time system and build their own experiences that augmented its capabilities. That’s now changed.

At its Worldwide Developers Conference on Monday, Apple introduced a new Screen Time API that gives developers access to frameworks for building parental control experiences that also maintain user privacy.

The company added three new Swift frameworks to the iOS SDK that will allow developers to create apps that help parents manage what a child can do across their devices and ensure those restrictions stay in place.

The apps that use this API will be able to set restrictions like locking accounts in place, preventing password changes, filtering web traffic, and limiting access to applications. These sorts of changes are already available through Apple’s Screen Time system, but developers can now build their own experiences where these features are offered under their own branding and where they can then expand on the functionality provided by Apple’s system.
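
Concretely, those restrictions are applied through the new FamilyControls and ManagedSettings frameworks. The following is a hedged sketch based on what Apple described at WWDC; the exact property names may differ from the shipping SDK, and the app/website selection is assumed to come from the system's FamilyActivityPicker, which hands back the opaque tokens discussed below.

```swift
import FamilyControls
import ManagedSettings

final class ParentalControls {
    private let store = ManagedSettingsStore()

    // Ask the system to authorize this app for Family Controls; on iOS 15
    // this involves the parent approving the request.
    func requestAuthorization() {
        AuthorizationCenter.shared.requestAuthorization { result in
            if case .failure(let error) = result {
                print("Not authorized: \(error)")
            }
        }
    }

    // `selection` comes from the system-provided FamilyActivityPicker, which
    // returns opaque tokens rather than bundle identifiers or URLs (the
    // privacy point Apple emphasizes).
    func applyLimits(selection: FamilyActivitySelection) {
        store.shield.applications = selection.applicationTokens   // block the chosen apps
        store.shield.webDomains = selection.webDomainTokens       // filter the chosen sites
        store.account.lockAccounts = true                         // prevent account changes
        store.passcode.lockPasscode = true                        // prevent passcode changes
    }
}
```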

Apps that take advantage of the API can also be locked in place so they can only be removed from the device with a parent’s approval.

The apps can authenticate the parents and ensure the device they’re managing belongs to a child in the family. Plus, Apple said, the way the system works lets parents choose the apps and websites they want to limit without compromising user privacy. (The system returns only opaque tokens instead of identifiers for the apps and website URLs, Apple told developers, so third parties aren’t gaining access to private user data like app usage and web browsing details. This would prevent a shady company from building a Screen Time app only to collect troves of user data about app usage, for instance.)

The third-party apps can also create unique time windows for different apps or types of activities and warn the child when time is nearly up. When it registers that time is up, the app can lock down access to websites and apps and perhaps remind the child it’s time to do their homework — or whatever other experience the developer has in mind.
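
The time-window piece is handled by what I understand to be the third framework in the set, DeviceActivity, which schedules monitored intervals and notifies an app extension when they begin, end or are about to end. A sketch with an illustrative schedule follows; the extension that actually applies or lifts restrictions at those moments is registered separately.

```swift
import DeviceActivity

// Schedule a repeating daily 3pm-5pm monitoring window. A separate
// DeviceActivityMonitor app extension receives the interval start/end
// (and warning) callbacks and can apply or lift ManagedSettings
// restrictions at those moments.
func startAfterSchoolMonitoring() {
    let schedule = DeviceActivitySchedule(
        intervalStart: DateComponents(hour: 15, minute: 0),
        intervalEnd: DateComponents(hour: 17, minute: 0),
        repeats: true,
        warningTime: DateComponents(minute: 5)   // the "time is nearly up" warning
    )

    let activity = DeviceActivityName("afterSchoolScreenTime")

    do {
        try DeviceActivityCenter().startMonitoring(activity, during: schedule)
    } catch {
        print("Could not start monitoring: \(error)")
    }
}
```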

And on the flip side, the apps could create incentives for the child to gain screen time access after they complete some other task, like doing homework, reading or chores, or anything else.

Developers could use these features to design new experiences that Apple’s own Screen Time system doesn’t allow for today, by layering their own ideas on top of Apple’s basic set of controls. Parents would likely fork over their cash to make using Screen Time controls easier and more customized to their needs.

Other apps could tie into Screen Time too, outside of the “family” context — like those aimed at mental health and wellbeing, for example.

Of course, developers have been asking for a Screen Time API since the launch of Screen Time itself, but Apple didn’t seem to prioritize its development until the matter of Apple’s removal of rival screen time apps was brought up in an antitrust hearing last year. At the time, Apple CEO Tim Cook defended the company’s decision by explaining that apps had been using MDM (mobile device management) technology, which was designed for managing employee devices in the enterprise, not home use. This, he said, was a privacy risk.

Apple has a session during WWDC that will detail how the new API works, so we expect we’ll learn more soon as the developer info becomes more public.

read more about Apple's WWDC 2021 on TechCrunch

Tezlab CEO Ben Schippers to discuss the Tesla effect and the next wave of EV startups at TC Sessions: Mobility 2021

As Tesla sales have risen, interest in the company has exploded, prompting investment and interest in the automotive industry, as well as the startup world.

Tezlab, a free app that’s like a Fitbit for a Tesla vehicle, is just one example of the numerous startups that have sprung up in the past few years as electric vehicles have started to make the tiniest of dents in global sales. Now, as Ford, GM, Volvo, Hyundai along with newcomers Rivian, Fisker and others launch electric vehicles into the marketplace, more startups are sure to follow.

Ben Schippers, the co-founder and CEO of Tezlab, is one of two early-stage founders who will join us at TC Sessions: Mobility 2021 to talk about their startups and the opportunities cropping up in this emerging age of EVs. The six-person team behind TezLab was born out of HappyFunCorp, a software engineering shop that builds apps for mobile, web, wearables and Internet of Things devices for clients that include Amazon, Facebook and Twitter, as well as an array of startups.

HFC’s engineers, including Schippers, who also co-founded HFC, were attracted to Tesla because of its tech-centric approach and one important detail: the Tesla API endpoints are accessible to outsiders. The Tesla API is technically private, but it exists to allow Tesla’s app to communicate with the cars to do things like read battery charge status and lock doors. When reverse-engineered, it’s possible for a third-party app to communicate directly with the API.

Schippers’ experience extends beyond scaling up Tezlab. He consults and works with companies focused on technology and human interaction, with a sub-focus on EVs.

The list of speakers at our 2021 event is growing by the day and includes Motional’s president and CEO Karl Iagnemma and Aurora co-founder and CEO Chris Urmson, who will discuss the past, present and future of AVs. On the electric front is Mate Rimac, the founder of Rimac Automobili, who will talk about scaling his startup from a one-man enterprise in a garage to more than 1,000 people and contracts with major automakers.

We also recently announced a panel dedicated to China’s robotaxi industry, featuring three female leaders from Chinese AV startups: AutoX COO Jewel Li; Huan Sun, general manager of Momenta Europe; and WeRide VP of Finance Jennifer Li.

Other guests include GM’s VP of Global Innovation Pam Fletcher, Scale AI CEO Alexandr Wang, Joby Aviation founder and CEO JoeBen Bevirt, investor and LinkedIn founder Reid Hoffman (whose special purpose acquisition company just merged with Joby), investors Clara Brenner of Urban Innovation Fund, Quin Garcia of Autotech Ventures and Rachel Holt of Construct Capital, and Zoox co-founder and CTO Jesse Levinson.

And we may even have one more surprise — a classic TechCrunch stealth company reveal to close the show.

Don’t wait to book your tickets to TC Sessions: Mobility as prices go up at our virtual door.

This one email explains Apple

An email has been going around the internet as part of a release of documents related to Epic Games’ App Store suit against Apple. I love this email for a lot of reasons, not the least of which is that you can extrapolate from it the very reasons Apple has remained such a vital force in the industry for the past decade.

The gist of it is that SVP of Software Engineering, Bertrand Serlet, sent an email in October of 2007, just three months after the iPhone was launched. In the email, Serlet outlines essentially every core feature of Apple’s App Store — a business that brought in an estimated $64B in 2020. And that, more importantly, allowed the launch of countless titanic internet startups and businesses built on and taking advantage of native apps on iPhone.

Forty-five minutes after the email, Steve Jobs replies to Serlet and iPhone lead Scott Forstall, from his iPhone, “Sure, as long as we can roll it all out at Macworld on Jan 15, 2008.”

Apple University should have a course dedicated to this email. 

Here it is, shared by an account I enjoy, Internal Tech Emails, on Twitter. If you run the account let me know, happy to credit you further here if you wish:

First, we have Serlet’s outline. It’s seven sentences that outline the key tenets of the App Store. User protection, network protection, an owned developer platform and a sustainable API approach. There is a direct ask for resources — whoever we need in software engineering — to get it shipped ASAP. 

It also has a clear ask at the bottom, ‘do you agree with these goals?’

Enough detail is included in the parentheticals to allow an informed reader to infer scope and work hours. And at no point during this email does Serlet include an ounce of justification for these choices. This is, in his mind, the obvious and necessary framework for accomplishing the rollout of an SDK for iPhone developers.

There is no extensive rationale provided for each item, something that is often unnecessary in an informed context and can often act as psychic baggage that telegraphs one of two things:

  1. You don’t believe the leader you’re outlining the project to knows what the hell they’re talking about.
  2. You don’t believe it and you’re still trying to convince yourself. 

Neither one of those is the wisest way to provide an initial scope of work. There is plenty of time down the line to flesh out rationale to those who have less command of the larger context. 

If you’re a historian of iPhone software development, you’ll know that developer Nullriver had released Installer, a third-party installer that allowed apps to be natively loaded onto the iPhone, in the summer of 2007. Early September, I believe. It was followed in 2008 by the eventually far more popular Cydia. And there were developers that August and September already experimenting with this completely unofficial way of getting apps onto the phone, like the venerable Twitterrific by Craig Hockenberry and Lights Off by Lucas Newman and Adam Betts.

Though there has been plenty of established documentation of Steve being reluctant about allowing third-party apps on iPhone, this email establishes an official timeline for when the decision was not only made but essentially fully formed. And it’s much earlier than the apocryphal discussion about when the call was made. This is just weeks after the first hacky third-party attempts had made their way to iPhone and just under two months since the first iPhone jailbreak toolchain appeared. 

There is no need or desire shown here for Steve to ‘make sure’ that his touch is felt on this framework. All too often I see leaders that are obsessed with making sure that they give feedback and input at every turn. Why did you hire those people in the first place? Was it for their skill and acumen? Their attention to detail? Their obsessive desire to get things right?

Then let them do their job. 

Serlet’s email is well written and has the exact right scope, yes. But the response is just as important. A demand of what is likely too short a timeline (the App Store was eventually announced in March of 2008 and shipped in July of that year) sets the bar high — matching the urgency of the request for all teams to work together on this project. This is not a side alley, it’s the foundation of a main thoroughfare. It must get built before anything goes on top. 

This efficacy is at the core of what makes Apple good when it is good. It’s not always good, but nothing ever is 100% of the time and the hit record is incredibly strong across a decade’s worth of shipped software and hardware. Crisp, lean communication that does not coddle or equivocate, coupled with a leader that is confident in their own ability and the ability of those that they hired means that there is no need to bog down the process in order to establish a record of involvement. 

One cannot exist without the other. A clear, well-argued RFP or project outline that is sent up to insecure or ineffective management just becomes fodder for territorial games or endless rounds of requests for clarification. And no matter how effective leadership is and how talented its employees, if they do not establish an environment in which clarity of thought is welcomed and rewarded, then they will never get the kind of bold, declarative product development that they wish for.

All in all, this exchange is a wildly important bit of ephemera that underpins the entire app ecosystem era and an explosive growth phase for Internet technology. And it’s also an encapsulation of the kind of environment that has made Apple an effective and brutally efficient company for so many years. 

Can it be learned from and emulated? Probably, but only if all involved are willing to create the environment necessary to foster the necessary elements above. Nine times out of ten you get moribund management, an environment that discourages blunt position taking and a muddy route to the exit. The tenth time, though, you get magic.

And, hey, maybe we can take this opportunity to make that next meeting an email?

Facebook’s Spark AR platform expands to video calling with Multipeer API

At today’s F8 developer conference, Facebook announced new capabilities for Spark AR, its flagship AR creation software. Since Spark AR was announced at F8 2017, more than 600,000 creators from 190 countries have published over 2 million AR effects on Facebook and Instagram, making it the largest mobile AR platform, according to Facebook. If you’ve ever posted a selfie on your Instagram story with an effect that gave you green hair, or let you control a dog’s facial expression by moving your own face, then you’ve used Spark AR.

Soon, these AR effects will be available for video calling on Messenger, Instagram, and Portal with the introduction of a Multipeer API. Creators can develop effects that bring call participants together by using a shared AR effect. As an example, Spark AR shared a promo video of a birthday party held over a video call, in which an AR party hat appears on each of the participants’ heads. 

Creators can also develop games for users to play during their video calls. This already exists on Facebook video calls – think of the game where you compete to see who can catch the most flying AR hamburgers in their mouth in a minute. But when the ability to make new, lightweight games opens to developers, we’ll see some new games to challenge our friends with on video calls. 

These video call effects and multipeer AR games will be bolstered by Spark’s platform-exclusive multi-class segmentation capability. This lets developers augment multiple segments of a user’s body (like hair or skin) at once within a single effect.

Facebook also discussed its ongoing ambition to build AR glasses. Chris Barber, director of partnerships for Spark AR, said that this goal is still “years away” – but Barber did tease some potential features for the innovative, wearable tech.

“Imagine being able to teleport to a friend’s sofa to watch a show together, or being able to share a photo of something awesome you see on a hike,” Barber said. Maybe this won’t sound so dystopian by the time the product launches, years down the road. 

Last October, Spark AR launched the AR Partner Network, a program for the platform’s most advanced creators, and this year, Spark launched an AR curriculum through Facebook’s Blueprint platform to help creators learn how to improve their AR effects. Applications for the Spark Partner Network will open again this summer. For now, creators and developers can apply to start building effects for video calling through the Spark AR Video Calling Beta.

Goldman Sachs leads $202M investment in project44, doubling its valuation to $1.2B in a matter of months

The COVID-19 pandemic disrupted a lot in the world, and supply chains are no exception. 

A number of applications that aim to solve workflow challenges across the supply chain exist. But getting real-time access to information from transportation providers has remained somewhat elusive for shippers and logistics companies alike. 

Enter project44. The seven-year-old Chicago-based company has built an API-based platform that it says acts as “the connective tissue” between transportation providers, third-party logistics companies, shippers and their supply chain systems. Using predictive analytics, the platform provides crucial real-time information such as estimated time of arrivals (ETAs).

“Supply chains have undergone an incredible amount of change — there has never been a greater need for agility, resiliency and the ability to rapidly respond to changes across the supply chain,” said Jason Duboe, the company’s chief growth officer.

And now, project44 announced it has raised $202 million in a Series E funding round led by Goldman Sachs Asset Management and Emergence Capital. Girteka and Lineage Logistics also participated in the financing, which gives project44 a post-money valuation of $1.2 billion. That doubles the company’s valuation at the time of its Insight Partners-led $100 million Series D in December, and brings its total raised since inception to $442.5 million.

The raise is quite possibly the largest investment in the supply chain visibility space to date.

Project44 is one of those refreshingly transparent private companies that gives insight into its financials. This month, the company says it crossed $50 million in annual recurring revenue (ARR), which is up 100% year over year. It has more than 600 customers, including some of the world’s largest brands such as Amazon, Walmart, Nestlé, Starbucks, Unilever, Lenovo and P&G. Customers hail from a variety of industries, including CPG, retail, e-commerce, manufacturing, pharma and chemical.

Over the last year, the pandemic created a number of supply chain disruptions, underscoring the importance of technologies that help provide visibility into supply chain operations. Project44 said it worked hard to help customers mitigate “relentless volatility, bottlenecks, and logistics breakdowns,” including during the Suez Canal incident where a cargo ship got stuck for days.

Looking ahead, project44 plans to use its new capital in part to continue its global expansion. Project44 recently announced its expansion into China and has plans to grow in the Asia-Pacific, Australia/New Zealand and Latin American markets, according to Duboe.

“We are also going to continue to invest heavily in our carrier products to enable more participation and engagement from the transportation community that desires a stronger digital experience to improve efficiency and experience for their customers,” he told TechCrunch. The company also aims to expand its artificial intelligence (AI) and data science capabilities and broaden sales and marketing reach globally.

Last week, project44 announced its acquisition of ClearMetal, a San Francisco-based supply chain planning software company that focuses on international freight visibility, predictive planning and overall customer experience. With the buy, Duboe said project44 will now have two contracts with Amazon: road and ocean. 

“Project44 will power what they are chasing,” he added.

And in March, the company also acquired Ocean Insights to expand its ocean offerings.

Will Chen, a managing director of Goldman Sachs Asset Management, believes that project44 is unique in its scope of network coverage across geographies and modes of transport.  

“Most competitors predominantly focus on over-the-road visibility and primarily serve one region, whereas project44 is a truly global business that provides end-to-end visibility across their customers’ entire supply chain,” he said.

Goldman Sachs Asset Management, noted project44 CEO and founder Jett McCandless, will help the company grow not only by providing capital but through its network and resources.

Belvo, LatAm’s answer to Plaid, raises $43M to scale its API for financial services

Belvo, a Latin American startup which has built an open finance API platform, announced today it has raised $43 million in a Series A round of funding.

A mix of Silicon Valley and Latin American-based VC firms and angels participated in the financing, including Future Positive, Kibo Ventures, FJ Labs, Kaszek, MAYA Capital, Venture Friends, Rappi co-founder and president Sebastián Mejía, Harsh Sinha, CTO of Wise (formerly Transferwise) and Nubank CEO and founder David Vélez.

Citing Crunchbase data, Belvo believes the round represents the largest Series A ever raised by a Latin American fintech. In May 2020, Belvo raised a $10 million seed round co-led by Silicon Valley’s Founders Fund and Argentina’s Kaszek.

Belvo aims to work with leading fintechs in Latin America, spanning verticals like neobanks, credit providers and the personal finance products Latin Americans use every day.

The startup’s developer-first API platform can be used to access and interpret end-user financial data, with the goal of enabling better, more efficient and more inclusive financial products in Latin America. Developers of popular neobank apps, credit providers and personal finance tools use Belvo’s API to connect bank accounts to their apps and unlock the power of open banking.

As TechCrunch Senior Editor Alex Wilhelm explained in this piece last year, Belvo might be considered similar to U.S.-based Plaid, but more attuned to the Latin American market so it can take in a more diverse set of data to better meet the needs of the various markets it serves. 

So while Belvo’s goals are “similar to the overarching goal[s] of Plaid,” co-founder and co-CEO Pablo Viguera told TechCrunch that Belvo is not merely building a banking API business hoping to connect apps to financial accounts. Instead, Belvo wants to build a finance API, which takes in more information than is normally collected by such systems. Latin America is massively underbanked and unbanked so the more data from more sources, the better.

“In essence, we’re pushing for similar outcomes [as Plaid] in terms of when you think about open banking or open finance,” Viguera said. “We’re working to democratize access to financial data and empower end users to port that data, and share that data with whoever they want.”

The company operates under the premise that just because a significant number of the region’s population is underbanked doesn’t mean that they aren’t still financially active. Belvo’s goal is to link all sorts of accounts. For example, Viguera told TechCrunch that some gig economy companies in Latin America are issuing their own cards that allow workers to cash out at small local shops. In time, all those transactions are data that could be linked up using Belvo, casting a far wider net than what we’re used to domestically.

The company’s work to connect banks and non-banks together is key to the company’s goal of allowing “any fintech or any developer to access and interpret user financial data,” according to Viguera.

Viguera and co-CEO Oriol Tintoré founded Belvo in May of 2019, and it was part of Y Combinator’s Winter 2020 batch. Since launching its platform last year, the company says it has built a customer base of more than 60 companies across Mexico, Brazil and Colombia, handling millions of monthly API calls. 

This is important because as Alex noted last year, similar to other players in the API-space, Belvo charges for each API call that its customers use (in this sense, it has a model similar to Twilio’s). 

Image Credits: Co-founders and co-CEOs Oriol Tintoré and Pablo Viguera / Belvo

Also, over the past year, Belvo says it expanded its API coverage to over 40 financial institutions, which gives companies the ability to connect to more than 90% of personal and business bank accounts in LatAm, as well as to tax authorities (such as the SAT in Mexico) and gig economy platforms.

“Essentially we take unstructured financial data, which an individual might have outside of a bank such as integrations we have with gig economy platforms such as Uber and Rappi. We can take a driver’s information from their Uber app, which is kind of built like a bank app, and turn it into meaningful bank-like info which third parties can leverage to make assessments as if it’s data coming from a bank,” Viguera explained.

The startup plans to use its new capital to scale its product offering, continue expanding its geographic footprint and double its current headcount of 70. Specifically, Belvo plans to hire more than 50 engineers in Mexico and Brazil by year’s end. It currently has offices in Mexico City, São Paulo and Barcelona. The company also aims to launch its bank-to-bank payment initiation offering in Mexico and Brazil.

Belvo currently operates in Mexico, Colombia and Brazil. 

But it’s seeing “a lot of opportunity” in other markets in Latin America, especially in Chile, Peru and Argentina, Viguera told TechCrunch. “In due course, we will look to pursue expansion there.” 

Fred Blackford, founding partner of Future Positive, believes Belvo represents a “truly transformational opportunity for the region’s financial sector.”

Nicolás Szekasy, co-founder and managing partner of Kaszek, noted that demand for financial services in Latin America is growing at an exponential rate.

“Belvo is developing the infrastructure that will enable both the larger institutions and the emerging generation of younger players to successfully deploy their solutions,” he said. “Oriol, Pablo and the Belvo team have been leading the development of a sophisticated platform that resolves very complex technical challenges, and the company’s exponential growth reflects how it is delivering a product that fits perfectly with the requirements of the market.” 

Peloton and Echelon profile photo metadata exposed riders’ real-world locations

Security researchers say at-home exercise giant Peloton and its closest rival Echelon were not stripping user-uploaded profile photos of their metadata, in some cases exposing users’ real-world location data.

Almost every file, photo or document contains metadata, which is data about the file itself, such as how big it is, when it was created and by whom. Photos and videos will often also include the location where they were taken. That location data helps online services tag your photos or videos with the fact that you were at this restaurant or that landmark.

But those online services — especially social platforms, where you see people’s profile photos — are supposed to remove location data from the file’s metadata so other users can’t snoop on where you’ve been, since location data can reveal where you live and work, where you go, and who you see.
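
The fix on the service side is conceptually simple: re-encode uploaded images without their location metadata before storing or serving them. Here is a generic sketch of that mitigation using Apple’s ImageIO framework; it is not Peloton’s or Echelon’s actual implementation, which we haven’t seen.

```swift
import Foundation
import ImageIO

// Re-encode an uploaded image with its GPS (and EXIF) metadata removed.
func stripLocationMetadata(from imageData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let type = CGImageSourceGetType(source) else { return nil }

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, type, 1, nil) else {
        return nil
    }

    // Passing kCFNull for a metadata dictionary tells ImageIO to drop it.
    let removals: [CFString: Any] = [
        kCGImagePropertyGPSDictionary: kCFNull as Any,
        kCGImagePropertyExifDictionary: kCFNull as Any
    ]

    CGImageDestinationAddImageFromSource(destination, source, 0, removals as CFDictionary)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return output as Data
}
```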

Jan Masters, a security researcher at Pen Test Partners, found the metadata exposure as part of a wider look at Peloton’s leaky API. TechCrunch verified the bug by uploading a profile photo with GPS coordinates of our New York office, and checking the metadata of the file while it was on the server.

The bugs were privately reported to both Peloton and Echelon.

Peloton fixed its API issues earlier this month but said it needed more time to fix the metadata bug and to strip existing profile photos of any location data. A Peloton spokesperson confirmed the bugs were fixed last week. Echelon fixed its version of the bug earlier this month. But TechCrunch held this report until we had confirmation that both companies had fixed the bug and that metadata had been stripped from old profile photos.

It’s not known how long the bug existed or if anyone maliciously exploited it to scrape users’ personal information. Any copies, whether cached or scraped, could represent a significant privacy risk to users whose location identifies their home address, workplace, or other private location.

Parler infamously didn’t scrub metadata from user-uploaded photos, which exposed the locations of millions of users when archivists exploited weaknesses in the platform’s API to download its entire contents. Others, like Slack, have been slower to adopt metadata stripping, even if they got there in the end.

Snap emphasizes commerce in updates to its camera and AR platforms

At Snap’s Partner Summit, the company announced a number of updates to its developer tools and AR-focused Lens Studio, including several focused on bringing shopping deeper into the Snapchat experience.

One of the cooler updates involved the company’s computer vision Scan product, which analyzes content in a user’s camera feed to quickly bring up relevant information. Snap says the feature is used by around 170 million users per month. Scan, which has now been given more prominent placement inside the camera section of the app, has been upgraded with commerce capabilities via a feature called Screenshop.

Users can now use their Snap Camera to scan a friend’s outfit after which they’ll quickly be served up shopping recommendations from hundreds of brands. The company is using the same technology for another upcoming feature that will allow users to snap pictures of ingredients in their kitchen and get served recipes from Allrecipes that integrate them.

The features are part of a broader effort to intelligently suggest lenses to users based on what their camera is currently focused on.

Businesses will now be able to establish public profiles inside Snapchat where users can see all of their different offerings, including Lenses, Highlights, Stories and items for sale through Shop functionality.

On the augmented reality side, Snap is continuing to emphasize business solutions with API integrations that make lenses smarter. Retailers will be able to use the Business Manager to integrate their product catalogs so that users can only access try-on lenses for products that are currently in stock.

Partnerships with luxury fashion platform Farfetch and Prada will tap into further updates to the AR platform including technical 3D mesh advances that make trying on clothing virtually appear more realistic. Users will also be able to use voice commands and visual gestures to cycle between items they’re trying on in the new experiences.

“We’re excited about the power of our camera platform to bring Snapchatters together with the businesses they care about in meaningful ways,” said Snap’s global AR product lead Carolina Arguelles Navas. “And, now more than ever, our community is eager to experience and try on, engage with, and learn about new products, from home.”