Scale AI releases free lidar dataset to power self-driving car development

High-quality data is the fuel that powers AI algorithms. Without a continual flow of labeled data, bottlenecks occur, algorithms slowly degrade and risk creeps into the system.

It’s why labeled data is so critical for companies like Zoox, Cruise and Waymo, which use it to train machine learning models to develop and deploy autonomous vehicles. That need is what led to the creation of Scale AI, a startup that uses software and people to process and label image, lidar and map data for companies building machine learning algorithms. Companies working on autonomous vehicle technology make up a large swath of Scale’s customer base, although its platform is also used by Airbnb, Pinterest and OpenAI, among others.

The COVID-19 pandemic has slowed, or even halted, that flow of data as AV companies suspended testing on public roads — the means of collecting billions of images. Scale is hoping to turn the tap back on, and for free.

The company, in collaboration with lidar manufacturer Hesai, this week launched an open-source dataset called PandaSet that can be used to train machine learning models for autonomous driving. The dataset, which is free and licensed for academic and commercial use, includes data collected using Hesai’s forward-facing PandarGT lidar, which has image-like resolution, as well as its mechanical spinning lidar, the Pandar64. The data was collected while driving in urban areas of San Francisco and Silicon Valley before officials issued stay-at-home orders in the area, according to the company.

“AI and machine learning are incredible technologies with an incredible potential for impact, but also a huge pain in the ass,” Scale CEO and co-founder Alexandr Wang told TechCrunch in a recent interview. “Machine learning is definitely a garbage in, garbage out kind of framework — you really need high quality data to be able to power these algorithms. It’s why we built Scale and it’s also why we’re using this dataset today to help drive forward the industry with an open source perspective.”

The goal with this lidar dataset was to give free access to dense, content-rich data, which Wang said was achieved by using two kinds of lidars in complex urban environments filled with cars, bikes, traffic lights and pedestrians.

“The Zoox and the Cruises of the world will often talk about how battle-tested their systems are in these dense urban environments,” Wang said. “We wanted to really expose that to the whole community.”

[GIF: Scale AI PandaSet lidar flyover. Image Credits: Scale AI]

The dataset includes more than 48,000 camera images and 16,000 lidar sweeps — more than 100 scenes of eight seconds each, according to the company. It also includes 28 annotation classes for each scene and 37 semantic segmentation labels for most scenes. Traditional cuboid labeling, the little boxes placed around a bike or car, for instance, can’t adequately capture all of the lidar data, so Scale uses a point cloud segmentation tool to precisely annotate complex objects like rain.
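For developers who want to poke at the data directly, Scale distributes PandaSet alongside a Python devkit on GitHub. The snippet below is a rough sketch only; the package name, DataSet class, local path and sequence ID are taken from the devkit's documented usage and should be treated as assumptions rather than verified API details.

from pandaset import DataSet  # companion devkit, installable from the PandaSet GitHub repo

# Point the loader at the downloaded dataset and pick one ~8-second sequence by ID.
dataset = DataSet("/data/pandaset")   # hypothetical local path to the downloaded dataset
sequence = dataset["002"]             # hypothetical sequence ID
sequence.load()                       # pulls in lidar sweeps, camera frames and annotations

points = sequence.lidar[0]    # lidar sweep for frame 0 (x, y, z, intensity columns)
boxes = sequence.cuboids[0]   # cuboid annotations for the same frame
print(len(points), "points and", len(boxes), "labeled objects in frame 0")

From there, the per-point semantic segmentation labels Scale describes can be explored in the same frame-by-frame way.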

Open sourcing AV data isn’t entirely new. Last year, Aptiv and Scale released nuScenes, a large-scale dataset from an autonomous vehicle sensor suite. Argo AI, Cruise and Waymo were among a number of AV companies that have also released data to researchers. Argo AI released curated data along with high-definition maps, while Cruise shared a data visualization tool it created called Webviz that takes raw data collected from all the sensors on a robot and turns that binary code into visuals.

Scale’s efforts are a bit different. For instance, Wang said the license for this dataset doesn’t carry any restrictions.

“There’s a big need right now and a continual need for high quality labeled data,” Wang said. “That’s one of the biggest hurdles to overcome when building self-driving systems. We want to democratize access to this data, especially at a time when a lot of the self-driving companies can’t collect it.”

That doesn’t mean Scale is going to suddenly give away all of its data. It is, after all, a for-profit enterprise. But it’s already considering collecting and open sourcing fresher data later this year.

Hims & Hers launches Spanish-language telemedicine services

Hims & Hers, the startup focused on providing online telemedicine services and access to elective treatments for things like hair loss, skin care and erectile dysfunction, is expanding its services to include a Spanish-language option, the company said.

After Mexico, the U.S. has the second-largest Spanish-speaking population in the world, with an estimated 41 million U.S. residents speaking Spanish at home. That population also prefers to receive healthcare information in Spanish and to frequent facilities that offer Spanish-language resources.

Now, with a looming shortage of primary care physicians in rural areas and inner cities, and a sky-high rate of Hispanics living without any form of healthcare coverage (roughly 15.1 percent, according to data provided by the company), Hims & Hers is pitching its telemedicine offering as an option.

“Language, cost, and location should not be barriers to receiving quality care, which is why we are launching a Spanish offering on our telemedicine platform,” the company said in a statement.

The company’s $39 primary care consultations at its Hims and Hers websites will be available in Spanish. That includes everything from communications like the patient intake form and instructions for preparing for an online consultation to a connection with a Spanish-speaking healthcare provider.

“The reason we created Hims & Hers was to break down barriers and provide more people with access to quality and convenient care,” the company’s co-founder and chief executive, Andrew Dudum, said in a statement. “As a telemedicine company, we recognize the need and understand the importance of serving the Spanish-speaking population. We hope those seeking access to care in Spanish find our platform to be a welcoming, inclusive, quality experience.”

Microsoft acquires robotic process automation platform Softomotive

During his Build keynote today, Microsoft CEO Satya Nadella confirmed that the company has acquired Softomotive, a robotic process automation platform. Bloomberg first reported earlier this month that this acquisition was in the works, but the two companies didn’t comment on the report at the time.

Today, Nadella noted that Softomotive would become part of Microsoft’s Power Automate platform. “We’re bringing RPA, or robotic process automation, to legacy apps and services with our acquisition of Softomotive,” Nadella said.

Softomotive currently has about 9,000 customers around the world. Softomotive’s WinAutomation platform will be freely available to Power Automate users with what Microsoft calls an RPA attended license in Power Automate.

In Power Automate, Microsoft will use Softomotive’s tools to enable a number of new capabilities, including Softomotive’s low-code desktop automation solution, WinAutomation. Until now, Power Automate did not feature any desktop automation tools.

It’ll also build Softomotive’s connectors for applications from SAP, as well as legacy terminal screens and Java, into its desktop automation experience and enable parallel execution and multitasking for UI automation.

Softomotive’s other flagship application, ProcessRobot for server-based enterprise RPA development, will also find a new home in Power Automate. My guess, though, is that Microsoft mostly bought the company for its desktop automation skills.

“One of our most distinguishing characteristics, and an indelible part of our DNA, is an unswerving commitment to usability,” writes Softomotive CEO and co-founder Marios Stavropoulos. “We have always believed in the notion of citizen developers and, since less than two percent of the world population can write code, we believe the greatest potential for both process improvement and overall innovation comes from business end users. This is why we have invested so diligently in abstracting complexity away from end users and created one of the industry’s most intuitive user interfaces – so that non-technical business end users can not just do more, but also make deeper contributions by becoming professional problem solvers and innovators. We are extremely excited to pursue this vision as part of Microsoft.”

The two companies did not disclose the financial details of the transaction.

Xona Space Systems raises $1 million to improve satellite-based navigation services

San Mateo-based startup Xona Space Systems has raised a $1 million “pre-seed” round led by 1517, and including participation from Seraphim Capital, Trucks Venture Capital and Stellar Solutions. The company is focused on developing a Positioning, Navigation and Timing (PNT) satellite service that it believes can supersede Global Navigation Satellite Systems (GNSS), providing big benefits in terms of security, precision and accuracy.

Xona contends that GNSS, which is essentially the backbone of almost all global navigation software and services, is relatively imprecise, and open to potential disruption from malicious attackers. It’s a technology that was transformational in its time, but it’s not up to the challenge of meeting the requirements of modern applications, including autonomous vehicle transportation, drone fleets, automated ocean shipping and more.

The company is pursuing an ambitious goal: GNSS remains one of the most significant, broad and impactful space-based technologies ever to be developed. Its impact is apparent daily, from consumer applications like turn-by-turn navigation via mobile mapping apps, to industrial services like global logistics platforms. Anyone who can develop a credible next-generation alternative that modernizes and improves upon GNSS stands to gain a lot.

Xona’s approach promises tenfold improvements in accuracy vs. GNSS, and encryption that can help provide much more security. The company has a patent pending on its ‘Pulsar’ branded PNT service, which will employ low Earth orbit satellites (vs. the higher orbits of current GNSS networks) to provide its next-gen navigation tech.

How to create the best at-home videoconferencing setup, for every budget

Your life probably involves a lot more videoconferencing now than it did a few weeks ago – even if it already did involve a lot. That’s not likely going to change anytime soon, so why not make the most of it? The average MacBook webcam can technically get the job done, but it’s far from impressive. There are a number of ways to up your game, however – by spending either just a little or a whole lot. Whether you’re just looking to improve your daily virtual stand-up, gearing up for presenting at a virtual conference, or planning a new video podcast, here’s some advice about what to do to make the most of what you’ve got, or what to get if you really want to maximize your video and audio quality.

Level 0

Turn on a light and put it in the right place

One of the easiest things you can do to improve the look of your video is to simply turn on any light you have handy and position it behind the camera shining on your face. That might mean moving a lamp, or moving your computer if all your available lights are in a fixed position, but it can make a dramatic difference. Check out these examples below, screen grabbed from my Microsoft Surface Book 2 (which actually has a pretty good built-in video camera, as far as built-in video cameras go).

The image above was taken with no light on beyond the room’s ceiling lights, and the image below shows the effect of turning on a lamp and directing it at my face from above and behind the Surface Book. It’s enough of a change to make it look less like I got caught by surprise with my video on, and more like I’m actually attending a meeting I’m supposed to take part in.

Be aware of what’s behind you

It’s definitely too much to ask to set dress your surroundings for every video call you jump on, but it is worth taking a second to spot check what’s visible in the frame. Ideally, you can find a spot where the background is fairly minimal, with some organized decor visible. Close doors that are in frame, and try not to film in front of an uncovered window. And if you’re living in a pandemic-induced mess of clutter, just shovel the clutter until it’s out of frame.

Know your system sound settings

Get to know where the input volume settings are for your device and operating system. It’s not usually much of an issue, because most apps and systems set pretty sensible defaults, but if you’re also doing something unusual like sitting further away from your laptop to try to fit a second person in frame, then you might want to turn up the input audio slider to make sure anyone listening can actually hear what you have to say.

It’s probably controllable directly in whatever app you’re using, but on Macs, also try going to System Preferences > Sound > Input to check if the level is directly controllable for the device you’re using, and if tweaking that produces the result you’re looking for.
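If you’d rather script it than dig through menus, macOS also exposes the same input level through AppleScript. Here’s a minimal sketch, assuming a recent macOS version; the 0-100 scale and the osascript calls are standard, but treat the exact behavior on your machine as something to verify.

import subprocess

# Read the current system input (microphone) level on macOS via AppleScript.
current = subprocess.run(
    ["osascript", "-e", "input volume of (get volume settings)"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("Current input volume:", current)

# Bump it up if it's set too low for your seating distance (scale is 0-100).
subprocess.run(["osascript", "-e", "set volume input volume 80"], check=True)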

Level 1

Get an external webcam

The built-in webcam on most notebooks and all-in-ones isn’t going to be great, and you can almost always improve things by buying a dedicated webcam instead. Right now, it might be hard to find them in stock, since a lot of people have the same need for a boost in videoconferencing quality all at the same time. But if you can get your hands on even a budget upgrade option like the Logitech C922 Pro Stream 1080p webcam I used for the clip below, it should help with sharpness, low light performance, color and more.

Get a basic USB mic

Dedicated external mics are another way to quickly give your setup a big boost for relatively low cost. In the clip above, I used the popular Samson Meteor USB mic, which has built-in legs and dedicated volume/mute controls. This mic includes everything you need, should work instantly when you plug it in via USB, and produces great sound that’s ideal for vocals.

Get some headphones

Headphones of any kind will make your video calls and conferences better, since they minimize the chance of your mic picking up echo from your own speakers. Big over-ear models are good for sound quality, while earbuds make for less obvious headwear in your actual video image.

Level 2

Use a dedicated camera and an HDMI-to-USB interface

If you already have a standalone camera, including just about any consumer pocket camera with HDMI-out capabilities, then it’s worth looking into picking up an HDMI-to-USB video capture interface to convert it into a much higher quality webcam. In the clip below I’m using the Sony RX100 VII, which is definitely at the high end of the consumer pocket camera market, but there is a range of options that should give you nearly the same level of quality, including the older RX100 models from Sony.

When looking for an HDMI interface, make sure it’s advertised to work with videoconferencing apps like Zoom, Hangouts and Skype on Mac and Windows without any additional software: that means it likely supports UVC, so those operating systems will recognize it as a webcam out of the box, without any driver downloads or special apps. These are also in higher demand due to COVID-19, so the Elgato Cam Link 4K I used here probably isn’t in ready stock anywhere. Instead, look to alternatives like the IOGear Video Capture Adapter or the Magewell USB 3.0 Capture device, or potentially consider upgrading to a dedicated live broadcast deck like the Blackmagic ATEM Mini I’ll talk more about below.

Get a wired lav mic

A simple wired lavalier (lav) microphone is a great way to upgrade your audio game, and it doesn’t even need to cost that much. You can get a wired lav that performs decently well for as little as $20 on Amazon, and you can use a USB version to connect directly to your computer even if you don’t have a 3.5mm input port. Rode’s Lavalier GO is a great mid-range option that also works well with the Wireless GO transmitter and receiver kit I mention in the next section. The main limitation is that, depending on cord length, your range of motion while using one can be pretty restricted.

Get multiple lights and position them effectively

Lighting is a rabbit hole that ends up going very deep, but getting a couple of lights that you can move to where you need them most is a good, inexpensive way to get started. Amazon offers a wide range of lighting kits that fit the bill, or you can even do pretty well with just a couple of Philips Hue lights in gooseneck lamps positioned correctly and adjusted to the right temperature and brightness.

Level 3

Use an interchangeable lens camera and a fast lens

The next step up from a decent compact camera is one that features interchangeable lenses. This allows you to add a nice, fast prime lens with a wide maximum aperture (aka a low f-number) to get that defocused background look. This provides natural-looking separation of you, the subject, from whatever is behind you, and a cinematic feel that will wow colleagues in your monthly all-hands.

Get a wireless lav mic

A lav mic is great, but a wireless lav mic is even better. It means you don’t need to worry about hitting the end of your cable, or getting it tangled in other cables in your workspace, and it can provide more flexibility in terms of what audio interfaces you use to actually get your sound into the computer, too. A great option here is the RODE Wireless GO, which can work on its own or in tandem with a mic like the RODE Lavalier GO for great, flexible sound.

Use in-ear monitors

You still want to be using headphones at this stage, but the best kind to use really are in-ear monitors that do their best to disappear out of sight. You can get some dedicated broadcast-style monitors like those Shure makes, or you can spring for a really good pair of Bluetooth headphones with low latency and the latest version of Bluetooth. Apple’s AirPods Pro are a great option, as are the Bang & Olufsen E8 fully wireless earbuds, which I’ve used extensively without any noticeable lag.

Use 3-point lighting

At this stage, it’s really time to just go ahead and get serious about lighting. The best balance in terms of optimizing specifically for streaming, videoconferencing and anything else you’re doing from your desk, basically, is to pick up at least two of Elgato’s Key Lights or Key Light Airs.

These are LED panel lights with built-in diffusers that don’t have a steep learning curve, come with very sturdy articulating tube mounts with desk clamps, and connect to Wi-Fi for control via smartphone or desktop apps. You can adjust their temperature, meaning you can make them either more ‘blue’ or more ‘orange’ depending on your needs, as well as tweak their brightness.

Using three of these, you can create a standard 3-point lighting setup, which is ideal for interviews or people speaking directly into a camera – aka just about every virtual conference/meeting/event/webinar use you can think of.

Level 4

Get an HDMI broadcast switcher deck

HDMI-USB capture devices do a fine job turning most cameras into webcams, but if you really want to give yourself a range of options, you can upgrade to a broadcast switching interface like the Blackmagic ATEM Mini. Released last year, the ATEM Mini packs in a lot of features that previously were basically only available to video pros, and provides them in an easy-to-use form factor with a price that’s actually astounding given how much this thing can really do.

On its own paired with a good camera, the ATEM Mini can add a lot to your video capabilities, including allowing you to tee up still graphics, and switch to computer input to show videos, work live in graphics apps, demonstrate code or run a presentation. You can set up picture-in-picture views, put up lower thirds and even fade-to-black using a hardware button dedicated to that purpose.

But if you really want to make the most of the ATEM Mini, you can add a second or even a third and fourth camera to the mix. For most uses, this is probably way too much camera – there are only so many angles one can get of a single person talking, in the end. But if you get creative with camera placement and subjects, it’s a fun and interesting way to break up a stream, especially if you’re doing something longer like giving a speech or extended presentation. The newer ATEM Mini Pro is just starting to ship, and offers built-in recording and streaming as well.

Use a broadcast-quality shotgun mic

The ATEM Mini has two dedicated audio inputs that really give you a lot of flexibility on that front, too. Attaching one to the output on an iPod touch, for instance, could let you use that device as a handy soundboard for cueing up intro and title music, plus sound effects. And this also means you can route sound from a high-quality mic, provided you have the right interface.

For top-level streaming quality, with minimal sacrifices required in terms of video, I recommend going with a good, broadcast-quality shotgun mic. The Rode VideoMic NTG is a good entry-level option with the flexibility of also being mountable on-camera, but something like the Rode NTG3, mounted to a boom arm and placed out of frame with the mic end angled down toward your mouth, is going to provide the best possible results.

Add accent lighting

You’ve got your 3-point lighting – but as I said, lighting is a nearly endless rabbit hole. Accent lighting can really help push the professionalism of your video even further, and it’s also pretty easy to set up using readily available equipment. Philips Hue is probably my favorite way to add a little more vitality to any scene, and if you’re already a Hue user you can make do with just about any of their color bulbs. Recent releases from Philips like the Hue Play Smart LED Light Bars are essentially tailor-made for this use, and you can daisy chain up to three on one power adapter to create awesome accent wall lighting effects.

All of this is, of course, not at all necessary for basic video conferencing, virtual hangouts and meetings. But if you think that remote video is going to be a bigger part of our lives going forward, even as we return to some kind of normalcy in the wake of COVID-19, then it’s worth considering what elements of your system to upgrade based on your budget and needs, and hopefully this article provides some guidance.

Dear Sophie: Will a PPP loan affect my visa renewal or green card?

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

“Dear Sophie” columns are accessible for Extra Crunch subscribers; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

I’m a tech founder on an E-2 investor visa. Will receiving PPP funding count against me when I renew my E-2 or file my I-485 for my green card, given the “Public Charge” restrictions?

— E-2 Employer in Emeryville

 

Dear E-2 Employer,

Thank you for starting a business in the United States and for your efforts to keep your team employed. Since the federal government increased funding for the Paycheck Protection Program (PPP) last week, your question is timely.

R&D Roundup: Sweat power, Earth imaging, testing ‘ghostdrivers’

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.

This week: one step closer to self-powered on-skin electronics; people dressed as car seats; how to make a search engine for 3D data; and a trio of Earth imaging projects that take on three different types of disasters.

Sweat as biofuel

Monitoring vital signs is a crucial part of healthcare and is a big business across fitness, remote medicine and other industries. Unfortunately, powering devices that are low-profile and long-lasting without a bulky battery or frequent charging is a fundamental challenge. Wearables powered by body movement or other bio-derived sources are an area of much research, and this sweat-powered wireless patch is a major advance.

A figure from the paper showing the device and interactions happening inside it.

The device, described in Science Robotics, uses perspiration as both fuel and sampling material; sweat contains chemical signals that can indicate stress, medication uptake, and so on, as well as lactic acid, which can be used in power-generating reactions.

The patch performs this work on a flexible substrate and uses the generated power to transmit its data wirelessly. It’s reliable enough that it was used to control a prosthesis, albeit in limited fashion. The market for devices like this will be enormous and this platform demonstrates a new and interesting direction for researchers to take.

Cloudflare partners with JD to expand its network in China

Cloudflare today announced a new partnership with JD Cloud & AI that will see the company expand its network in China to an additional 150 data centers. Currently, Cloudflare is available in 17 data centers in mainland China, thanks to a long-standing partnership with Baidu, but this new deal is obviously significantly larger.

Cloudflare’s original partnership with Baidu launched in 2015. The idea then, as now, was to give Cloudflare a foothold in one of the fastest-growing internet markets by helping Chinese companies better reach customers inside and outside of the country, but also — and maybe more importantly — by allowing foreign companies to better reach the vast Chinese market.

“I think there are very few Western technology companies that have figured out how to operate in China,” Matthew Prince, the CEO and co-founder of Cloudflare, told me. “And I think we’re really proud of the fact that we’ve done that. What I’ve learned about China — certainly in the last six years that we’ve been directly working with partners there, […] has been that while it’s an enormous market and an enormous opportunity […], it’s still a very tight-knit technology community there — and one with a very long memory.”


SAN FRANCISCO, CA – SEPTEMBER 22: (L-R) Matthew Prince and Michelle Zatlyn of CloudFlare speak onstage during day two of TechCrunch Disrupt SF 2015 at Pier 70 on September 22, 2015 in San Francisco, California. (Photo by Steve Jennings/Getty Images for TechCrunch)

He attributes JD’s interest in working with Cloudflare partly to the fact that the company was a good partner to Baidu for so many years. That partnership with Baidu will continue (Prince called them a “terrific partner”). This new deal with JD, however, will also give Cloudflare the ability to reach another set of Chinese enterprises that are currently betting on that company’s cloud.

“As we got to know them, JD really stood out,” Prince said. “I think they’re first of all really one of the up and coming cloud providers in China. And I think that then means that marrying Cloudflare’s services with JD’s services makes their overall cloud platform much more robust for Chinese customers.” He also noted that JD has relationships with many large Chinese businesses that are increasingly looking to go global.

To put this deal into perspective, today, Cloudflare operates in about 200 cities. Adding another 150 to this — even if it’s through a partner — marks a major expansion for the company.

As for the deal itself, Prince said that its structure is similar to the deal it made with Baidu. “We contribute the technology and the know-how to build a network out across China. They introduce capital in order to build that network out and also have some financial guarantees to us and then we share in the upside of what happens as we’re both able to sell the China network or as JD is able to sell Cloudflare’s services outside of China.”

When the company first went to China through Baidu, it was criticized for going into a market where there are some obvious issues around free speech. Prince, who has been pretty outspoken about free speech issues, seems to be taking a rather pragmatic approach here.

“[Free speech] is certainly something we thought about a lot when we first made the decision to go into China in 2014,” he said. “And I think we’ve learned a lot about it. Around the world, whether it’s China or Turkey or Egypt or the United Kingdom or Brazil or increasingly even the United States, there are rules about what content can be accessed there. Regardless of what my personal feelings might be — and I grew up as a son of a journalist and in the United States and have seen the power of having a very free press and really, really, really strong freedom of expression protection. But I also think that every country doesn’t have the same tradition and the same laws as the United States. And I think that what we have tried to do everywhere that we operate, is comply with whatever the regional laws are. And it’s hard to do anything else.”

Cloudflare expects that it will take three years before all of the data centers go online.

“I’m thrilled to establish this strategic collaboration with Cloudflare,” said Dr. Bowen Zhou, President of JD Cloud & AI. “Cloudflare’s mission of ‘helping to build a better Internet,’ closely aligns with JD Cloud & AI’s commitment to provide the best service possible to global partners. Leveraging JD.com’s rich experience across vast business scenarios, as well as its logistics and technological capabilities, we believe that this collaboration will provide valuable services that will transform how business is done for users inside and outside of China.”

Ben Horowitz, a16z general partner, is leaving Lyft’s board

Ben Horowitz, the co-founder and general partner of venture capital firm Andreessen Horowitz, won’t seek re-election to Lyft’s board, according to a document filed with the U.S. Securities and Exchange Commission on Monday.

Horowitz has served as a board director at the ride-hailing company since June 2016. His venture firm, which he co-founded with Marc Andreessen, was an early investor in Lyft. He will stay on the board until Lyft’s annual shareholder meeting scheduled for June 19. Horowitz’s plan to leave the board was first spotted by Protocol reporter Biz Carson.

Lyft is not planning to fill Horowitz’s board seat.

Horowitz could not be reached for comment. TechCrunch will update this article if he responds.

“We thank Ben for his longtime partnership with Lyft, including his four years of service on our board,” a Lyft spokesperson said in an email to TechCrunch. “During his tenure, Ben has helped Lyft achieve some of its most significant milestones, including our initial public offering in 2019. We wish Ben all the best as he continues his work as a pioneering investor and leader in the venture capital community.”

Horowitz serves on the boards of 13 other portfolio companies, including Okta, Foursquare, Genius, Medium and Databricks.

Horowitz was selected to serve on Lyft’s board because of his extensive operating and management experience, his knowledge of technology companies and his extensive experience as a venture capital investor, the company said in a filing announcing the agenda for its 2020 annual shareholders’ meeting.

The annual meeting will be held virtually at 1:30 p.m. PT on June 19, 2020. Shareholders and others can attend by visiting www.virtualshareholdermeeting.com/LYFT2020. Shareholders will be able to submit questions and vote online.

During the meeting, Lyft plans to elect two directors to serve until 2023 and to ratify the appointment of PricewaterhouseCoopers LLP as its independent registered public accounting firm. Lyft co-founder and CEO Logan Green and Ann Miura-Ko, co-founder and partner at Floodgate Fund, are up for re-election as board members.

The company’s agenda also includes two measures to approve, on an advisory basis, the compensation of its named executive officers and the frequency of future stockholder advisory votes on the compensation of its named executive officers.

Apple Watch designer reveals the device’s origins on its fifth birthday

Update: We mistakenly noted in an earlier version that Chaudhri had been a part of Microsoft’s Hololens team. The story has been updated to remove the reference. 

In his two decades at Apple, Imran Chaudhri worked on many of the company’s most iconic product lines, including the iPhone, iPad and Mac. The designer left the company in 2017, but today he’s offering up some fun insight into the Apple Watch’s beginnings, via a Twitter thread marking the wearable’s fifth birthday.

The thread is a treasure trove of fun facts about the device’s early days. One interesting tidbit that might not be a huge surprise to those following Apple at the time is that an early prototype of the Watch consisted of an iPod nano strapped to a watch band.

Five years before it finally entered the smartwatch market in earnest, Apple introduced a square touchscreen nano. Three years before the arrival of the first Pebble, people were already considering the smartwatch possibilities. Accessory makers quickly took advantage, introducing wrist bands that would let it function as a touchscreen music watch. That sixth-gen product ultimately served as a foundation for the popular device to come. 

Per Chaudhri:

i had just wrapped up ios5 and took it down to show the ID team what notification centre and siri was – and what it could be in the future. i never got to share it with steve. we lost him right after ios5.

Other interesting bits here include:

  • The Solar watch face was designed “as a way for muslims observing ramadan to quickly see the position of the sun and for all to understand the sun’s relationship to time.”
  • The butterfly animation was created using real (albeit deceased) butterflies (one of which is now framed in his home).
  • The touch feature originally went by the name E.T. (electronic touch).
  • The Digital Touch drawing feature was inspired by his time as a graffiti artist.