Cruise calls for a new way to determine commercial readiness of self-driving cars

Cruise co-founder and CTO Kyle Vogt said Friday that disengagement reports released annually by California regulators are not a proxy for the commercial readiness or safety of self-driving cars.

Vogt, in a lengthy post on Medium, called for a new metric to determine whether an autonomous vehicle is ready for commercial deployment. The post suggests that the autonomous vehicle company, which had a valuation of $19 billion as of May, is already developing more comprehensive metrics.

The California Department of Motor Vehicles, which regulates the permits for autonomous vehicle testing on public roads in the state, requires companies to submit an annual report detailing “disengagements,” a term for the instances in which human safety drivers have had to take control of a car. The DMV defines a disengagement as any time a test vehicle operating on public roads has switched from autonomous to manual mode for an immediate safety-related reason or due to a failure of the system.

“It’s woefully inadequate for most uses beyond those of the DMV,” Vogt wrote. “The idea that disengagements give a meaningful signal about whether an AV is ready for commercial deployment is a myth.”

These disengagement reports will be released in a few weeks. Cruise did share some of its disengagement data, specifically the number of miles driven per disengagement event, between 2017 and 2019.

[Chart: Cruise disengagement data, 2019]

The so-called race to commercialize autonomous vehicles has involved a fair amount of theater, including demos, and it has produced precious little hard data. That scarcity has made it nearly impossible to determine whether a company’s self-driving cars are safe enough or ready for the big and very real stage of shuttling people from Point A to Point B on city streets. Disengagement reports — as flawed as they might be — have been one of the only pieces of data that the public, and the media, have access to.

How safe is safe enough?

While that data might provide some insights, it doesn’t help answer the fundamental question for every AV developer planning to deploy robotaxis for the public: “How safe is safe enough?”

Vogt’s comments signal Cruise’s efforts to find a practical means of answering that question.

From Vogt’s post:

“But if we can’t use the disengagement rate to gauge commercial readiness, what can we use? Ultimately, I believe that in order for an AV operator to deploy AVs at scale in a ridesharing fleet, the general public and regulators deserve hard, empirical evidence that an AV has performance that is super-human (better than the average human driver) so that the deployment of the AV technology has a positive overall impact on automotive safety and public health. This requires a) data on the true performance of human drivers and AVs in a given environment and b) an objective, apples-to-apples comparison with statistically significant results. We will deliver exactly that once our AVs are validated and ready for deployment. Expect to hear more from us about this very important topic soon.”
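
To give a sense of the kind of apples-to-apples comparison Vogt describes, here is a minimal sketch of a two-sample rate test; every number in it is invented for illustration, and real validation would need matched environments and far richer data than a toy z-test:

```python
from math import sqrt

# Hypothetical counts: incidents per million miles for a human
# baseline and an AV fleet operating in the same environment.
human_incidents, human_miles = 450, 1_000_000
av_incidents, av_miles = 300, 1_000_000

human_rate = human_incidents / human_miles
av_rate = av_incidents / av_miles

# Normal approximation to a two-sample Poisson rate comparison:
# Var(count / miles) is roughly count / miles**2 for Poisson counts.
se = sqrt(human_incidents / human_miles**2 + av_incidents / av_miles**2)
z = (human_rate - av_rate) / se

print(f"human: {human_rate:.6f}/mi, AV: {av_rate:.6f}/mi, z = {z:.2f}")
# z > 1.96 would suggest the AV's lower incident rate is statistically
# significant at the usual 95% level, under these toy assumptions.
```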

Competitors agree

Cruise is hardly the only company to question the disengagement reports, although this might be the most strongly worded and public call to date. Waymo told TechCrunch that it takes a similar view.

The reports have long been a source of angst among AV developers. They do provide information that can be useful to the public, such as the number of vehicles testing on public roads. But they hardly paint a complete picture of any company’s technology.

The reports are wildly different; each company provides varying amounts of information, all in different formats. There is also disagreement over what is and what is not a disengagement. For instance, the issue got more attention in 2018 when Jalopnik questioned an incident involving a Cruise vehicle. In that case, a driver took manual control of the wheel as the vehicle passed through an intersection, but the event wasn’t reported as a disengagement. Cruise told Jalopnik at the time that the incident didn’t meet the reporting standard under California regulations.

The other issue is that disengagements don’t provide an “apples-to-apples” comparison of technology, as these test vehicles operate in a variety of environments and conditions.

Disengagements also often rise and fall along with the scale of testing. Waymo, for instance, told TechCrunch that its disengagements will likely increase as it scales up its testing in California.

And finally, more companies are using simulation or virtual testing instead of sending fleets of cars on public roads to test every new software build. Aurora, another AV developer, emphasizes its use of its virtual testing suite. The disengagement reports don’t include any of that data.

Vogt’s post also called out the industry for conducting carefully “curated demo routes that avoid urban areas with cyclists and pedestrians, constrain geofences and pickup/dropoff locations, and limit the kinds of maneuvers the AV will attempt during the ride.”

The remark could be interpreted as a shot at Waymo, which has recently conducted driverless demos with reporters on public streets in Chandler, Ariz. TechCrunch was one of the first to take a driverless ride last year. However, demos are common practice among many other self-driving vehicle startups, and are particularly popular around events like CES. Cruise has conducted at least one demo of its own, with the press in 2017.

Vogt suggested that raw, unedited drive footage that “covers long stretches of driving in real world situations” is hard to fake and a more qualitative indicator of technology maturity.

Baraja’s unique and ingenious take on lidar shines in a crowded industry

It seems like every company making lidar has a new and clever approach, but Baraja takes the cake. Its method is not only elegant and powerful, but fundamentally avoids many issues that nag other lidar technologies. But it’ll need more than smart tech to make headway in this complex and evolving industry.

To understand how lidar works in general, consult my handy introduction to the topic. Essentially, a laser emitted by the device skims across or otherwise very quickly illuminates the scene, and the time it takes for that laser’s photons to return allows the device to quite precisely determine the distance of every spot it points at.
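
That time-of-flight arithmetic is simple enough to sketch; here’s a minimal Python example (the 400-nanosecond return time is just an illustrative number):

```python
# Time-of-flight lidar: distance from how long a pulse takes to
# come back. The division by 2 accounts for the round trip.
C = 299_792_458  # speed of light, m/s

def distance_meters(time_of_flight_s: float) -> float:
    return C * time_of_flight_s / 2

# A return arriving 400 ns after the pulse left corresponds to a
# surface roughly 60 meters away.
print(distance_meters(400e-9))  # ~59.96
```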

But to picture how Baraja’s lidar works, you need to picture the cover of Pink Floyd’s “Dark Side of the Moon.”

[GIF: a prism splitting light into a spectrum. GIFs kind of choke on rainbows, but you get the idea.]

Imagine a flashlight shooting through a prism like that, illuminating the scene in front of it — now imagine you could focus that flashlight by selecting which color came out of the prism, sending more light to the top part of the scene (red and orange) or middle (yellow and green). That’s what Baraja’s lidar does, except naturally it’s a bit more complicated than that.

The company has been developing its tech for years with the backing of Sequoia and Australian VC outfit Blackbird, which led a $32 million round late in 2018 — Baraja only revealed its tech the next year and was exhibiting it at CES, where I met with co-founder and CEO Federico Collarte.

“We’ve stayed in stealth for a long, long time,” he told me. “The people who needed to know already knew about us.”

The idea for the tech came out of the telecommunications industry, where Collarte and co-founder Cibby Pulikkaseril thought of a novel use for a fiber optic laser that could reconfigure itself extremely quickly.

“We thought if we could set the light free, send it through prism-like optics, then we could steer a laser beam without moving parts. The idea seemed too simple — we thought, ‘if it worked, then everybody would be doing it this way,’ ” he told me, but they quit their jobs and worked on it for a few months with a friends and family round, anyway. “It turns out it does work, and the invention is very novel and hence we’ve been successful in patenting it.”

Rather than send a coherent laser at a single wavelength (1550 nanometers, well into the infrared, is the lidar standard), Baraja uses a set of fixed lenses to refract that beam into a spectrum spread vertically over its field of view. Yet it isn’t one single beam being split but a series of coded pulses, each at a slightly different wavelength that travels ever so slightly differently through the lenses. It returns the same way, the lenses bending it the opposite direction to return to its origin for detection.

It’s a bit difficult to grasp this concept, but once one does it’s hard to see it as anything but astonishingly clever. Not just because of the fascinating optics (something I’m partial to, if it isn’t obvious), but because it obviates a number of serious problems other lidars are facing or about to face.

First, there are next to no moving parts whatsoever in the entire Baraja system. Spinning lidars like the popular early devices from Velodyne are largely being replaced by ones using metamaterials, MEMS and other methods that don’t have bearings or hinges that can wear out.

[Image: Baraja’s “head” unit, connected by fiber optic to the brain.]

In Baraja’s system, there are two units, a “dumb” head and an “engine.” The head has no moving parts and no electronics; it’s all glass, just a set of lenses. The engine, which can be located nearby or a foot or two away, produces the laser and sends it to the head via a fiber-optic cable (and some kind of proprietary mechanism that rotates slowly enough that it could theoretically work for years continuously). This means it’s not only very robust physically, but its volume can be spread out wherever is convenient in the car’s body. The head itself also can be resized more or less arbitrarily without significantly altering the optical design, Collarte said.

Second, the method of refracting the beam gives the system considerable leeway in how it covers the scene. Different wavelengths are sent out at different vertical angles; a shorter wavelength goes out toward the top of the scene and a slightly longer one goes a little lower. But the band of 1550 ±20 nanometers allows for millions of fractional wavelengths that the system can choose between, giving it the ability to set its own vertical resolution.

It could for instance (these numbers are imaginary) send out a beam every quarter of a nanometer in wavelength, corresponding to a beam going out every quarter of a degree vertically, and by going from the bottom to the top of its frequency range cover the top to the bottom of the scene with equally spaced beams at reasonable intervals.

But why waste a bunch of beams on the sky, say, when you know most of the action is taking place in the middle part of the scene, where the street and roads are? In that case you can send out a few high-frequency beams to check up there, then skip down to the middle frequencies, where you can then send out beams with intervals of a thousandth of a nanometer, emerging correspondingly close together to create a denser picture of that central region.
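
To make that adaptive-resolution idea concrete, here’s a sketch in the same spirit. The linear wavelength-to-angle map, the band edges and the step sizes are all invented, just as the numbers above are; the real dispersion behavior is Baraja’s own:

```python
import numpy as np

# Invented mapping: a 1530-1570 nm band spread linearly over a
# -20 to +20 degree vertical field of view.
BAND_NM = (1530.0, 1570.0)
FOV_DEG = (-20.0, 20.0)

def elevation_deg(wavelength_nm: float) -> float:
    """Vertical beam angle for a given wavelength (illustrative)."""
    frac = (wavelength_nm - BAND_NM[0]) / (BAND_NM[1] - BAND_NM[0])
    return FOV_DEG[0] + frac * (FOV_DEG[1] - FOV_DEG[0])

# Coarse coverage of the sky, dense coverage of the road: a few
# widely spaced wavelengths up top, 0.001 nm steps in the middle.
sky_scan = np.arange(1560.0, 1570.0, 2.0)
road_scan = np.arange(1548.0, 1552.0, 0.001)

print([round(elevation_deg(wl), 2) for wl in sky_scan])
print(f"{road_scan.size} beams packed into the central 4 nm band")
```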

If this is making your brain hurt a little, don’t worry. Just think of “Dark Side of the Moon” and imagine if you could skip red, orange and purple, and send out more beams in green and blue — and because you’re only using those colors, you can send out more shades of green-blue and deep blue than before.

Third, the method of creating the spectrum beam guards against interference from other lidar systems. There is an emerging concern that lidar systems of this type could inadvertently send or reflect beams into one another, producing noise and hindering normal operation. Most companies are attempting to mitigate this by some means or another, but Baraja’s method avoids the possibility altogether.

“The interference problem — they’re living with it. We solved it,” said Collarte.

The spectrum system means that for a beam to interfere with the sensor it would have to be both a perfect frequency match and come in at the precise angle at which that frequency emerges from and returns to the lens. That’s already vanishingly unlikely, but to make it astronomically so, each beam from the Baraja device is not a single pulse but a coded set of pulses that can be individually identified. The company’s core technology and secret sauce is the ability to modulate and pulse the laser millions of times per second, and it puts this to good use here.
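
Here’s a toy illustration of that code-matching idea; the code length, threshold and noise level are all made up, and the actual modulation scheme is Baraja’s secret sauce:

```python
import numpy as np

rng = np.random.default_rng(42)
CODE_LEN = 64

# Each outgoing beam carries a pseudorandom +/-1 pulse code.
our_code = rng.choice([-1.0, 1.0], size=CODE_LEN)
stray_code = rng.choice([-1.0, 1.0], size=CODE_LEN)  # another lidar

def is_our_return(received: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept a return only if it correlates with the code we sent."""
    score = float(np.dot(received, our_code)) / CODE_LEN
    return score > threshold

noisy_echo = our_code + rng.normal(0.0, 0.2, CODE_LEN)
print(is_our_return(noisy_echo))   # True: our own pulse train
print(is_our_return(stray_code))   # False: interference rejected
```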

Collarte acknowledged that competition is fierce in the lidar space, but not necessarily competition for customers. “They have not solved the autonomy problem,” he points out, “so the volumes are too small. Many are running out of money. So if you don’t differentiate, you die.” And some have.

Instead companies are competing for partners and investors, and must show that their solution is not merely a good idea technically, but that it is a sound investment and reasonable to deploy at volume. Collarte praised his investors, Sequoia and Blackbird, but also said that the company will be announcing significant partnerships soon, both in automotive and beyond.

Discount student tickets available for TC Sessions: Mobility 2020

“Revolutionary” may be an over-used adjective, but how else to describe the rapid evolution in mobility technology? Join us in San Jose, Calif., on May 14 for TC Sessions: Mobility 2020. Our second annual day-long conference cuts through the hype and explores the current and future state of the technology and its social, regulatory and economic impact.

If you’re a student with a passion for mobility and transportation tech, listen up. We can’t talk about the future if we’re not willing to invest in the next generation of mobility visionaries. That’s why we offer student tickets at a deep discount — $50 each. Invest in your future, save $200 and spend the day with more than 1,000 of mobility tech’s brightest minds, movers and makers.

As always, you can count on a program packed with top-notch speakers, panel discussions, fireside chats and workshops. We’re in the process of building our agenda, but we’re ready to share our first two guests with you: Boris Sofman and Nancy Sun.

Sofman is the engineering director at Waymo and former co-founder and CEO of Anki. Sun is the co-founder and chief engineer of Ike Robotics. Read more about Sofman and Sun’s accomplishments here. We can’t wait to hear what they have to say about automation and robotics.

Keep checking back, because we’ll announce more exciting speakers in the coming weeks.

You’ll also have plenty of time for world-class networking. What better place for a student to impress — and possibly score a great internship or job? You might even meet a future co-founder or an investor. That knocking sound you hear is opportunity. Open the door.

Hold up… you’re not a student but still love a bargain? We’ve got you covered, too. You can save $100 if you purchase an early-bird ticket before April 9.

Be part of the revolution. Join the mobility and transportation tech community — the top technologists, investors, researchers and visionaries — on May 14 at TC Sessions: Mobility 2020 in San Jose. Get your student ticket today.

Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility 2020? Contact our sponsorship sales team by filling out this form.

Trucks VC general partner Reilly Brennan is coming to TC Sessions: Mobility

The future of the transportation industry is bursting at the seams with startups aiming to bring everything from flying cars and autonomous vehicles to delivery bots and more efficient freight to our roads.

One investor who is right at the center of this is Reilly Brennan, founding general partner of Trucks VC, a seed-stage venture capital fund for entrepreneurs changing the future of transportation.

TechCrunch is excited to announce that Brennan will join us on stage for TC Sessions: Mobility.

In case you missed last year’s event, TC Sessions: Mobility is a one-day conference that brings together the best and brightest engineers, investors, founders and technologists to talk about transportation and what is coming on the horizon. The event will be held May 14, 2020 at the California Theater in San Jose, Calif.

Brennan is known as much for his popular FoT newsletter as for his investments, which include May Mobility, Nauto, nuTonomy, Joby Aviation, Skip and Roadster.

Stay tuned to see who we’ll announce next.

And … $250 Early-Bird tickets are now on sale — save $100 on tickets before prices go up on April 9; book today.

Students, you can grab your tickets for just $50 here.

Mobileye takes aim at Waymo

Mobileye has built a multi-billion-dollar business supplying automakers with computer vision technology that powers advanced driver assistance systems. It’s a business that last year generated nearly $1 billion in sales for the company. Today, 54 million vehicles on the road are using Mobileye’s computer vision technology.

In 2018, the company made what many considered a bold and risky move when it expanded its focus beyond being a mere supplier to becoming a robotaxi operator. The upshot: Mobileye wants to compete directly with the likes of Waymo and other big players aiming to deploy commercial robotaxi services.

TechCrunch sat down with Amnon Shashua, Mobileye’s president and CEO and Intel senior vice president, to find out why and how — yep, acquisitions are in the future — the company will hit its mark.

Waymo’s Anca Dragan and Ike Robotics CTO Jur van den Berg are coming to TC Sessions: Robotics+AI

The road to “solving” self-driving cars is riddled with challenges, from perception and decision making to figuring out the interaction between humans and robots.

Today we’re announcing that joining us at TC Sessions: Robotics+AI on March 3 at UC Berkeley are two experts who play important roles in the development and deployment of autonomous vehicle technology: Anca Dragan and Jur van den Berg.

Dragan is an assistant professor in UC Berkeley’s electrical engineering and computer sciences department, as well as a senior research scientist and consultant for Waymo, the former Google self-driving project that is now a business under Alphabet. She runs the InterACT Lab at UC Berkeley, which focuses on algorithms for human-robot interaction. Dragan also helped found the Berkeley AI Research Lab, serves on its steering committee and is co-PI of the Center for Human-Compatible AI.

Last year, Dragan was awarded the Presidential Early Career Award for Scientists and Engineers.

Van den Berg is the co-founder and CTO of Ike Robotics, a self-driving truck startup that last year raised $52 million in a Series A funding round led by Bain Capital Ventures. Van den Berg has been part of some of the most important, secretive and even controversial companies in the autonomous vehicle technology industry. He was a senior researcher and developer in Apple’s special projects group before jumping to self-driving trucks startup Otto. He became a senior autonomy engineer at Uber after the ride-hailing company acquired Otto.

All of this led to Ike, which was founded in 2018 with Nancy Sun and Alden Woodrow, who were also veterans of Apple, Google and Uber Advanced Technologies Group’s self-driving truck program.

TC Sessions: Robotics+AI returns to Berkeley on March 3. Make sure to grab your early-bird tickets today for $275 before prices go up by $100. Students, grab your tickets for just $50 here.

Startups, book a demo table right here and get in front of 1,000+ of Robotics/AI’s best and brightest — each table comes with four attendee tickets.

TC Sessions: Mobility 2020: Boris Sofman of Waymo and Nancy Sun of Ike

You might have heard: a mobility revolution is in the making. TechCrunch is here for it — and we’re not just along for the ride. We’re here to uncover new ideas and startups, root out vaporware and dig into the tech and people spurring this change.

In short, we’re helping drive the conversation around mobility. And it’s only fitting that we have an event dedicated to the topic. TC Sessions: Mobility — a one-day event on May 14, 2020 in San Jose, Calif., centered around the future of mobility and transportation — is back for a second year, and we’re already putting together a fantastic group of the brightest engineers, investors, founders and technologists.

TechCrunch is excited to announce our first two guests for TC Sessions: Mobility.

Drum roll, please …

We’re excited that Boris Sofman, engineering director at Waymo and former co-founder and CEO of Anki, will join us on stage. But wait, there’s more. TechCrunch is also announcing that Nancy Sun, the co-founder and chief engineer of Ike Robotics, will be a guest at TC Sessions: Mobility.

Here’s a bit about these bright and accomplished people.

Sofman is leading the engineering for trucking at Waymo, the former Google self-driving project that is now a business under Alphabet. Sofman came to Waymo from consumer robotics company Anki, which shut down in April 2019. Nearly the entire technical team at Anki headed over to Waymo.

Anki built several popular products, starting with Anki Drive in 2013 and later the Cozmo robot. The Bay Area startup had shipped more than 3.5 million devices, with annual revenues approaching $100 million.

Previously, Sofman worked on off-road autonomous vehicles and ways to leverage machine learning to improve navigational capabilities in real time.

Sun has also had an incredibly interesting ride in the world of automation and robotics. She is chief engineer and co-founder of Ike, the self-driving truck startup. Prior to Ike, Sun was the senior engineering manager of self-driving trucks at Uber ATG, which she joined through Uber’s acquisition of Otto.

Prior to Otto, Sun was engineering manager of Apple’s secretive special projects group.

Stay tuned to see who we’ll announce next.

$250 Early-Bird tickets are now on sale — save $100 on tickets before prices go up on April 9; book today.

Students, you can grab your tickets for just $50 here.

AutoX and Fiat Chrysler are teaming up on a robotaxi for China

Autonomous vehicle startup AutoX, which is backed by Alibaba, said Tuesday that it is partnering with Fiat Chrysler to roll out a fleet of robotaxis for China and other countries in Asia.

A fleet of Chrysler Pacifica vans is going to be in service to the public in China in early 2020, according to AutoX. Passengers will be able to call a robotaxi using a WeChat mini-program and other popular apps in China.

The partnership is an important step for AutoX, which is developing a full self-driving stack. While AutoX has been operating robotaxi pilots in California and China, its real aim is to license its technology to companies that want to operate robotaxi fleets of their own.

While the partnership might not be as critical to FCA, it would theoretically give the automaker access to a robotaxi platform that can operate in China, if it were to choose to make that move.

AutoX, which is based in Hong Kong and San Jose, Calif., already tests in California and China. The company began offering public rides in downtown Shenzhen in early 2019, and in September partnered with Shanghai to launch 100 of its robotaxis to pilot a fleet there.

AutoX CEO Jianxiong Xiao said the next challenge is to remove the safety driver and go truly driverless.

“Getting hardware ready is a crucial step towards this goal,” he said. The partnership with FCA will help it get there, according to the company.

“Achieving completely driverless operation needs a very reliable vehicle platform with full redundancy of the vehicle’s drive-by-wire system,” said Xiao. “This level of redundancy is still new and rare in the auto industry. The Chrysler Pacifica platform has proven trustworthy for driverless deployment.”

AutoX is exhibiting the Chrysler Pacifica minivan at CES 2020. The vehicle is outfitted with an array of sensors. The hybrid vehicle now has 360 degrees of solid-state lidar coverage, along with numerous high-definition cameras, blind-spot lidar sensors and radar sensors. AutoX has tapped RoboSense and drone manufacturer DJI for its lidar sensors.

The vehicle is also equipped with a vehicle control unit, called the XCU, that AutoX developed. The XCU, pictured below, powers and integrates the self-driving stack, including sensors like lidar and radar, into the vehicle. AutoX says its XCU has faster processing speed and more computational capability, making it ideal for the complex scenarios found in China’s cities.

[Image: the AutoX XCU]

“There are a lot more cars, pedestrians, bikers, scooters, and moving objects on the street, many of which are not following the traffic rules,” COO Zhuo Li said in a statement. “Due to the fast development speeds in China, construction and reconstruction can happen overnight. The streets can look completely different in the morning, afternoon, and at night. This requires our system to process faster and extremely accurate to recognize and track each object to guarantee safety.” 

Meanwhile, FCA’s autonomy strategy has largely hinged on partnering with AV developers. In May 2016, Waymo and FCA announced a collaboration to produce about 100 Chrysler Pacifica Hybrid minivans integrated with Waymo’s self-driving system. And last year, FCA struck a deal with Aurora to develop self-driving commercial vehicles.

Last year, AutoX announced a partnership with Swedish holding company and electric vehicle manufacturer NEVS to deploy a robotaxi pilot service in Europe by the end of 2020. It received permission from California regulators to transport passengers in its robotaxis (human safety driver required). AutoX is calling its California robotaxi service xTaxi.

Hyundai and Seoul set to test self-driving cars on city roads starting next month

Hyundai has signed a memorandum of understanding (MOU) with the city of Seoul to begin testing six autonomous vehicles on roads in the Gangnam district starting next month, BusinessKorea reports. The arrangement specifies that six vehicles will begin testing on 23 roads in December; looking ahead to 2021, as many as 15 of the cars, all hydrogen fuel cell electric vehicles, will be testing on those roads.

Seoul will provide smart infrastructure to communicate with the vehicles, including connected traffic signals, and will also relay traffic and other info as frequently as every 0.1 seconds to the Hyundai vehicles. That kind of real-time information flow should help considerably with providing the visibility necessary for safe operation of the autonomous test cars. On the Hyundai side, the company will share information, too, providing data from the self-driving tests that will be freely available to schools and other organizations looking to test their own self-driving technology within the city.

Together, Seoul and Hyundai hope to use the partnership to build out a world-leading downtown self-driving technology deployment, and to have that evolve into a commercial service, complete with dedicated autonomous vehicle manufacturing, by 2024.

The Math of Sisyphus

“There is but one truly serious question in philosophy, and that is suicide,” wrote Albert Camus in The Myth of Sisyphus. This is equally true for a human navigating an absurd existence, and an artificial intelligence navigating a morally insoluble situation.

As AI-powered vehicles take the road, questions about their behavior are inevitable — and the escalation to matters of life or death equally so. This curiosity often takes the form of asking whom the car should steer toward should it have no choice but to hit one of a variety of innocent bystanders. Men? Women? Old people? Young people? Criminals? People with bad credit?

There are a number of reasons this question is a silly one, yet at the same time a deeply important one. But as far as I’m concerned, there is only one real solution that makes sense: when presented with the possibility of taking a life, the car must always first attempt to take its own.

The trolley non-problem

First, let’s get a few things straight about the question we’re attempting to answer.

There is unequivocally an air of contrivance to the situations under discussion. That’s because they’re not plausible real-world situations but mutations of a venerable thought experiment often called the “Trolley Problem.” The most familiar version dates to the ’60s, but versions of it can be found going back to discussions of utilitarianism, and before that in classical philosophy.

The problem goes: A train car is out of control, and it’s going to hit a family of five who are trapped on the tracks. Fortunately, you happen to be standing next to a lever that will divert the car to another track… where there’s only one person. Do you pull the switch? Okay, but what if there are ten people on the first track? What if the person on the second one is your sister? What if they’re terminally ill? If you choose not to act, is that in itself an act, leaving you responsible for those deaths? The possibilities multiply when it’s a car on a street: for example, what if one of the people is crossing against the light — does that make it all their fault? But what if they’re blind?

And so on. It’s a revealing and flexible exercise that makes people (frequently undergrads taking Intro to Philosophy) examine the many questions involved in how we value the lives of others, how we view our own responsibility, and so on.

But it isn’t a good way to create an actionable rule for real-life use.

After all, you don’t see convoluted moral logic on signs at railroad switches instructing operators on an elaborate hierarchy of the values of various lives. This is because the actions and outcomes are a red herring; the point of the exercise is to illustrate the fluidity of our ethical system. There’s no trick to the setup, no secret “correct” answer to calculate. The goal is not even to find an answer, but to generate discussion and insight. So while it’s an interesting question, it’s fundamentally a question for humans, and consequently not really one our cars can or should be expected to answer, even with strict rules from their human engineers.

And it must also be acknowledged that these situations are going to be vanishingly rare. Most of the canonical versions of this thought experiment – five people versus one, or a kid and an old person – are so astronomically unlikely to occur that even if we did find a best method that a car should always choose, it’ll only be relevant once every trillion miles driven or so. And who’s to say whether that solution will be the right one in another country, among people with different values, or in ten or twenty years?

No matter how many senses and compute units a car has, it can no more calculate its way out of an ethical conundrum than Sisyphus could have calculated a better path by which to push his boulder up the mountain. The idea is, so to speak, absurd.

We can’t have our cars attempting to solve a moral question that we ourselves can’t. Yet somehow that doesn’t stop us from thinking about it, from wanting an answer. We want to somehow be prepared for the situation even though it may never arise. What’s to be done?

Implicit and explicit trust

The entire self-driving car ecosystem has to be built on trust. That trust will grow over time, but there are two aspects to be considered.

The first is implicit trust. This is the kind of trust we have in the cars we drive today: that despite being one-ton metal missiles propelled by a series of explosions and filled with high-octane fuel, they won’t blow up, fail to stop when we hit the brakes, spin out when we turn the wheel, and so on. That we trust the vehicle on all these counts is the result of years and years of success on the part of car manufacturers. Considering their complexity, cars are among the most reliable machines ever made. That’s been proven in practice, and most of the time we don’t even think of the possibility of the brakes not catching when the pedal is depressed.

You trust your personal missile to work the way you trust a fridge to stay cold. Let’s take a moment to appreciate how amazing that is.

Self-driving cars, however, introduce new factors, unproven ones. Their proponents are correct when they say that autonomous vehicles will revolutionize the road, reduce traffic deaths, shorten commutes, and so on. Computers are going to be much better drivers than us in countless ways. They have superior reflexes, can see in all directions simultaneously (not to mention in the dark, and around or through obstacles), communicate and collaborate instantly with nearby vehicles, immediately sense and potentially fix technical problems… the list goes on.

But until these amazing abilities lose their luster and become just more pieces of the transportation tech infrastructure that we trust, they’ll be suspect. That part we can’t really accelerate except, paradoxically, by taking it slow and making sure no highly visible outlier events (like that fatal Uber crash) arrest the zeitgeist and set back that trust by years. Make haste slowly, as they say. Few people remember anti-lock brakes saving their lives, though it’s probably happened to several people reading this right now — it just quietly reinforced our implicit trust in the vehicle. And no one will remember when their car improved their commute by 5 minutes with a hundred tiny improvements. But they sure do remember that Toyotas killed dozens with bad software that locked the car’s accelerator.

The second part of that trust is explicit: something that has to be communicated, learned, something of which we are consciously aware.

For cars there aren’t many of these. The rules of the road differ widely and are flexible — some places more than others — and on ordinary highways and city streets we operate our vehicles almost instinctively. When we are in the role of pedestrian, we behave as a self-aware part of the ecosystem — we walk, we cross, we step in front of moving cars because we assume the driver will see us, avoid us, stop before they hit us. This is because we assume that behind the wheel of every car is an attentive human who will behave according to the rules we have all internalized.

Nevertheless we have signals, even if we don’t realize we’re sending or receiving them; how else can you explain how you know that truck up there is going to change lanes five seconds before it turns its blinker on? How else can you be so sure a car isn’t going to stop, and hold a friend back from stepping into the crosswalk? Just because we don’t quite understand this signaling doesn’t mean we don’t exert it or assess it all the time. Making eye contact, standing in a place implying the need to cross, waving, making space for a merge, short honks and long honks… It’s a learned skill, and a culture- or even city-specific one at that.

Cold blooded

With self-driving cars there is no humanity in which to place our trust. We trust other people because they’re like us; computers are not like us.

In time autonomous vehicles of all kinds will become as much a part of the accepted ecosystem as automated lights and bridges, metered freeway entrances, parking monitoring systems, and so on. Until that time we will have to learn the rules by which autonomous vehicles operate, both through observation and straightforward instruction.

Some of these habits will be easily understood. For instance, maybe autonomous vehicles will never, ever try to make a U-turn by crossing a double yellow line. I try not to myself, but you know how it is: I’d rather do that than go an extra three blocks to do it legally. But an AV will perhaps scrupulously adhere to traffic laws like that. So there’s one possible rule.

Others might not be quite so hard and fast. Merging and lane changes can be messy, but perhaps it will be the established pattern that AVs will always brake and join the line further back rather than try to move up a spot. This requires a little more context and the behavior is more adaptive, but it’s still a relatively simple pattern that you can perceive and react to, or even exploit to get ahead a bit (please don’t).

It’s important to note that, like the trolley problem “solutions,” there’s no huge list of car behaviors that says, always drop back when merging, always give the right of way, never this, this if that, etc. Just as our decision to switch or not switch tracks proceeds from a higher-order process of morality in our minds, these autonomous behaviors will be the natural result of a large set of complicated evaluations and decision-making processes that weigh hundreds of factors like positions of nearby cars, speed, lane width, etc. But I think they’ll be reliable enough in some ways and in some behaviors that there will definitely be a self-driving “style” that doesn’t deviate too much.

Although few if any of these behaviors are likely to be dangerous in and of themselves, it will be helpful to understand them if you are going to be sharing the road with them. Imperfect knowledge is how we get accidents to begin with. Establishing an explicit trust relationship with self-driving vehicles is part of the process of accepting them into our everyday lives.

But people naturally want to take things to their logical ends, even if those ends aren’t really logical. And as you consider the many ways AVs will drive and how they will navigate certain situations, the “but what if…” scenarios naturally get more and more dire and specific as variables approach limits, and ultimately you arrive at the AV equivalent of the trolley problem that we started with. What happens when the car has to make a choice between people?

It’s not that anyone even thinks it will happen to them. What they want to know, as a prerequisite to trust, is that the system is not unprepared, and that the prepared response is not one that puts them in danger. People don’t want to be the victim of the self-driving car’s logic, even theoretically — that would be an impassable barrier to trust.

Because whatever the scenario, whoever it “chooses” between, one of those parties is undeniably the victim. The car got on the road and, following its ill logic to the bitter end, homed in on and struck this person rather than that one.

If neither of the people in this AV-trolley problem can by any reasonable measure be determined to be the “correct” one to choose, especially from their perspective (which must after all be considered), what else is there to do? Well, we have to remember that there’s one other “person” involved here: the car itself.

Is it self-destruction if you don’t have a self?

My suggestion is simply that it be made a universal policy that should a self-driving car be put in a situation where it is at serious risk of striking a person, it must take whatever means it can to avoid it — up to and including destroying itself, with no consideration for its own “life.” Essentially, when presented with the possibility of murder, an autonomous vehicle must always prefer suicide.

It doesn’t have to detonate itself or anything. It just needs to take itself out of the action, and a robust improvisational engine can be produced to that end just as well as for avoiding swerving trucks, changing lanes suddenly, and any other behavior. There are telephone poles, parked cars, trees — take your pick; any of these things will do as long as they stop the car.
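
For what it’s worth, the rule can be stated almost mechanically. Here’s a deliberately crude sketch; the maneuvers, harm scores and strict ordering are all invented for illustration, and as argued earlier, a real planner weighs hundreds of factors rather than a three-number tuple:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_harm: float  # estimated harm to people outside the car
    occupant_harm: float    # estimated harm to consenting occupants
    vehicle_damage: float   # damage to the car itself

def choose(options: list) -> Maneuver:
    # Lexicographic ordering: harm to non-consenting people outside
    # the car dominates everything else, and the car's own destruction
    # is the cheapest entry in the ledger.
    return min(options, key=lambda m: (
        m.pedestrian_harm, m.occupant_harm, m.vehicle_damage))

options = [
    Maneuver("continue and strike pedestrian", 1.0, 0.0, 0.1),
    Maneuver("swerve into telephone pole", 0.0, 0.2, 1.0),
]
print(choose(options).name)  # "swerve into telephone pole"
```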

The objection, of course, is that there is likely to be a person inside the self-driving car. Yes — but this person has consented to the inherent risk involved, while the people on the street haven’t. While much of the moral calculus of the trolley problem is academic, this bit actually makes a difference.

Consenting to the risks of using a self-driving system means the occupant is acknowledging the possibility that, should such a situation arise, however remote the chance, they would be the person who may be the victim of it. They are the ones who will explicitly consent to trust their lives to the logic of the self-driving system. Furthermore, as a practical consideration, the occupant is, so to speak, on the soft side of the car.

As we’ve already established, it’s unlikely a car will ever have to do this. But what it does is provide a substantial and easily understood answer when someone asks the perfectly natural question of what an autonomous vehicle will do when it is careening towards a pedestrian. Simple: it will do its level best to destroy itself first.

There are extremely specific and dire situations that there will never be a solution to as long as there are moving cars and moving people, and self-driving vehicles are no exception to that. You’ll never run out of imaginary scenarios for any system, human or automated, to fail. But it is in order to reduce the number of such scenarios and help establish trust, not to render tragedy impossible, that every self-driving car should robustly and provably prefer its own destruction to that of a person outside itself.

We are not aiming for a complete solution, just an intuitive one. Self-driving cars will, say, always brake to merge, never cross a double yellow in normal traffic, and so on and so forth — and will crash themselves rather than hit a pedestrian. Regardless of the specifics and limitations of the model, that’s a behavior anyone can understand, including those who must consent to it.

Although even the most hard-bitten existentialist would be unlikely to support a systematic framework for suicide, it makes a difference when “suicide” is more likely to mean a fender bender and damage to one’s pocket rather than the death or injury of another. To destroy oneself is different when there is no self to destroy, and practically speaking the risk to passengers, equipped with airbags and seat belts, is far less than the risk to pedestrians.

How exactly would this all be accomplished in practice? Well, it could of course be required by transportation authorities, like seat belts and other safety measures. But unlike seat belts, the proprietary and complex inner workings of an autonomous system aren’t easily verifiable by non-experts. There are ways, but we should be wary of putting ourselves in a position where we have to trust not a technology but the company that administrates it. Either can fail us, but only one can betray us.

Perhaps there will be no need to rely on regulators, though: No brand of car wants to have its vehicles associated with running down a pedestrian. Today there are probably more accidents in Civics and Camrys than anything else, but no one thinks that makes them dangerous to drive — it just means more people drive them, and people make mistakes like anyone else.

On the other hand, if an automaker’s brand of self-driving vehicle hits someone, it’s obvious (and right) that the company will bear the blame. And consumers will see that — for one thing, it will be widely reported, and for another, there will probably be highly robust tracking of this kind of thing, including footage and logs from these accidents.

If automakers want to avoid pedestrian strikes and fatalities, they will incorporate something like this self-destruction protocol in their cars as a last line of defense, even if it leads to a net increase in autonomous collisions. It would be much preferable to be known as having a cautious AI than a killer one. So I think that, like other safety mechanisms, this or something like it will be included and, I hope, publicized on every car not because it’s required, but because it makes sense.

People deserve to know how things like self-driving cars work, even if few people on the planet can truly understand the complex computations and algorithms that govern them. They should, like regular cars, be able to be understood at a surface level. This case of understanding them at an extreme end of their behavior is not one that will be relevant every day, but it is a crucial one because it is something that matters to us at a gut level: knowing that these cars aren’t evaluating us as targets via mysterious and fundamentally inadequate algorithms.

To repurpose Camus: “These are facts the heart can feel; Yet they call for careful study before they become clear to the intellect.” Start with a simple solution we feel to be just and work backward from there. And soon — because this is no longer a thought experiment.