Hyundai and Seoul set to test self-driving cars on city roads starting next month

Hyundai has signed a memorandum of understanding (MOU) with the city of Seoul to begin testing six autonomous vehicles on roads in the Gangnam district next month, BusinessKorea reports. The arrangement specifies that six vehicles will begin testing on 23 roads in December, expanding to as many as 15 of the cars, all hydrogen fuel cell electric vehicles, in testing by 2021.

Seoul will provide smart infrastructure to communicate with the vehicles, including connected traffic signals, and will also relay traffic and other information as frequently as every 0.1 seconds to the Hyundai vehicles. That kind of real-time information flow should help considerably with providing the visibility necessary to optimize safe operation of the autonomous test cars. For its part, Hyundai will be sharing information too, providing data around the self-driving tests that will be freely available to schools and other organizations looking to test their own self-driving technology within the city.
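To get a feel for the scale involved: a 0.1-second update interval means a vehicle consuming such a feed must ingest and act on ten infrastructure messages per second. A minimal sketch of such a consumer follows; the message fields and JSON framing here are illustrative assumptions, not details from the MOU (real V2X payloads such as SPaT/MAP messages are far richer):

```python
import json

UPDATE_INTERVAL_S = 0.1  # Seoul's stated maximum update cadence (10 Hz)

def handle_update(raw: str) -> dict:
    """Parse one hypothetical infrastructure message (signal phase + traffic)."""
    msg = json.loads(raw)
    # These field names are invented for illustration only.
    return {
        "signal_state": msg.get("signal_state", "unknown"),
        "seconds_to_change": msg.get("seconds_to_change"),
        "congestion_level": msg.get("congestion_level"),
    }

# Simulated feed: two updates arriving 0.1 s apart.
feed = [
    '{"signal_state": "green", "seconds_to_change": 4.2, "congestion_level": 2}',
    '{"signal_state": "yellow", "seconds_to_change": 1.0, "congestion_level": 2}',
]

for raw in feed:
    state = handle_update(raw)
    print(state["signal_state"], state["seconds_to_change"])
```

Even this toy loop makes the latency budget concrete: whatever the planner does with each update has to finish well inside 100 milliseconds before the next one lands.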

Together, Seoul and Hyundai hope to use the partnership to build out a world-leading downtown self-driving technology deployment, and to have that evolve into a commercial service, complete with dedicated autonomous vehicle manufacture by 2024.

The Math of Sisyphus

“There is but one truly serious question in philosophy, and that is suicide,” wrote Albert Camus in The Myth of Sisyphus. This is equally true for a human navigating an absurd existence, and an artificial intelligence navigating a morally insoluble situation.

As AI-powered vehicles take the road, questions about their behavior are inevitable — and the escalation to matters of life or death equally so. This curiosity often takes the form of asking whom the car should steer for should it have no choice but to hit one of a variety of innocent bystanders. Men? Women? Old people? Young people? Criminals? People with bad credit?

There are a number of reasons this question is a silly one, yet at the same time a deeply important one. But as far as I’m concerned, there is only one real solution that makes sense: when presented with the possibility of taking a life, the car must always first attempt to take its own.

The trolley non-problem

First, let’s get a few things straight about the question we’re attempting to answer.

There is unequivocally an air of contrivance to the situations under discussion. That’s because they’re not plausible real-world situations but mutations of a venerable thought experiment often called the “Trolley Problem.” The most familiar version dates to the ’60s, but versions of it can be found going back to discussions of utilitarianism, and before that in classical philosophy.

The problem goes: A train car is out of control, and it’s going to hit a family of five who are trapped on the tracks. Fortunately, you happen to be standing next to a lever that will divert the car to another track… where there’s only one person. Do you pull the switch? Okay, but what if there are ten people on the first track? What if the person on the second one is your sister? What if they’re terminally ill? If you choose not to act, is that in itself an act, leaving you responsible for those deaths? The possibilities multiply when it’s a car on a street: for example, what if one of the people is crossing against the light — does that make it all their fault? But what if they’re blind?

And so on. It’s a revealing and flexible exercise that makes people (frequently undergrads taking Intro to Philosophy) examine the many questions involved in how we value the lives of others, how we view our own responsibility, and so on.

But it isn’t a good way to create an actionable rule for real-life use.

After all, you don’t see convoluted moral logic on signs at railroad switches instructing operators on an elaborate hierarchy of the values of various lives. This is because the actions and outcomes are a red herring; the point of the exercise is to illustrate the fluidity of our ethical system. There’s no trick to the setup, no secret “correct” answer to calculate. The goal is not even to find an answer, but to generate discussion and insight. So while it’s an interesting question, it’s fundamentally a question for humans, and consequently not really one our cars can or should be expected to answer, even with strict rules from their human engineers.

And it must also be acknowledged that these situations are going to be vanishingly rare. Most of the canonical versions of this thought experiment – five people versus one, or a kid and an old person – are so astronomically unlikely to occur that even if we did find a best method that a car should always choose, it’ll only be relevant once every trillion miles driven or so. And who’s to say whether that solution will be the right one in another country, among people with different values, or in ten or twenty years?

No matter how many senses and compute units a car has, it can no more calculate its way out of an ethical conundrum than Sisyphus could have calculated a better path by which to push his boulder up the mountain. The idea is, so to speak, absurd.

We can’t have our cars attempting to solve a moral question that we ourselves can’t. Yet somehow that doesn’t stop us from thinking about it, from wanting an answer. We want to somehow be prepared for the situation even though it may never arise. What’s to be done?

Implicit and explicit trust

The entire self-driving car ecosystem has to be built on trust. That trust will grow over time, but there are two aspects to be considered.

The first is implicit trust. This is the kind of trust we have in the cars we drive today: that despite being one-ton metal missiles propelled by a series of explosions and filled with high-octane fuel, they won’t blow up, fail to stop when we hit the brakes, spin out when we turn the wheel, and so on. That trust is the result of years and years of success on the part of car manufacturers. Considering their complexity, cars are among the most reliable machines ever made. That’s been proven in practice, and most of the time we don’t even think of the possibility of the brakes not catching when the pedal is depressed.

You trust your personal missile to work the way you trust a fridge to stay cold. Let’s take a moment to appreciate how amazing that is.

Self-driving cars, however, introduce new factors, unproven ones. Their proponents are correct when they say that autonomous vehicles will revolutionize the road, reduce traffic deaths, shorten commutes, and so on. Computers are going to be much better drivers than us in countless ways. They have superior reflexes, can see in all directions simultaneously (not to mention in the dark, and around or through obstacles), communicate and collaborate instantly with nearby vehicles, immediately sense and potentially fix technical problems… the list goes on.

But until these amazing abilities lose their luster and become just more pieces of the transportation tech infrastructure that we trust, they’ll be suspect. That part we can’t really accelerate except, paradoxically, by taking it slow and making sure no highly visible outlier events (like that fatal Uber crash) arrest the zeitgeist and set back that trust by years. Make haste slowly, as they say. Few people remember anti-lock brakes saving their lives, though it’s probably happened to several people reading this right now — it just quietly reinforced our implicit trust in the vehicle. And no one will remember when their car improved their commute by 5 minutes with a hundred tiny improvements. But they sure do remember the reports that blamed faulty Toyota software for accelerators that stuck open and killed dozens.

The second part of that trust is explicit: something that has to be communicated, learned, something of which we are consciously aware.

For cars there aren’t many of these. The rules of the road differ widely and are flexible — some places more than others — and on ordinary highways and city streets we operate our vehicles almost instinctively. When we are in the role of pedestrian, we behave as a self-aware part of the ecosystem — we walk, we cross, we step in front of moving cars because we assume the driver will see us, avoid us, stop before they hit us. This is because we assume that behind the wheel of every car is an attentive human who will behave according to the rules we have all internalized.

Nevertheless we have signals, even if we don’t realize we’re sending or receiving them; how else can you explain how you know that truck up there is going to change lanes five seconds before it turns its blinker on? How else can you be so sure a car isn’t going to stop, and hold a friend back from stepping into the crosswalk? Just because we don’t quite understand it doesn’t mean we don’t exert it or assess it all the time. Making eye contact, standing in a place implying the need to cross, waving, making space for a merge, short honks and long honks… It’s a learned skill, and a culture- or even city-specific one at that.

Cold blooded

With self-driving cars there is no humanity in which to place our trust. We trust other people because they’re like us; computers are not like us.

In time autonomous vehicles of all kinds will become as much a part of the accepted ecosystem as automated lights and bridges, metered freeway entrances, parking monitoring systems, and so on. Until that time we will have to learn the rules by which autonomous vehicles operate, both through observation and straightforward instruction.

Some of these habits will be easily understood. For instance, maybe autonomous vehicles will never, ever try to make a U-turn by crossing a double yellow line. I try not to myself, but you know how it is: I’d rather do that than go an extra three blocks to do it legally. But an AV will perhaps scrupulously adhere to traffic laws like that. So there’s one possible rule.

Others might not be quite so hard and fast. Merging and lane changes can be messy, but perhaps it will be the established pattern that AVs will always brake and join the line further back rather than try to move up a spot. This requires a little more context and the behavior is more adaptive, but it’s still a relatively simple pattern that you can perceive and react to, or even exploit to get ahead a bit (please don’t).

It’s important to note that, like the trolley problem “solutions,” there’s no huge list of car behaviors that says, always drop back when merging, always give the right of way, never this, this if that, etc. Just as our decision to switch or not switch tracks proceeds from a higher-order process of morality in our minds, these autonomous behaviors will be the natural result of a large set of complicated evaluations and decision-making processes that weigh hundreds of factors like positions of nearby cars, speed, lane width, etc. But I think they’ll be reliable enough in some ways and in some behaviors that there will definitely be a self-driving “style” that doesn’t deviate too much.

Although few if any of these behaviors are likely to be dangerous in and of themselves, it will be helpful to understand them if you are going to be sharing the road with them. Imperfect knowledge is how we get accidents to begin with. Establishing an explicit trust relationship with self-driving vehicles is part of the process of accepting them into our everyday lives.

But people naturally want to take things to their logical ends, even if those ends aren’t really logical. And as you consider the many ways AVs will drive and how they will navigate certain situations, the “but what if…” scenarios naturally get more and more dire and specific as variables approach limits, and ultimately you arrive at the AV equivalent of the trolley problem that we started with. What happens when the car has to make a choice between people?

It’s not that anyone even thinks it will happen to them. What they want to know, as a prerequisite to trust, is that the system is not unprepared, and that the prepared response is not one that puts them in danger. People don’t want to be the victim of the self-driving car’s logic, even theoretically — that would be an impassable barrier to trust.

Because whatever the scenario, whoever it “chooses” between, one of those parties is undeniably the victim. The car got on the road and, following its ill logic to the bitter end, homed in on and struck this person rather than that one.

If neither of the people in this AV-trolley problem can by any reasonable measure be determined to be the “correct” one to choose, especially from their perspective (which must after all be considered), what else is there to do? Well, we have to remember that there’s one other “person” involved here: the car itself.

Is it self-destruction if you don’t have a self?

My suggestion is simply that it be made a universal policy that should a self-driving car be put in a situation where it is at serious risk of striking a person, it must take whatever means it can to avoid it — up to and including destroying itself, with no consideration for its own “life.” Essentially, when presented with the possibility of murder, an autonomous vehicle must always prefer suicide.

It doesn’t have to detonate itself or anything. It just needs to take itself out of the action, and a robust improvisational engine can be produced to that end just as well as for avoiding swerving trucks, changing lanes suddenly, and any other behavior. There are telephone poles, parked cars, trees — take your pick; any of these things will do as long as they stop the car.
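One way to picture such a policy (purely a sketch of the essay's idea, not any manufacturer's actual planner) is as a cost function over candidate maneuvers in which striking a person is infinitely costly, so any self-sacrificing option, whether a pole, a parked car, or a tree, always wins. The maneuver names, cost scales, and weights below are all invented for illustration:

```python
import math

def maneuver_cost(m: dict) -> float:
    """Score a candidate maneuver; lower is better.

    Hitting a person dominates every other cost, encoding the rule that
    the car must prefer its own destruction to striking a pedestrian.
    """
    if m["hits_person"]:
        return math.inf                  # never chosen if any alternative exists
    cost = m["damage_to_self"]           # property damage, 0..1 scale (assumed)
    cost += 0.5 * m["occupant_risk"]     # occupants consented to some risk (the essay's premise)
    return cost

def choose(maneuvers: list[dict]) -> dict:
    return min(maneuvers, key=maneuver_cost)

options = [
    {"name": "continue", "hits_person": True, "damage_to_self": 0.0, "occupant_risk": 0.0},
    {"name": "swerve_into_pole", "hits_person": False, "damage_to_self": 0.9, "occupant_risk": 0.3},
    {"name": "brake_into_parked_car", "hits_person": False, "damage_to_self": 0.6, "occupant_risk": 0.2},
]

print(choose(options)["name"])  # → brake_into_parked_car
```

The point of the infinite cost is that no amount of vehicle damage or (limited) occupant risk can ever trade off against a pedestrian strike; the car only ranks among the self-sacrificing options.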

The objection, of course, is that there is likely to be a person inside the self-driving car. Yes — but this person has consented to the inherent risk involved, while the people on the street haven’t. While much of the moral calculus of the trolley problem is academic, this bit actually makes a difference.

Consenting to the risks of using a self-driving system means the occupant is acknowledging the possibility that should such a situation arise, however remote the possibility, they would be the person who may be the victim of it. They are the ones who will explicitly consent to trust their lives to the logic of the self-driving system. Furthermore, as a practical consideration, the occupant is so to speak on the soft side of the car.

As we’ve already established, it’s unlikely a car will ever have to do this. But what it does is provide a substantial and easily understood answer when someone asks the perfectly natural question of what an autonomous vehicle will do when it is careening towards a pedestrian. Simple: it will do its level best to destroy itself first.

There are extremely specific and dire situations that there will never be a solution to as long as there are moving cars and moving people, and self-driving vehicles are no exception to that. You’ll never run out of imaginary scenarios for any system, human or automated, to fail. But it is in order to reduce the number of such scenarios and help establish trust, not to render tragedy impossible, that every self-driving car should robustly and provably prefer its own destruction to that of a person outside itself.

We are not aiming for a complete solution, just an intuitive one. Self-driving cars will, say, always brake to merge, never cross a double yellow in normal traffic, and so on and so forth — and will crash themselves rather than hit a pedestrian. Regardless of the specifics and limitations of the model, that’s a behavior anyone can understand, including those who must consent to it.

Although even the most hard-bitten existentialist would be unlikely to support a systematic framework for suicide, it makes a difference when “suicide” is more likely to mean a fender bender and damage to one’s pocket rather than the death or injury of another. To destroy oneself is different when there is no self to destroy, and practically speaking the risk to passengers, equipped with airbags and seat belts, is far less than the risk to pedestrians.

How exactly would this all be accomplished in practice? Well, it could of course be required by transportation authorities, like seat belts and other safety measures. But unlike seat belts, the proprietary and complex inner workings of an autonomous system aren’t easily verifiable by non-experts. There are ways, but we should be wary of putting ourselves in a position where we have to trust not a technology but the company that administrates it. Either can fail us, but only one can betray us.

Perhaps there will be no need to rely on regulators, though: No brand of car wants to have its vehicles associated with running down a pedestrian. Today there are probably more accidents in Civics and Camrys than anything else, but no one thinks that makes them dangerous to drive — it just means more people drive them, and people make mistakes like anyone else.

On the other hand, if an automaker’s brand of self-driving vehicle hits someone, it’s obvious (and right) that the company will bear the blame. And consumers will see that — for one thing, it will be widely reported, and for another, there will probably be highly robust tracking of this kind of thing, including footage and logs from these accidents.

If automakers want to avoid pedestrian strikes and fatalities, they will incorporate something like this self-destruction protocol in their cars as a last line of defense, even if it leads to a net increase in autonomous collisions. It would be much preferable to be known as having a cautious AI than a killer one. So I think that, like other safety mechanisms, this or something like it will be included and, I hope, publicized on every car not because it’s required, but because it makes sense.

People deserve to know how things like self-driving cars work, even if few people on the planet can truly understand the complex computations and algorithms that govern them. They should, like regular cars, be able to be understood at a surface level. This case of understanding them at an extreme end of their behavior is not one that will be relevant every day, but it is a crucial one because it is something that matters to us at a gut level: knowing that these cars aren’t evaluating us as targets via mysterious and fundamentally inadequate algorithms.

To repurpose Camus: “These are facts the heart can feel; yet they call for careful study before they become clear to the intellect.” Start with a simple solution we feel to be just and work backward from there. And soon — because this is no longer a thought experiment.

Hailing a driverless ride in a Waymo

“Congrats! This car is all yours, with no one up front,” the pop-up notification from the Waymo app reads. “This ride will be different. With no one else in the car, Waymo will do all the driving. Enjoy this free ride on us!”

Moments later, an empty Chrysler Pacifica minivan appears and navigates its way to my location near a park in Chandler, the Phoenix suburb where Waymo has been testing its autonomous vehicles since 2016.

Waymo, the Google self-driving-project-turned-Alphabet unit, has given demos of its autonomous vehicles before. More than a dozen journalists experienced driverless rides in 2017 on a closed course at Waymo’s testing facility in Castle; and Steve Mahan, who is legally blind, took a driverless ride in the company’s Firefly prototype on Austin’s city streets way back in 2015.

But this driverless ride is different — and not just because it involved an unprotected left-hand turn, busy city streets or that the Waymo One app was used to hail the ride. It marks the beginning of a driverless ride-hailing service that is now available to members of its early rider program and will eventually open to the public.

It’s a milestone that has been promised — and has remained just out of reach — for years.

In 2017, Waymo CEO John Krafcik declared on stage at the Lisbon Web Summit that “fully self-driving cars are here.” Krafcik’s show of confidence and accompanying blog post implied that the “race to autonomy” was almost over. But it wasn’t.

Nearly two years after Krafcik’s comments, vehicles driven by humans — not computers — still clog the roads in Phoenix. The majority of Waymo’s fleet of self-driving Chrysler Pacifica minivans in Arizona have human safety drivers behind the wheel; and the few driverless ones have been limited to testing only.

Despite some progress, Waymo’s promise of a driverless future has seemed destined to be forever overshadowed by stagnation. Until now.

Waymo wouldn’t share specific numbers on just how many driverless rides it would be giving, only saying that it continues to ramp up its operations. Here’s what we do know. There are hundreds of customers in its early rider program, all of whom will have access to this offering. These early riders can’t request a fully driverless ride. Instead, they are matched with a driverless car if it’s nearby.

There are, of course, caveats to this milestone. Waymo is conducting these “completely driverless” rides in a controlled geofenced environment. Early rider program members are people who are selected based on what ZIP code they live in and are required to sign NDAs. And the rides are free, at least for now.

Still, as I buckle my seatbelt and take stock of the empty driver’s seat, it’s hard not to be struck, at least for a fleeting moment, by the achievement.

It would be a mistake to think that the job is done. This moment marks the start of another, potentially lengthy, chapter in the development of driverless mobility rather than a sign that ubiquitous autonomy is finally at hand.

Futuristic joyride   

A driverless ride sounds like a futuristic joyride, but it’s obvious from the outset that the absence of a human touch presents a wealth of practical and psychological challenges.

As soon as I’m seated, belted and underway, the car automatically calls Waymo’s rider assistance team to address any questions or concerns about the driverless ride — bringing a brief human touch to the experience.

I’ve been riding in autonomous vehicles on public roads since late 2016. All of those rides had human safety drivers behind the wheel. Seeing an empty driver’s seat at 45 miles per hour, or a steering wheel spinning in empty space as it navigates suburban traffic, feels inescapably surreal. The sensation is akin to one of those dreams where everything is the picture of normalcy except for that one detail — the clock with a human face or the cat dressed in boots and walking with a cane.

Other than that niggling feeling that I might wake up at any moment, my 10-minute ride from a park to a coffee shop was very much like any other ride in a “self-driving” car. There were moments where the self-driving system’s driving impressed, like the way it caught an unprotected left turn just as the traffic signal turned yellow or how its acceleration matched surrounding traffic. The vehicle seemed to even have mastered the more human-like driving skill of crawling forward at a stop sign to signal its intent.

Only a few typical quirks, like moments of overly cautious traffic spacing and overactive path planning, betrayed the fact that a computer was in control. A more typical rider, specifically one who doesn’t regularly practice their version of the driving Turing Test, might not have even noticed them.

How safe is ‘safe enough’?

Waymo’s decision to put me in a fully driverless car on public roads anywhere speaks to the confidence it puts in its “driver,” but the company wasn’t able to point to one specific source of that confidence.

Waymo’s Director of Product Saswat Panigrahi declined to share how many driverless miles Waymo had accumulated in Chandler, or what specific benchmarks proved that its driver was “safe enough” to handle the risk of a fully driverless ride. Citing the firm’s 10 million real-world miles and 10 billion simulation miles, Panigrahi argued that Waymo’s confidence comes from “a holistic picture.”

“Autonomous driving is complex enough not to rely on a singular metric,” Panigrahi said.

It’s a sensible, albeit frustrating, argument, given that the most significant open question hanging over the autonomous drive space is “how safe is safe enough?” Absent more details, it’s hard to say if my driverless ride reflects a significant benchmark in Waymo’s broader technical maturity or simply its confidence in a relatively unchallenging route.

The company’s driverless rides are currently free and only taking place in a geofenced area that includes parts of Chandler, Mesa and Tempe. This driverless territory is smaller than Waymo’s standard domain in the Phoenix suburbs, implying that confidence levels are still highly situational. Even Waymo vehicles with safety drivers don’t yet take riders to one of the most popular ride-hailing destinations: the airport.

The complexities of driverless

Panigrahi deflected questions about the proliferation of driverless rides, saying only that the number has been increasing and will continue to do so. Waymo has about 600 autonomous vehicles in its fleet across all geographies, including Mountain View, Calif. The majority of those vehicles are in Phoenix, according to the company.

However, Panigrahi did reveal that the primary limiting factor is applying what it learned from research into early rider experiences.

“This is an experience that you can’t really learn from someone else,” Panigrahi said. “This is truly new.”

Some of the most difficult challenges of driverless mobility only emerge once riders are combined with the absence of a human behind the wheel. For example, developing the technologies and protocols that allow a driverless Waymo to detect and pull over for emergency response vehicles and even allow emergency services to take over control was a complex task that required extensive testing and collaboration with local authorities.

“This was an entire area that, before doing full driverless, we didn’t have to worry as much about,” Panigrahi said.

The user experience is another crux of driverless ride-hailing. It’s an area to which Waymo has dedicated considerable time and resources — and for good reason. User experience turns out to hold some surprisingly thorny challenges once humans are removed from the equation.

The everyday interactions between a passenger and an Uber or Lyft driver, such as conversations about pick-up and drop-offs as well as sudden changes in plans, become more complex when the driver is a computer. It’s an area that Waymo’s user experience research (UXR) team admits it is still figuring out.

Computers and sensors may already be better than humans at specific driving capabilities, like staying in lanes or avoiding obstacles (especially over long periods of time), but they lack the human flexibility and adaptability needed to be a good mobility provider.

Learning how to either handle or avoid the complexities that humans accomplish with little effort requires a mix of extensive experience and targeted research into areas like behavioral psychology that tech companies can seem allergic to.

Not just a tech problem

Waymo’s early driverless rides mark the beginning of a new phase of development filled with fresh challenges that can’t be solved with technology alone. Research into human behavior, building up expertise in the stochastic interactions of the modern urban curbside, and developing relationships and protocols with local authorities are all deeply time-consuming efforts. These are not challenges that Waymo can simply throw technology at, but require painstaking work by humans who understand other humans.

Some of these challenges are relatively straightforward. For example, it didn’t take long for Waymo to realize that dropping off riders as close as possible to the entrance of a Walmart was actually less convenient due to the high volume of foot traffic. But understanding that pick-up and drop-off isn’t ruled by a single principle (e.g. closer to the entrance is always better) hints at a hidden wealth of complexity that Waymo’s vehicles need to master.

As frustrating as the slow pace of self-driving proliferation is, the fact that Waymo is embracing these challenges and taking the time to address them is encouraging.

The first chapter of autonomous drive technology development was focused on the purely technical challenge of making computers drive. Weaving Waymo’s computer “driver” into the fabric of society requires an understanding of something even more mysterious and complex: people and how they interact with each other and the environment around them.

Given how fundamentally autonomous mobility could impact our society and cities, it’s reassuring to know that one of the technology’s leading developers is taking the time to understand and adapt to them.

MIT uses shadows to help autonomous vehicles see around corners

We’re still not at the point where autonomous vehicle systems can best human drivers in all scenarios, but the hope is that eventually, technology being incorporated into self-driving cars will be capable of things humans can’t even fathom – like seeing around corners. There’s been a lot of work and research put into this concept over the years, but MIT’s newest system uses relatively affordable and readily available technology to pull off this seeming magic trick.

MIT researchers (in a research project backed by Toyota Research Institute) created a system that uses minute changes in shadows to predict whether or not a vehicle can expect a moving object to come around a corner, which could be an effective system for use not only in self-driving cars, but also in robots that navigate shared spaces with humans – like autonomous hospital attendants, for instance.

This system employs standard optical cameras, and monitors changes in the strength and intensity of light using a series of computer vision techniques to arrive at a final determination of whether shadows are being projected by moving or stationary objects, and what the path of said object might be.
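The core intuition, that a moving but occluded object subtly changes the light falling on the floor near a corner, can be caricatured with simple frame differencing. This sketch is only my illustration of the general idea; the actual MIT pipeline uses far more sophisticated signal amplification and image registration than a raw difference threshold:

```python
import numpy as np

def shadow_motion_score(frames: np.ndarray, threshold: float = 1.0) -> bool:
    """Return True if inter-frame intensity changes suggest a moving shadow.

    frames: (T, H, W) grayscale stack of the floor region near a corner.
    A static scene yields near-zero frame-to-frame differences; a creeping
    shadow raises the mean absolute change. The threshold is an illustrative
    constant, not a tuned value.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return float(diffs.mean()) > threshold

rng = np.random.default_rng(0)
# Static scene: uniform brightness plus a little sensor noise.
static = np.full((10, 32, 32), 100.0) + rng.normal(0, 0.1, (10, 32, 32))

# Simulated moving shadow: a darkened region sweeps across the patch.
moving = static.copy()
for t in range(10):
    moving[t, :, : 3 * t] -= 20.0  # shadowed area grows three columns per frame

print(shadow_motion_score(static), shadow_motion_score(moving))  # → False True
```

Even this crude version shows why ordinary cameras suffice: the signal is a temporal change in floor brightness, not a geometric reconstruction, so no depth sensing is required to detect it.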

In testing so far, this method has actually been able to best similar systems already in use that employ LiDAR imaging in place of photographic cameras and that don’t work around corners. In fact, it beats the LiDAR method by over half a second, which is a long time in the world of self-driving vehicles, and could mean the difference between avoiding an accident and, well, not.

For now, though, the experiment is limited: It has only been tested in indoor lighting conditions, for instance, and the team has to do quite a bit of work before they can adapt it to higher speed movement and highly variable outdoor lighting conditions. Still, it’s a promising step and eventually might help autonomous vehicles better anticipate, as well as react to, the movement of pedestrians, cyclists and other cars on the road.

Hyundai is launching Botride, a robotaxi service in California with Pony.ai and Via

A fleet of electric, autonomous Hyundai Kona crossovers — equipped with a self-driving system from Chinese autonomous startup Pony.ai and Via’s ride-hailing platform — will start shuttling customers on public roads next week.

The robotaxi service, called BotRide, will operate on public roads in Irvine, California, beginning November 4. This isn’t a driverless service; there will be a human safety driver behind the wheel at all times. But it is one of the few ride-hailing pilots on California roads. Only four companies, AutoX, Pony.ai, Waymo and Zoox, have permission to operate a ride-hailing service using autonomous vehicles in the state of California.

Customers will be able to order rides through a smartphone app, which will direct passengers to nearby stops for pickup and drop-off. Via’s expertise is in shared rides, and this platform aims for the same multiple-rider goal. Via’s platform handles on-demand ride-hailing features such as booking, passenger and vehicle assignment, and vehicle identification (via QR code). Via has two sides to its business. The company operates consumer-facing shuttles in Chicago, Washington, D.C. and New York. It also partners with cities and transportation authorities — and now automakers launching robotaxi services — giving clients access to its platform to deploy their own shuttles.

Hyundai said BotRide is “validating its user experience in preparation for a fully driverless future.” Hyundai didn’t explain when this driverless future might arrive. Whatever this driverless future ends up looking like, Hyundai sees this pilot as a critical marker along the way.

Coverage area of Hyundai robotaxi pilot

Hyundai is using BotRide to study consumer behavior in an autonomous ride-sharing environment, according to Christopher Chang, head of the business development, strategy and technology division at Hyundai Motor Company.

“The BotRide pilot represents an important step in the deployment and eventual commercialization of a growing new mobility business,” said Daniel Han, manager, Advanced Product Strategy, Hyundai Motor America.

Hyundai might be the household name behind BotRide, but Pony.ai and Via are doing much of the heavy lifting. Pony.ai is a relative newcomer to the AV world, but it has already raised $300 million on a $1.7 billion valuation and locked in partnerships with Toyota and Hyundai.

The company, which has operations in China and California and about 500 employees globally, was founded in late 2016 with backing from Sequoia Capital China, IDG Capital and Legend Capital.

It’s also one of the few autonomous vehicle companies to have both a permit with the California Department of Motor Vehicles to test AVs on public roads and permission from the California Public Utilities Commission to use these vehicles in a ride-hailing service. Under rules established by the CPUC, Pony.ai cannot charge for rides.

Sense Photonics brings its fancy new flash lidar to market

There’s no shortage of lidar solutions available for autonomous vehicles, drones, and robots — theoretically, anyway. But getting a lidar unit from theory to mass production might be harder than coming up with the theory in the first place. Sense Photonics appears to have made it past that part of the journey, and is now offering its advanced flash lidar for pre-order.

Lidar comes in a variety of form factors, but the spinning type we’ve seen so much of is on its way out and more compact, reliable planar types are on the way in; Luminar is making moves to get ahead, but Sense Photonics isn’t sitting still — and anyway, the two companies have different strengths.

While Luminar and some other companies aim to create a forward-facing lidar that can detect shapes hundreds of feet ahead in a relatively narrow field of view, Sense is going after the short-range, wide-angle side of things. And because they sync up with regular cameras, it’s easy as pie to map depth onto the RGB image:

Sense Photonics makes it easy to match traditional camera views with depth data.

These are lidars that you’d want mounted on the rear or sides of the vehicles, able to cover a wide slice of the surroundings and get accurate detection of things like animals, kids, and bikes quickly and accurately. But I went through all this when they came out of stealth.

The news today is that these units have gone from prototype to production design. The devices have been ruggedized so they can be attached outside of enclosures even in dusty or rainy environments. And performance has been improved, bumping the maximum range in some cases out to over 40 meters, well over what was promised before.

The base price of $2,900 covers a unit with an 80×30 degree field of view, but others cover wider areas, up to 95 by 75 degrees — a large amount by lidar standards, and in higher fidelity than other flash lidars out there. You do give up some other properties in return for the wide view, though. The proprietary tech created by the company lets the lidar’s detector be located elsewhere than the laser emitter, too, which makes designing around the things easier (if not exactly easy).

Obviously, if people are meant to order these online from the company, they are not going to be appearing in next year’s autonomous vehicles. No, this is more for bulk purchases by companies doing serious testing before their self-driving cars go into production.

Whether Sense Photonics’ kit or some other lucky lidar company’s ends up on the robo-fleets of tomorrow is up in the air, but it does help for your product to actually exist. You can find out more about the company’s lidar platform here.

Who will own the future of transportation?

Autonomous vehicles are often painted as a utopian-like technology that will transform parking lots into parks and eliminate traffic fatalities — a number that reached 1.35 million globally in 2018.

Even if, as many predict, autonomous vehicles are deployed en masse, the road to that future promises to be long, chaotic and complex. The emergence of ride-hailing, car-sharing and micromobility hints at some of the speed bumps between today’s modes of transportation and more futuristic means, like AVs and flying cars. Entire industries face disruption in this new mobility world, perhaps none so thoroughly as automotive.

Autonomous-vehicle ubiquity may be decades away, but automakers, startups and tech companies are already clambering to be king of the ‘future of transportation’ hill.

How does a company, city or country “own” this future of transportation? While there’s no clear winner today, companies as well as local and federal governments can take actions and make investments today to make sure they’re not left behind, according to Zoox CEO Aicha Evans and former Michigan Gov. Jennifer Granholm, who spoke about the future of cities on stage this month at Disrupt SF. 

Local = opportunity

Evolution in mobility is occurring at a global scale, but transportation is also very local, Evans said. Because every local transit system is tailored to the geography and the needs of its residents, these unique requirements create opportunities at a local level and encourage partnerships between different companies.

This is no longer just a Silicon Valley versus Detroit story; Europe, China and Singapore have all piled in as well. Instead of one mobility company that will rule them all, Evans and Granholm predict more partnerships between companies, governments and even economic and tech strongholds like Silicon Valley.

We’re already seeing examples of this in the world of autonomous vehicles. For instance, Ford invested $1 billion into AV startup Argo AI in 2017. Two years later, VW Group announced a partnership with Ford that covers a number of areas, including autonomy (via a new investment by VW in Argo AI) and collaboration on development of electric vehicles.

BMW and Daimler, which agreed in 2018 to merge their urban mobility services into a single holding company, announced in February plans to unify these services and sink $1.1 billion into the effort. The two companies are also part of a consortium — along with Audi, Intel, Continental and Bosch — that owns mapping and location data service company HERE.

There are numerous other examples of companies collaborating after concluding that going it alone wasn’t as feasible as they once thought.

The Station: A new self-driving car startup, Inside Tesla’s V10 software, Lilium’s big round

If you haven’t heard, TechCrunch has officially launched a weekly newsletter dedicated to all the ways people and goods move from Point A to Point B — today and in the future — whether it’s by bike, bus, scooter, car, train, truck, flying car, robotaxi or rocket. Heck, maybe even via hyperloop.

Earlier this year, we piloted a weekly transportation newsletter. Now, we’re back with a new name and a format that will be delivered into your inbox every Saturday morning. We’re calling it The Station, your hub of all things transportation. I’m your host, senior transportation reporter Kirsten Korosec.

Portions of the newsletter will be published as an article on the main site after it has been emailed to subscribers (that’s what you’re reading now). To get everything, you have to sign up. And it’s free. To subscribe, go to our newsletters page and click on The Station.

This isn’t a solo effort. Expect analysis and insight from senior reporter Megan Rose Dickey, who has been covering micromobility. TechCrunch reporter Jake Bright will occasionally provide insight into electric motorcycles, racing and the startup scene in Africa. And then of course, there are other TechCrunch staffers who will weigh in from their stations in the U.S., Europe and Asia.

We love the reader feedback. Keep it coming. Email me at [email protected] to share thoughts, opinions or tips or send a direct message to @kirstenkorosec.

A new autonomous vehicle company on the scene

Deeproute.ai is the newest company to receive a permit from the California Department of Motor Vehicles to test autonomous vehicles on public roads.

Here is what we know so far. The Chinese startup just raised $50 million in a pre-Series A funding round led by Fosun RZ Capital, the Beijing-based venture capital arm of Chinese conglomerate Fosun International. The company has research centers in Shenzhen, Beijing and Silicon Valley and is aiming to build a full self-driving stack that can handle Level 4 automation, an SAE designation that means the vehicle can handle all aspects of driving in certain conditions without human intervention.

Deeproute.ai is also a supplier for China’s second-largest automaker Dongfeng Motor, according to TechNode. The startup plans to offer robotaxi services in partnership with Dongfeng Motor for the Military World Games in the city of Wuhan next month.

Snapshot: Tesla Smart Summon

Remember way back in September when Tesla started rolling out its V10 software update? The software release was highly anticipated in large part because it included Smart Summon, an autonomous parking feature that allows owners to use their app to summon their vehicles from a parking space.

We have some insight into the rollout, courtesy of TezLab, a Brooklyn-based startup that developed a free app that’s like a Fitbit for a Tesla vehicle. Tesla owners who download the app can track their efficiency, total trip miles and use it to control certain functions of the vehicle, such as locking and unlocking the doors, and heating and air conditioning. TezLab, which has 20,000 active users and logs more than 1 million events a day, has become a massive repository of Tesla data.

TezLab shared the data set below that shows the ebb and flow of Tesla’s software updates. The X axis shows the date (of every other bar) and a timestamp of midnight. (Because this is a screenshot, you can’t toggle over it to see the time.)

TezLab’s chart of Tesla’s V10 software rollout

This data shows when Tesla started pushing out the V10 software as well as when it held it back. The upshot? Notice the pop on September 27. That’s when the public rollout began in earnest, then dipped, then spiked again on October 3 and then dropped for almost a week. That lull followed a slew of social media postings demonstrating and complaining about the Smart Summon feature, suggesting that Tesla slowed the software release.

A geofencing bright spot

Speaking of Smart Summon, you might have seen the Consumer Reports review of the feature. In short, the consumer advocacy group called it “glitchy” and wondered if it offered any benefits to customers. I spoke to CR and learned a bit more. CR notes that Tesla is clear in its manual about the limitations of this beta product. The organization’s criticism is that people don’t have insight into these limitations when they buy the “Full Self-Driving” feature, which costs thousands of dollars. (CEO Elon Musk just announced the price will go up another $1,000 on November 1.)

One encouraging sign is that CR determined that the Smart Summon feature was able (most of the time) to recognize when it was on a public road. Smart Summon is only supposed to be used in private areas. “This is the first we’ve seen Tesla geofence this technology and that is a bright spot,” CR told me.
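A geofence check of this kind typically reduces to a point-in-polygon test against the boundary of the permitted area. Tesla’s actual implementation is not public, so the following ray-casting sketch is purely illustrative; the function name and the (lon, lat) coordinate convention are assumptions made for the example.

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test.

    point: an (lon, lat) pair; polygon: a list of (lon, lat) vertices.
    Illustrative only -- not Tesla's actual geofencing logic.
    A horizontal ray is cast from the point; an odd number of edge
    crossings means the point lies inside the fence.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the ray from `point` cross the edge between vertex i and i+1?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

With a simple rectangular fence around a parking lot, a point inside the rectangle returns True and a point out on a public road returns False; real deployments would use geodesic-aware libraries rather than flat coordinates.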

Deal of the week

There were plenty of deals in the past week, but the one that stood out — for a variety of reasons — involved German urban air mobility startup Lilium. Editor Ingrid Lunden had the scoop that Lilium has been talking to investors to raise between $400 million and $500 million. The size of this yet-to-be-closed round and who might be investing are what got our attention.

Lilium has already raised more than $100 million in financing from investors, including WeChat owner and Chinese internet giant Tencent, Atomico, which was founded by Skype co-founder Niklas Zennström, and Obvious Ventures, the early-stage VC fund co-founded by Twitter’s Ev Williams. International private banking and asset management group LGT and Freigeist (formerly called e42) are also investors.

TechCrunch is still hunting down details about who might be investing, as well as Lilium’s valuation. (You can always reach out with a tip.)

Lunden was able to ferret out a few important nuggets from sources, including that Tencent is apparently in this latest round and that the startup has been pitching new investors since at least this spring. The round has yet to close. Lilium isn’t the only urban air mobility — aka flying cars — startup that has been shaking the investor trees for money over the past six months. Lilium’s challenge is that it is attempting to raise a bigger round than others in an unproven market.

A little bird

We hear a lot. But we’re not selfish. Let’s share. For the unfamiliar, a little bird is where we pass along insider tips and what we’re hearing or finding from reliable, informed sources in the industry. This isn’t a place for unfounded gossip. Sometimes, like this week, we’re just helping to connect the dots to determine where a company is headed.

Aurora, an autonomous vehicle startup backed by Sequoia Capital and Amazon, published a blog post that lays out its plans to integrate its self-driving stack into multiple vehicle platforms. Those plans now include long-haul trucks.

Self-driving trucks are so very hot right now. Aurora is banking on its recent acquisition of lidar company Blackmore to give it an edge. Aurora has integrated its self-driving stack, known as the “Aurora Driver,” into a Class 8 truck. We hear that Aurora isn’t announcing any partnerships — at least not now — but it’s signaling a plan to push into this market.

Got a tip or overheard something in the world of transportation? Email me at [email protected] to share thoughts, opinions or tips or send a direct message to @kirstenkorosec.

Keep (self) truckin’

Ike, the autonomous trucking startup founded by veterans of Apple, Google and Uber Advanced Technologies Group’s self-driving truck program, has always cast itself as the cautious-we’ve-been-around-the-block-already company.

That hasn’t changed. Last week, Ike released a lengthy safety report and accompanying blog post. It’s beefy. But here are a few of the important takeaways. Unlike most others in the space, Ike is choosing not to test its automated system on public roads, even after a year of development. Ike has a fleet of four Class 8 trucks outfitted with its self-driving stack, as well as a Toyota Prius used for mapping and data collection. The trucks are driven manually (with a second engineer always in the passenger seat) on public roads. The automation system is then tested on a track.

There are strong incentives to demonstrate rapid progress with autonomous vehicle technology, and testing on public roads has been part of that playbook. But Ike’s founders are taking a different path, and we hear that the approach was embraced, not rejected, by investors.

In the next issue of the newsletter, check out snippets from an interview with Randol Aikin, the head of systems engineering at Ike. We dig into the company’s approach, which is based on a methodology developed at MIT called Systems Theoretic Process Analysis (STPA) as the foundation for Ike’s product development.

In other AV truck-related news, Kodiak Robotics just hired Jamie Hoffacker as its head of hardware. Hoffacker came from Lyft’s Level 5 self-driving vehicle initiative and also worked on Google’s Street View vehicles. The company tells me that Hoffacker is key to its aim of building a product that can be manufactured, not just a prototype. Check out Hoffacker’s blog post to get his perspective.

See you next time.

Waymo and Renault to explore autonomous mobility route in Paris region

Waymo and Renault are working with the Paris region to explore the possibility of establishing an autonomous transportation route between Charles de Gaulle airport and La Défense, a neighborhood just outside of Paris city limits that plays host to a large number of businesses and skyscrapers, including a large shopping center. This is part of the deal that Renault and Nissan signed with Waymo earlier this year, to work together on potential autonomous vehicle services in both Japan and France.

This route in particular is being explored as a lead-up project, with the potential to be ready in time for the Paris Olympic Games, which are taking place in the summer of 2024. The goal is to offer a convenient way for people living in the Île-de-France area, where Paris is located, to get around, while also providing additional transportation options for tourists and international visitors. The region is committing €100 million (around $110 million) to developing autonomous vehicle infrastructure in the area to serve this purpose, across a number of different projects.

“France is a recognized global mobility leader, and we look forward to working with the Ile-de-France Region and our partner Groupe Renault to explore deploying the Waymo Driver on the critical business route stretching from Roissy-Charles de Gaulle Airport to La Défense in Paris,” said Waymo’s Adam Frost, Chief Automotive Programs and Partnerships Officer, in an emailed statement.

Defined routes designed to meet a specific need, especially in time for showcase events like the Olympics, seem to be a likely model for how Waymo and others focused on deploying autonomous services will approach pilots, since they offer a perfect blend of demand, regulatory exemption, motivation and city/partner support.

Waymo to customers: “Completely driverless Waymo cars are on the way”

Waymo, the autonomous vehicle business under Alphabet, sent an email to customers of its ride-hailing app telling them that their next trip might not have a human safety driver behind the wheel, according to a copy of the email that was posted on Reddit.

The email, titled “Completely driverless Waymo cars are on the way,” was sent to customers who use its ride-hailing app in the suburbs of Phoenix. It isn’t clear if the email went to members of its early rider program or to its broader Waymo One service.

Both the early rider program and Waymo One service use self-driving Chrysler Pacifica minivans to shuttle Phoenix residents in a geofenced area that covers several suburbs including Chandler and Tempe. All of these “self-driving rides” have a human safety driver behind the wheel.

A driverless ride is what it sounds like. No safety driver behind the wheel, although a Waymo employee would likely be present in the vehicle initially.

Waymo could not be reached for comment. TechCrunch will update the article if the company responds. The email is posted below.

Waymo’s email to customers

Waymo, formerly known as the Google self-driving project, first began testing its technology in 2009 in and around its Mountain View, Calif., headquarters. It’s been a slow and steady roll ever since. The company has expanded its test area to other cities, spun out into its own business and iterated on the vehicle design and the sensors around it.

Waymo opened a testing and operations center in Chandler, Arizona in 2016. Since then, the company has ramped up its testing in Chandler and other Phoenix suburbs, launched an early rider program and slowly crept toward commercial deployment. The early rider program, which required vetted applicants to sign non-disclosure agreements to participate, launched in April 2017.

In December, the company launched Waymo One, a commercial self-driving car service and accompanying app. Waymo One signaled that the company was starting to open up its service. Members of the early rider program were transferred to Waymo One, which allowed them to bring guests and even talk publicly about their rides. More recently, Waymo opened another technical service center in the Phoenix area in preparation to double its capacity and grow its commercial fleet.

While driverless Waymo vehicles have been spotted periodically, they have never been used to shuttle the general public. The introduction of driverless vehicles would be a milestone for the company.

And yet, a number of questions remain. It’s unclear how many of these driverless rides there will be or what constraints Waymo will place on them. These rides will likely operate in simpler, controlled environments for months before expanding to more complex situations.