Why L5 Autonomous Vehicles Make No Sense
A lot of people in my Twitter feed seem really excited about the idea of owning a car that can drive itself anywhere, anytime. My fellow Tesla drivers call this Full Self-Driving. I call this funny, because I’ve spent pretty much my whole adult life trying to drive anywhere, anytime I feel like it, and I still haven’t quite figured out how to do it.
Certainly not in one car, and definitely not safely.
I’ve driven all over the world, in big cars, small cars, new cars, old cars, race cars, rally cars, and trucks. I almost drowned fording a Canadian river in a Land Rover, sank a dune buggy into the surf in Thailand, and drove nonstop from New York to LA through a snowstorm in a wooden-framed Morgan 3-wheeler.
The #1 lesson I’ve learned from all of this gonzo driving is that smart people can make some very dumb decisions behind the wheel. Human nature is the art of doing things science says one shouldn’t. It may start with driving games and some harmless off-roading in New Mexico, but it ends with hurtling 150 mph through the driving rain to set a Cannonball Run record, or tumbling down an Icelandic volcano strapped into a rented 4×4.
This was all boatloads of fun, but it showed a profound lack of common sense.
The technical term for a vehicle that can drive itself anywhere, anytime is Level 5, or just L5. But the colloquial term for humans who drive like this is “fool.” The best human drivers who ever lived aren’t L5, because they know — whether it’s experience, instinct, a sixth sense, or just the ability to read a map — that there are times and places you just shouldn’t drive.
Why? Because risk can only be reasonably measured up to a point. Once you layer in unpredictable conditions like severe weather, or go off-road, you’re no longer playing chess; you’re playing roulette. Have you ever driven through a tornado? I have. Once. What’s “safe” varies from driver to driver. Some people can safely drive places others can’t. The same human driver might be safe one day and dangerous the next. We get tired. Our eyes can deceive us. There are things we can’t see. There are things we can’t react to.
And yet some of us choose to drive in places we shouldn’t, in conditions we can’t handle. I know because I used to be one of them.
One key benefit of developing artificial intelligence is to help us avoid the consequences of the suboptimal choices some of us make every day. That means teaching an AI-driven car to know where it can drive us safely — and where it can’t. That’s what I call common sense, and not everyone has it. Ask any group of drivers to look at the same road, or trail, or rocks. Some of them will say NO WAY—
But there will always be that person who says WATCH THIS.
If I’m going to send an autonomous vehicle to pick up my daughter from school, I don’t want it to say WATCH THIS. Before I let my daughter nap in the back of one of these things, I need to know that it knows when to say NO WAY. I want to know that it will stick to roads where it can and will drive more safely than I will. And not just me, but also people like the ride-hail driver who kept stabbing the brakes last week and made me spill coffee on my white linen pants.
Avoiding the mistakes we often make means setting boundaries. A lot of people don’t like to hear about boundaries. I’m one of them, but I also know that some boundaries are good, like the ones I give my 2-year-old. The hard boundaries we give kids eventually become the soft boundaries adults call common sense.
The Best Technology Mirrors Our Common Sense
What do you call self-driving technology that mirrors our common sense? SAE calls it Level 4, or just L4. An L4-capable vehicle can drive itself within certain boundaries and conditions. That could be just a suburb in good weather, or a city in the rain, or many cities…and the highways connecting them. As L4 technology learns, L4 vehicles can drive in more places, in harder conditions. A lot like people. Eventually L4 vehicles will work in most of the places most of us are likely to need a ride, or a package delivered.
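What does that boundary look like in practice? Here’s a minimal sketch, in Python, of the kind of check an L4 system runs before accepting a trip. Everything in it is hypothetical: the OperationalDesignDomain class, the approved regions, and the visibility threshold are illustrations of the concept, not any real company’s driving stack.

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    """A hypothetical snapshot of where and when the vehicle is asked to drive."""
    region: str          # e.g. "miami_downtown"
    weather: str         # e.g. "clear", "rain", "snow"
    visibility_m: float  # estimated visibility in meters

class OperationalDesignDomain:
    """Illustrative L4 boundary: the places and conditions the system trusts itself in.

    As the technology matures, entries get added and the boundary grows,
    like a cell coverage map, but the vehicle still refuses anything outside it.
    """
    def __init__(self) -> None:
        self.approved_regions = {"miami_downtown", "phoenix_suburbs"}
        self.approved_weather = {"clear", "rain"}
        self.min_visibility_m = 100.0

    def can_drive(self, c: Conditions) -> bool:
        """The common-sense check: inside the boundary, go; outside it, NO WAY."""
        return (
            c.region in self.approved_regions
            and c.weather in self.approved_weather
            and c.visibility_m >= self.min_visibility_m
        )

odd = OperationalDesignDomain()
print(odd.can_drive(Conditions("miami_downtown", "rain", 800.0)))    # True: inside the boundary
print(odd.can_drive(Conditions("icelandic_volcano", "snow", 40.0)))  # False: NO WAY
```

The point of the sketch is the refusal path: an L4 system treats “outside the boundary” as a first-class answer, while an L5 system, by definition, has no such answer to give.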
Think of what cell phone coverage maps looked like in 1995 versus today.
Cell phones and cars that can drive themselves are technically very different, but they’re exactly the same in the way we use them. They don’t need to work everywhere, anytime – although we’d like them to. They just need to work reliably everywhere most of us go. If we go somewhere service is unreliable, we plan ahead with an alternative, like a satellite phone. It took years for cell phones to get there. But they did.
But the funny thing about the people excited about L5 cars is the idea that nothing short of anywhere, anytime is worth doing. If we applied that logic to cell phones, we still wouldn’t have them. Just because my iPhone doesn’t get a signal in parts of Yellowstone National Park, why shouldn’t I be able to use one in Dallas or Miami?
Oh wait, I can. Where there is demand, supply follows. A lot of people live in cities. Yellowstone? Not so much, which is why only half the park has decent connectivity.
The Problem With L5 Vehicles
Which brings us to the other funny thing about the idea of L5 capability. Even if you could teach a car to drive itself safely anywhere, anytime, being smart is not enough. Vehicles aren’t just software, they’re hardware too. That was the #2 lesson of my adventures: the world is very big and diverse, and so are the vehicles we had to build to reach its farthest corners.
Like sporks and houseboats, vehicles obey a simple rule: the more things a thing is asked to do, the worse it does all of them. Why do race cars look different from trucks? Because each is designed to do different things really, really well. They all have wheels, but a race car isn’t going to get far off-road, and a truck isn’t going to be the best on a track, or even at getting around lower Manhattan.
If you want one vehicle to reliably go anywhere, anytime, choices must be made. Sporks mean compromises. If you had to pick one vehicle compromised enough to get as close to L5 as possible, you’d end up with a spork on wheels.
These nearly-L5 vehicles have obvious drawbacks. How would you park one downtown? I don’t have garage space for one in my building. The bigger the spork, the harder it is to eat with. Visibility, speed, ride quality, fuel economy, and comfort are all compromised. What if I have to change a tire, or a tread? Owning one seems like a big and unnecessary hassle.
I want more convenience in my life, not less.
What kind of person wants to drive everywhere in all conditions anyway? No one except idiots like me, who do it for fun. I get that there are people who might absolutely need to, like first responders, or the military, or people in the movies who need to get somewhere to do something before a clock runs out and something bad happens. Like that time Paul Walker had to save the dogs at the South Pole, or the end of every Mission: Impossible movie.
That’s a lot of something somethings, but most of us don’t live like that.
If perfect is the enemy of good, why try to solve a problem that doesn’t exist for 99% of us? If L5 makes no sense, L4 makes perfect sense, because a car that knows where it can and can’t safely go is the definition of common sense. Someday L4 vehicles will blanket the parts of the world we need them in, the same way cell phones do.
And for those who absolutely want to drive where AI can’t go, there will always be steering wheels.
(Reprinted with permission - original post @ www.groundtruthautonomy.com)
Alex Roy is Director of Special Operations at Argo.AI, host of the No Parking & Autonocast podcasts, editor-at-large at The Drive, founder of the Human Driving Association, author of The Driver, and producer of APEX: The Secret Race Across America. He held the Cannonball Run record from 2006 to 2013. You can follow him on Twitter and Instagram.
Comments

Sr. Software Engineer, Advanced Development at Roku Inc. (2y):
I think these are straw man arguments. I’m pretty sure the SAE definition of L5 as “a vehicle can drive itself everywhere in all conditions” means “everywhere in all conditions where a normal human driver - not a race driver, not a stunt man - would drive.” E.g., when the road is snowed over and chains are required to drive further, nobody expects a Level 5 autonomous car without chains to keep going. L4 requires geofencing, and an L4 vehicle is allowed to just stop whenever it doesn’t know what to do next. That limits it to taxi and maybe ride-hailing services, because who would want to own a car like that?
Head of Business Development at Osbourne Purdie | Sales Strategy | Corporate Communications (2y):
A lot more nuanced thinking like this really has the potential to drive the conversation and technology forward in a useful way. The all-or-nothing tropes on the subject seem to draw battle lines instead of meaningful discussion.
Senior Product Marketing Manager | Marketing Strategy | Product Marketing Go-to-Market (GTM) | Connected Technology (Automotive, Cloud, SaaS) (2y):
Well said!