Cleantech Market Intelligence
Automakers Need to Start Being More Candid About the Limits of Autonomous Technology
When was the last time you actually read an end-user license agreement or terms of service before clicking “Accept” to install a piece of software or join the latest social network? Odds are that unless you are a lawyer, the answer is never. The technology companies that make these products would probably like it to stay that way. In the world of the self-driving car, however, that is not an acceptable policy. The tragic death of a Tesla Model S driver in Florida highlights the need for all automakers to be more open and transparent about the limitations of autonomous technology.
Revolutionary (When It Works)
It seems that barely a day goes by without a breathless press release from an automaker, supplier, technology company, or Silicon Valley startup about the amazing progress being made on self-driving technology. You can go out today and purchase vehicles from a number of brands that promise at least partial autonomous capability, and full autonomy is being targeted by the end of this decade. While Tesla Autopilot, Volvo Pilot Assist, and other similar systems seem truly magical when they work as advertised, there are far more scenarios in which they do not function at all.
GM CEO Mary Barra stood on the stage at the 2014 ITS World Congress in Detroit and promised a Cadillac with hands-off Super Cruise capability in 2016. Unfortunately, we have not seen Tesla CEO Elon Musk stand on a stage and tell people not to use Autopilot in the city, on curving rural roads, or in the snow. I’ve experienced prototype systems from Toyota and Honda and driven production systems from Tesla and Volvo, and when they work, they are incredibly impressive.
I am an engineer by training and a technology analyst by trade, and I have a much better understanding than the average consumer of how these systems work. As a result, I can never truly relax with them because I’m always on the lookout for failure modes, and they are numerous. Unless told very explicitly, the average consumer will be so excited by the prospect of turning over control to a computer that they will pay no attention to the warning, shown before Autopilot can be enabled, that the system is very much in beta. Volvo doesn’t even give that warning before allowing Pilot Assist to be engaged.
Tesla is fortunate that many of its existing customers are early adopters who expect technology to be imperfect, although most of them probably don’t expect to be at risk of injury when it fails. When the Model 3 arrives and mainstream consumers try Autopilot and discover its limitations, they aren’t likely to be as forgiving, and the same is true for every other automaker offering autonomous features. Navigant Research’s Autonomous Vehicles report projects more than 4 million autonomous-capable vehicles to be sold by 2025. Those customers need to know what the systems can do—and, more importantly, what they cannot.
We don’t yet know all the details of what happened in the tragic crash in Florida. Similar accidents in which one vehicle crosses a highway divider happen all the time, and fatalities occur when humans are in control. What we do know is that we are far from a time when we can simply sit back, relax, and let the computer do the driving. Every company in this space needs to be far more upfront with consumers about what this technology can do, or risk poisoning the market.