Why Putting the Brakes on Self-Driving Cars Is Reckless

Keeping self-driving cars off the road could be more dangerous than giving the robots a chance.

Brian Buntz

September 2, 2016


Car accidents are among the leading causes of death, killing more than 30,000 people in the United States and more than a million worldwide each year. The vast majority of these accidents, 94% according to NHTSA estimates, are caused by driver error and carelessness. Among the riskiest behaviors are speeding, road rage, and driving while intoxicated, drowsy, or distracted.

Still, the prospect of self-driving cars is threatening for many. Three-quarters of drivers are afraid to ride in self-driving vehicles, according to a survey from AAA released earlier this year.

The first documented fatality involving a self-driving car has created a backlash as well, although it seems to have done little to slow investment in autonomous driving technology.

“It is newsworthy that there was a fatality involving Tesla’s Autopilot technology,” says Derek Kerton, founder of the Kerton Group and Autotech Council in Silicon Valley. “But put it into perspective. People die every day in Fords, Chevys, Hondas, and Toyotas as well, and they die with cruise control, without cruise control. We die every which way in cars. The question should be: ‘Are the Tesla fatalities out of proportion?’”

Thus far, Tesla says its customers’ cars have logged about 200 million miles of autonomous driving. The data indicate that Teslas driving with Autopilot are slightly safer than the average American car on a per-mile basis. It is important to note, however, that far more data are needed before we can reliably conclude whether a driver with Autopilot is safer than one without it.
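As a back-of-the-envelope illustration of that per-mile comparison, the quick calculation below uses the article’s figure of roughly 200 million Autopilot miles with one documented fatality against an assumed US fleet average of roughly 1.1 fatalities per 100 million vehicle miles (NHTSA’s approximate rate for 2015). The numbers are inputs for illustration, not a statistical verdict.

```python
# Back-of-the-envelope fatality-rate comparison (illustrative only).
autopilot_miles = 200e6     # article's figure for Autopilot miles logged
autopilot_fatalities = 1    # documented fatalities to date
us_rate_per_100m = 1.1      # assumed US average, fatalities per 100M vehicle miles

autopilot_rate_per_100m = autopilot_fatalities / (autopilot_miles / 100e6)

print(f"Autopilot: {autopilot_rate_per_100m:.2f} fatalities per 100M miles")
print(f"US fleet:  {us_rate_per_100m:.2f} fatalities per 100M miles")
# With a single fatality in the sample, the uncertainty is enormous,
# which is exactly why more data are needed before drawing conclusions.
```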

Computers Can Learn Quickly

Self-driving cars could have significant advantages over human drivers. First, they can learn from wrecks and near misses, and share what they learn with other cars. “When a Tesla drives into a truck, it will learn from that experience. In general, the car maker will fix and update the software by next month,” Kerton says. “And if it requires better hardware, they will fix it for the next version of the car that comes out.” While this type of information likely will not be fully shared among automakers, there is room for one carmaker to learn from another’s well-known accidents, as well as from police records, NHTSA reports, and so forth, Kerton says.
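To make the fleet-learning pattern concrete, here is a minimal sketch of the loop Kerton describes: pool what individual cars encounter, produce an improved model, and push it back to every car over the air. It is a toy illustration under assumed names, not any carmaker’s actual pipeline.

```python
# Toy fleet-learning loop; every name here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Car:
    model_version: int = 1
    incidents: list = field(default_factory=list)  # wrecks and near misses

def fleet_learning_cycle(fleet):
    """Pool fleet incidents, 'retrain', and push an over-the-air update."""
    pooled = [event for car in fleet for event in car.incidents]
    if not pooled:
        return  # nothing new to learn this cycle
    new_version = max(car.model_version for car in fleet) + 1  # stand-in for retraining
    for car in fleet:
        car.model_version = new_version  # one car's mistake updates the whole fleet
        car.incidents.clear()

fleet = [Car(), Car(incidents=["near miss: truck crossing"])]
fleet_learning_cycle(fleet)
print([car.model_version for car in fleet])  # -> [2, 2]
```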

Computers begin by driving with whatever skills humans impart to them, but they can learn quickly and will get steadily better, with few upper limits. Human driving ability, by contrast, plateaued decades ago. In fact, most safety improvements in recent decades have come from technology such as anti-lock brakes. And while data from several studies suggest that public service announcements moderately reduce risky behaviors such as drunk driving, those behaviors remain stubborn problems, and new ones such as texting while driving have emerged. Self-driving cars, meanwhile, are only going to improve. “We are at the stage where any further delays to self-driving cars are just killing more people,” Kerton says. “If the machines are close to parity with us at driving now, then the sooner we let them start driving in the real world, the sooner we begin this fleet learning, and the sooner we reap the benefits.”

That is an opinion shared by many in Silicon Valley. Google has stated that self-driving cars are already safer than human drivers. And Elon Musk is so confident in the capabilities of self-driving cars that he thinks it will eventually become illegal for humans to drive. “It’s too dangerous,” Musk said, according to The Verge. “You can’t have a person driving a two-ton death machine.”

Critics contend that Tesla’s Autopilot, a Level 2 self-driving capability, is beta-level software and therefore unsafe. But the capability isn’t really new. It is an extension of Level 1 autonomy, which has a long history: cruise control, electronic stability control, and anti-lock brakes have been in automobiles for decades and have long since become standard features. Even autonomous driving itself was not thrust onto the public without years of R&D and trials.

Tesla extensively tested its Autopilot feature before its October 2015 debut. In a statement, Tesla notes that it began installing the required hardware in Model S cars in October 2014: radar, a forward-looking camera, ultrasonic sensors, and digitally controlled electric-assist braking, acceleration, and steering. In that first year, Tesla evaluated the decisions its autonomous software would have made and compared them with what the human driving the car actually did. “As that alpha test year went on, their machines improved and they were eventually mimicking in software what the more reliable human drivers were doing,” Kerton says. “If you want to visualize it, think of baby Maggie Simpson mimicking Marge’s driving, move for move, in the opening sequence of The Simpsons.” Tesla waited until the car was comparable to a human driver before moving from this alpha test to the public beta.
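What Kerton describes is essentially what engineers call shadow-mode testing: the software computes what it would do while the human actually drives, and the two are compared frame by frame. The sketch below is a guess at the general pattern under hypothetical names, not Tesla’s implementation.

```python
# Hypothetical shadow-mode harness: the autonomy stack proposes controls
# while the human drives; disagreements are logged for later review.
from dataclasses import dataclass

@dataclass
class Controls:
    steering: float  # steering-wheel angle, radians
    braking: float   # braking effort, 0 (none) to 1 (full)

def shadow_mode(frames, human_inputs, propose, tolerance=0.1):
    """Compare proposed controls with actual human inputs, frame by frame."""
    disagreements = []
    for frame, human in zip(frames, human_inputs):
        planned = propose(frame)  # what the software *would* have done
        if (abs(planned.steering - human.steering) > tolerance
                or abs(planned.braking - human.braking) > tolerance):
            disagreements.append((frame, planned, human))
    return disagreements  # candidate cases for retraining and review
```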

Who Are the Guinea Pigs?

After word broke in June that a self-driving Tesla was involved in a fatal crash, Consumer Reports concluded that the company was using its Autopilot customers as “guinea pigs.” Bloomberg noted that the wreck spurred widespread criticism of Tesla’s on-the-road testing of the technology.

Despite this backlash, Kerton asks people to consider an unconventional argument. “The people who want to put the brakes on autonomous driving are saying that the promoters of self-driving cars are reckless,” he says. “But, as of 2016, the opposite is true. Blocking autonomous driving is actively thoughtless and reckless. People need to think this through, and do the cruel math on how we preserve the most lives over the next decades, not just this year.”

Kerton says most futurists, engineers, and AI experts agree that autonomous driving promises to someday reduce road fatalities, injuries, accidents, and costs. Still, opinions are divided on when self-driving cars will be better than human drivers.

But that doesn’t mean that testing self-driving cars now is a reckless experiment. The technology has been in development for many years and was tested extensively in mock towns cleared of actual civilians. As noted, Tesla tested the technology for a year before activating the software. But constraining self-driving cars to fake villages that can never exactly replicate real driving conditions could limit how quickly autonomous systems improve, delaying the day when they begin saving lives. And while we wait, some 3,500 people worldwide are dying each day in car accidents. Kerton says the real danger is allowing the perfect to be the enemy of the good.

Critics of self-driving cars insist on sticking with the status quo of humans ruling the roads. Kerton counters: “It’s not like people driving cars is some natural order, and self-driving cars are experimenting to change a winning formula. In fact, there are two experiments: Experiment A is the test where humans drive cars. We have run experiment A for over 100 years, and we have seen how fatal it is. Experiment B is: Now that they’re ready, let’s give the robots a shot and begin the machine-learning that will offer us a downward trend in fatalities.”

About the Author

Brian Buntz

Brian is a veteran journalist with more than ten years’ experience covering an array of technologies including the Internet of Things, 3-D printing, and cybersecurity. Before coming to Penton and later Informa, he served as the editor-in-chief of UBM’s Qmed, where he overhauled the brand’s news coverage and helped grow the site’s traffic dramatically. He previously held managing editor roles on the company’s medical device technology publications, including European Medical Device Technology (EMDT) and Medical Device & Diagnostics Industry (MD+DI), and served as editor-in-chief of Medical Product Manufacturing News (MPMN).

At UBM, Brian also worked closely with the company’s events group on speaker selection and direction and played an important role in cementing famed futurist Ray Kurzweil as a keynote speaker at the 2016 Medical Design & Manufacturing West event in Anaheim. An article of his was also prominently featured on kurzweilai.net, a website dedicated to Kurzweil’s ideas.

Multilingual, Brian has an M.A. degree in German from the University of Oklahoma.
