In every one of our motor vehicle accident cases, one of the fundamental questions that must be answered is, “Who was at fault?” Accident analysis would typically point to one of the operators of the vehicles involved. That paradigm has now changed. The recent involvement of both Tesla and Uber self-driving cars in fatal accidents has stimulated open debate about the safety and long-term viability of autonomous vehicles. An obvious question to ask is how well self-driving vehicles compare to those operated by humans. Is the present generation of self-driving vehicles really “safer” to operate than conventionally operated motor vehicles? Is this even the right question to ask?
First, some context. The Insurance Institute for Highway Safety reported that 37,461 people died in motor vehicle accidents in 2016, a 5 percent increase in the overall per capita death rate compared to 2015.
The facts are clear: driving a motor vehicle is dangerous. People are poorly suited to the unusual combination of skills that safe motor vehicle operation now requires: continuous focus on the ever-increasing number of audio and visual inputs to which a driver is now subjected, and close attention to detail combined with repetitive, prolonged and often boring limited physical activity. The result is a growing number of motor vehicle accidents caused by distracted, inattentive or impaired driving. Driving a motor vehicle is unlike many of the activities we have now become used to and too often take for granted, such as continually browsing the internet or using a cell phone. Yet these very activities are now being performed by the operators of motor vehicles, too often leading to horrific accidents. It’s no wonder many dream of a day when we can sit back, relax and let the computer take over. Or do we?
Self-driving cars present a different set of problems. While human drivers have short attention spans, slow reaction times and, sometimes, good situational awareness, artificial intelligence (AI) computers have infinite attention spans, fast reaction times and poor situational awareness. The Uber crash footage depicts the limits of these presumed “advantages” both graphically and tragically.
So, is there a middle ground? Are there technology solutions on the horizon that can improve the safety of human drivers without the dangers of the current generation of autonomous vehicles? A number of solutions already exist to assist humans with the task of driving a car, among them autonomous emergency braking and steering (AEB/AES) and camera-based driver monitoring systems. These and similar technologies are designed to assist drivers and reduce the likelihood of common human errors, and they cost significantly less than complete vehicle autonomy.
We have already entered the next wave of automotive innovation. The safety of motor vehicles has increased steadily over the last 40 years: the rate of crash deaths per 100,000 people in 2016 was about half what it was in 1975, and the number of crash deaths per 100 million miles traveled declined from 3.35 to 1.18. These improvements in safety have largely been due to improvements in motor vehicle design. Unfortunately, with ever more technologically complex vehicles being operated on our highways, the abilities and behavior of motor vehicle operators have not correspondingly improved.
The next wave of innovation will be a revolution in the use of technology designed to assist motor vehicle operators. Many driver assist and warning solutions are already on the market, with implementation and use growing, particularly in commercial vehicles and luxury cars. Soon we will see this new wave of driver assistance technology become commonplace throughout the automotive industry.
Along with these innovations will come a whole new set of legal and social questions and issues. With assisted and automated driving on the rise, who will be to blame when accidents occur? How will motor vehicle accidents be investigated? By the police or by a computer geek? Who will be to blame if the cause of an accident is found to be a software bug or hardware hiccup? The car manufacturer, the software company or the vehicle dealership that failed to update the software? Won’t every at-fault operator of an autonomously driven motor vehicle now blame some glitch in the “driverless” operation of the vehicle for causing the accident? When and under what circumstances will the operator of an autonomously driven motor vehicle be expected to reassume direct control and operation of the vehicle? These are just a few of the questions that will arise in the legal evaluation of every motor vehicle accident case involving vehicles with driver assist technologies.