How safe should we expect self-driving cars to be?
Are automated and autonomous automobiles ready to be trusted?
The safety of self-driving cars is back in the spotlight after two recent accidents involving Uber and Tesla vehicles. In the first instance, Elaine Herzberg became the first known pedestrian to be killed by an ‘autonomous vehicle’ when she was struck by an Uber Volvo XC90 on March 18. In the second, an investigation into a Tesla Model X that crashed, killing the driver, found that the vehicle had been in Autopilot mode at the time of the crash. Both companies have denied that their vehicles were to blame, but the incidents have nonetheless raised concern over the safety of self-driving cars.
In February, our editor-at-large wrote about the year ahead for self-driving vehicles, with greater support and favourable regulation expected from governments in the UK and US. However, there is still unease among members of the public, shown by a recent poll from Advocates for Highway & Auto Safety in which 64% of respondents said they were concerned about sharing the road with driverless cars. The events of the last month will not alleviate those fears.
The Uber crash, and to a lesser extent Tesla’s, raises an important question for the self-driving industry – how safe is safe enough? It is generally accepted that self-driving cars will eventually be safer than human drivers, as computer-controlled vehicles behave more predictably, but the route to that point is unclear. As companies like Uber and Tesla test their cars, especially on public roads, there will inevitably be more crashes like last month’s as problems with the technology become apparent.
However, some in the self-driving car industry are keen to argue that the technology should be celebrated for reducing road deaths overall, rather than criticised when it causes them. Cars kill pedestrians and drivers every day. In the United States, the National Safety Council says that there are 1.25 deaths per 100 million miles driven, and the number of miles driven every year is in the trillions. Statistics from the National Highway Traffic Safety Administration show that more than 37,000 people died in car accidents in the US in 2016.
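Those two figures broadly agree with each other, which a quick back-of-envelope check makes clear (the 3 trillion annual miles used here is an assumed round figure, not a number from either agency):

```python
# Consistency check on the two US road-safety statistics quoted above,
# assuming roughly 3 trillion vehicle-miles driven per year.
deaths_per_100m_miles = 1.25            # National Safety Council rate
annual_miles = 3_000_000_000_000        # assumed: ~3 trillion miles/year
implied_deaths = deaths_per_100m_miles * annual_miles / 100_000_000
print(round(implied_deaths))            # 37500 – close to the 37,000+ NHTSA total
```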
Tesla argue that vehicles using their Autopilot system are 3.7 times less likely to be involved in a fatal crash than the national average for regular cars. In a statement concerning the Model X crash, Tesla said, “The consequences of the public not using Autopilot, because of an inaccurate belief that it is less safe, would be extremely severe.”
“There are about 1.25 million automotive deaths worldwide. If the current safety level of a Tesla vehicle were to be applied, it would mean about 900,000 lives saved per year.” Similarly, Tesla have stressed that the severity of the Model X crash was partly due to the driver ignoring warnings to put his hands on the wheel and partly due to a highway safety barrier that had previously been damaged and not replaced. The company went as far as saying “the only way for this accident to have occurred is if Mr. Huang [the driver] was not paying attention to the road, despite the car providing multiple warnings to do so.”
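Tesla’s 900,000 figure follows from simple arithmetic, on the assumption that “3.7 times less likely” means the fatal-crash rate falls to 1/3.7 of the average:

```python
# Back-of-envelope reconstruction of Tesla's "about 900,000 lives saved"
# claim, assuming the death rate scales down by a factor of 3.7.
worldwide_deaths = 1_250_000            # annual automotive deaths worldwide
risk_ratio = 1 / 3.7                    # assumed: Autopilot rate vs average
lives_saved = worldwide_deaths * (1 - risk_ratio)
print(round(lives_saved))               # 912162, i.e. "about 900,000"
```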
Tesla have since been strongly criticised by the National Transportation Safety Board, which is investigating the crash, for releasing ‘investigative information’ in their statement, and the company has been removed from the official investigation.
With the Uber case, the problem is different because the crash in Tempe, Arizona probably could not have been prevented by a human driver. Even Sylvia Moir, Tempe’s Police Chief, said that it would have been difficult to prevent the crash in either mode (self-driven or human-driven) because of how suddenly Herzberg appeared. However, some in the industry have voiced frustration that this was exactly the sort of crash that self-driving vehicles should be able to prevent.
By using sensors such as lidar and radar, rather than relying on cameras alone, self-driving cars should be able to pick up on things the human eye would usually miss, such as pedestrians emerging from dark, shadowy areas. But there are concerns that sensors may have blind spots that developers have not yet discovered, or that they will not be able to detect every possible hazard. Time will tell whether this was the case in the Uber Volvo XC90 crash, but right now some people are worried that a preventable crash was not avoided.
There is little doubt among researchers and developers that self-driving cars will make our roads safer overall, and much of the concern surrounding these accidents stems from the high expectations the industry has set for itself. Merely matching the car industry’s safety record would be a low bar, given how many people cars kill, but developers want to see revolutionary change. We are used to the idea that human drivers cause accidents, and the Tesla and Uber accidents would not have been major news if a regular driver had been at fault. But when a machine makes those mistakes, especially a machine designed not to, it is uncomfortable. We accept that humans are not perfectly suited to driving cars, and yet we expect, or at least hope, that self-driving cars will be.
The road to mainstream self-driving cars will probably see many more crashes like those of the last month, and our expectations of what this technology can do need recalibrating. As Tesla have said, if self-driving cars can reduce deaths, rather than eliminate them, they should be praised. But it will be a difficult road ahead. Designing these cars is not easy, owing to the huge number of variables that human drivers take for granted; something as simple as the weather can cause a massive headache for software developers. If we want a world with self-driving cars, we will have to accept that mistakes will be made along the way.