Self-driving cars – don’t judge based on one or two accidents

The headline news this morning was that a self-driving car had hit and killed a pedestrian, and it was accompanied by the usual punditry decrying the dangers of robots.

While the incident was tragic for those involved, it was no less tragic than the roughly 6,000 other annual pedestrian deaths in the USA that don’t involve self-driving cars. We are never going to move forward if we hold self-driving cars to a standard absurdly higher than the one we hold human drivers to. No matter how good the controlling systems in a self-driving car are, there are going to be accidents, and some of those will involve fatalities. Simple logic tells us, though, that humans are very often poor drivers and that, overall, self-driving cars are going to be an improvement in safety.

Self-driving cars won’t get sleepy. They won’t get distracted by the phone. They won’t get drunk. They can see better in poor weather. They don’t get old. The list goes on; and, really, as long as it is properly implemented, a self-driving car can only be an improvement on the millions of human drivers we put in charge of tons of metal hurtling about at high speed.

There is, of course, a fascinating ethical problem with hard-wiring life-and-death choices into a computer. Rather than every individual human deciding where to draw the ethical line between their own and others’ safety, that decision will have to be programmed into the car’s operating system. Again, logically, that’s not a bad thing – for a start, the rules are explicit and can be questioned and agreed upon. They would also be applied consistently. Logically, if we have the computing and sensing power, coded decisions will be better overall. But logic falls away in the face of an exciting headline and our atavistic distrust of handing that decision to a machine, even though the machine is operating within parameters specified by other humans.

Perhaps more importantly, when we talk about the ethical decisions that have to be made, we often concentrate on the moment of an accident: Will the car decide to hit the pedestrian to avoid hurting the driver? Which of two undesirable outcomes will be pursued? But in doing that we ignore the over-arching ethical decisions human drivers make daily when they put their own convenience above others’ safety. When they speed. When they drink and drive. When they sit in no-stopping zones outside schools and block lines of sight because getting their own child to soccer outweighs other children’s safety. A self-driving car is not going to make those dodgy ethical choices that lead to countless accidents.

I’m not arguing that self-driving cars are, or will be, perfect. That’s simply not possible given the complexity of what we’re talking about. Driving a car is an inherently dangerous undertaking and we let people do it with remarkably little oversight or control. I am arguing that we shouldn’t try to hold self-driving cars to an impossibly high standard of never having an accident.

Ultimately, the litmus test for self-driving cars in this context cannot be whether one or two or ten accidents occur. In the USA (where self-driving cars are most likely to take off) there are a horrifying 37,000 road fatalities annually. If introducing self-driving cars could bring that number down by a demonstrable percentage (even a 10% reduction would mean roughly 3,700 fewer deaths every year), then self-driving cars are worth pursuing. That’s how we ought to look at the introduction of self-driving cars: not by dissecting individual accidents, but by looking at the bigger picture and seeing whether the technology can do good.
