Robots need ethics now

Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?

This is the core of a question posed in an interesting article in The New Yorker today on Moral Machines. Google’s driver-less cars are now legal in three US states, and the author points out that in a few years’ time driver-less cars may not just be preferred, they may be mandatory: they don’t get distracted, don’t make phone calls, don’t drink. But the interesting thing in that future scenario is this: “That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.”

The tricky thing, of course, is a combination of choosing an ethical system and actually managing to program it in. Whose ethics do you use, and how do you formulate them into a coherent set of rules? Asimov’s three laws are the usual starting point. But even a cursory analysis of the first law, “A robot may not injure a human being or, through inaction, allow a human being to come to harm”, reveals a morass of tortuous ethical conundrums. How much damage counts as harm, and how immediate must the harm be? Would a robot stop me having a drink or eating a doughnut because it’s not good for me and will lead to eventual harm? How do you judge the trade-off in the school bus question?
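To make the difficulty concrete, here is a deliberately naive sketch of what “programming in” the first law might look like. Everything in it is hypothetical: the harm model, the threshold and the function names are my own inventions, and each one quietly embeds an ethical judgement that the law itself never answers.

```python
# A deliberately naive sketch of "programming in" Asimov's first law.
# Everything here (the harm model, the threshold, the names) is hypothetical;
# each choice quietly embeds an ethical judgement the law itself never answers.

HARM_THRESHOLD = 0.1  # how much expected harm counts as "harm"? who decides?

def expected_harm(action, person):
    """Toy harm model: probability of harming this person times its severity."""
    return action["harm_probability"].get(person, 0.0) * action["severity"]

def violates_first_law(action, inaction, people):
    """First law: do not injure a human, nor through inaction allow harm."""
    for person in people:
        if expected_harm(action, person) > HARM_THRESHOLD:
            return True  # direct injury
        if expected_harm(inaction, person) > HARM_THRESHOLD:
            return True  # harm through inaction
    return False

# The doughnut problem: a near-certain chance of mild, eventual harm.
doughnut = {"harm_probability": {"me": 0.9}, "severity": 0.05}
nothing = {"harm_probability": {}, "severity": 0.0}

print(violates_first_law(doughnut, nothing, ["me"]))  # False -- but only
# because of an arbitrary threshold; set it to 0.04 and the robot
# confiscates the doughnut.
```

The code runs, but every hard question has simply been pushed into the constants and the harm model.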

Programming a car to make a decision between the school bus and you is tough. It’s tough enough for humans to make those judgement calls. Just this week there was an awful accident north of Sydney when a car swerved to avoid hitting a dog and ended up causing a crash in which several people lost their lives. In retrospect, hitting the dog would probably have been the better call; but weighing practicalities against ethics in a split second is something we’re simply not that good at on the fly.
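Reduced to arithmetic, the bridge scenario looks deceptively simple. The sketch below invents all of its numbers (the crash probabilities and the moral weight given to each life are placeholders), precisely to show that choosing those numbers is the ethics; the code merely multiplies them.

```python
# A toy expected-harm calculation for the bridge scenario. All the numbers
# are invented placeholders; choosing them is the ethical problem, and the
# code does nothing but multiply them.

def expected_cost(outcomes):
    """Sum of probability * lives at risk * moral weight per life."""
    return sum(p * lives * weight for p, lives, weight in outcomes)

# Option A: keep going, with some chance of harming the forty children.
keep_going = [(0.3, 40, 1.0)]  # (probability of harm, lives at risk, weight)

# Option B: swerve, with a near-certain risk to the one occupant (you).
swerve = [(0.9, 1, 1.0)]

print("keep going:", expected_cost(keep_going))  # 12.0
print("swerve:    ", expected_cost(swerve))      # 0.9

# The arithmetic says swerve, but only because every number above was a
# moral choice: the probabilities, whether the owner's life carries extra
# weight, whether risking forty is worse than certainly harming one.
```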

Having spent all of this year teaching ethics to primary school kids immediately before teaching them programming and robotics, I’m only too aware of the contrast between the disciplines. I keep telling my technology groups that “the computer will do exactly what you tell it to”: if something isn’t working, it’s because of the way you programmed or built it; it’s an entirely black-and-white issue. Ethics, on the other hand, is a field of not just fifty but thousands of subtle shades of grey.

This question gets even more pointed when you start talking not about cars but about robotic soldiers. That doesn’t have to mean some fanciful Robocop or Terminator – just an autonomous drone making its own decision about when to drop its bomb. That’s not hypothetical; it’s with us today. How do you build ethics into a situation like that?

Gary Marcus, in The New Yorker, doesn’t provide an answer to these dilemmas, but he does clearly point to the need for more time, effort and money to be spent on finding those answers.

The New Yorker article is here. There’s also a good Economist article on the same topic here.
