How will self-driving cars value life in different countries?

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation.”

When you think about it in more than a passing fashion you quickly realise that people in different countries drive differently. Anyone who’s driven on an Italian road or through a French city can testify to that. The implications of those differences become starker when you think about self-driving cars, and especially when you contemplate the ethics of the decision points that have to be programmed into the cars.

An interesting study has looked at the ethical differences between countries and how they resolve into differences in how people value individual lives. The researchers essentially used the trolley problem – a car hurtling towards a pedestrian crossing has to choose whom to hit – presented on a country-by-country basis.

Understanding these differences is crucial. On a facile level you’d just ask some ethicists what they think a car, or a human driver for that matter, should do and then code that into an autonomous vehicle. But that approach collapses under its own weight if different countries and different groups don’t agree on the same ethical principles: if they don’t value life and limb in the same way, you can’t just program a car with a universal set of values. And if the values do not reflect local ideas of morality, the cars will be disproportionately blamed, or the use of autonomous vehicles rejected outright.

In other words, even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality.

So, how do these differences play out? It turns out there are stark, and in some cases disturbing, ethical variations.

For example, countries with more individualistic cultures are more likely to put a higher value on young lives – so if a car is hurtling towards the pedestrian crossing it will prioritise the baby in the pram over the old person. That puts France at one end of the scale (individualistic, save the child) and China and Taiwan at the other. Australia sits towards the middle of the scale, though on the individualistic side.

Lest this all seem rather abstract, there was a clear difference in the value placed on women’s lives: “In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women.” So in countries where women tend to die in childbirth or female infanticide is common, respondents placed a lower value on women’s lives. That sort of thing would lead to a material difference in the way you’d program an autonomous vehicle. It also leads to uncomfortable questions about who makes that decision explicit.

The choice between prioritising passengers or pedestrians was also interesting. Japan sat solidly at one end of the scale on this one, prioritising pedestrians, while China was locked in at the other end, prioritising passengers. Again, Australia sits roughly in the middle. Part of the underlying reasoning behind the choice between passengers and pedestrians is to do with status: in some countries “people found it okay to a significant degree to spare higher status over lower status”.

These sorts of differences gave rise to an interesting point for the study authors: “In fact, in some cases, the authors felt that technologists and policymakers should override the collective public opinion.” Of course these value judgements are implicit in human choices today, but there’s an interesting difference when you think about making implicit value judgements explicit in code. Some have argued that you just don’t do it – you code for random decisions rather than explicit values – but it’s hard to see that working when, in any given context, a random choice is going to be ‘wrong’ fifty percent of the time. A rough sketch of the contrast is below.
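To make that contrast concrete, here is a minimal sketch in Python. Everything in it – the region codes, the groups, the weights – is made up for illustration; none of it comes from the study or from any real vehicle software. It simply shows the difference between a policy that refuses to encode values (random choice) and one that encodes an explicit, region-specific preference.

```python
import random

# Hypothetical, illustrative preference weights per region.
# These numbers are invented; they are not taken from the study.
REGION_WEIGHTS = {
    "FR": {"child": 0.8, "elderly": 0.2},  # more individualistic: favour the young
    "TW": {"child": 0.5, "elderly": 0.5},  # weaker preference for the young
}

def choose_random(groups):
    """The 'no explicit values' approach: pick a group to spare at random."""
    return random.choice(groups)

def choose_weighted(region, groups):
    """Spare the group with the higher (hypothetical) regional weight."""
    weights = REGION_WEIGHTS[region]
    return max(groups, key=lambda g: weights[g])

# A random policy avoids encoding a value judgement, but in any single
# scenario it will still produce the locally 'wrong' answer about half the time.
print(choose_random(["child", "elderly"]))
print(choose_weighted("FR", ["child", "elderly"]))
```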

The study is imperfect, as the authors recognise: “We used the trolley problem because it’s a very good way to collect this data, but we hope the discussion of ethics don’t stay within that theme,” he said. “The discussion should move to risk analysis – about who is at more risk or less risk – instead of saying who’s going to die or not, and also about how bias is happening.” In other words, in the real world the decisions being made are not black and white, with the choice solely between one of two groups dying; they are all about shades of grey. That limitation doesn’t change the utility of understanding the complexity involved here.
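To see what that shift to risk analysis might look like in code, here is another rough sketch, again with entirely invented manoeuvres, probabilities and severities: instead of deciding outright who lives or dies, the vehicle ranks candidate manoeuvres by expected harm across everyone affected.

```python
# Illustrative only: a risk-analysis framing chooses among manoeuvres by
# expected harm, rather than deciding who dies. All numbers are made up.
manoeuvres = {
    # manoeuvre: list of (party, probability of injury, severity of injury)
    "brake_straight": [("pedestrian", 0.30, 0.9), ("passenger", 0.05, 0.4)],
    "swerve_left":    [("pedestrian", 0.05, 0.9), ("passenger", 0.25, 0.6)],
}

def expected_harm(outcomes):
    """Sum of (probability of injury x severity) over everyone affected."""
    return sum(p * severity for _, p, severity in outcomes)

# Pick the manoeuvre with the lowest expected harm: shades of grey,
# rather than a binary choice between two groups.
best = min(manoeuvres, key=lambda m: expected_harm(manoeuvres[m]))
print(best, expected_harm(manoeuvres[best]))
```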

It is fascinating to contemplate that the need to make these moral and ethical judgements explicit for coding might lead to significantly better outcomes around the world (the problem, of course, is that I mean ‘better’ in my terms, and not everyone might agree that autonomous cars should not, for example, prioritise saving the rich). Anyway, as the authors put it:

“Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences.”
