Design Engineering

Can autonomous vehicles make moral and ethical decisions?

Staff   


A new study suggests that human moral behavior can be well described by algorithms that machines could use to make decisions.

As more and more auto manufacturers shift their focus to self-driving vehicles, one major question looms in the minds of researchers, lawmakers and consumers: can a self-driving vehicle be moral, acting as humans do, or as humans expect humans to act?

It has generally been accepted that the answer to this question is “probably not.” However, a new study has found, for the first time, that human morality can be modeled, meaning that machine-based moral decisions are, in principle, possible.

The research, conducted at the Institute of Cognitive Science at the University of Osnabrück, used immersive virtual reality experiments to investigate human behavior and moral assessments in simulated road traffic scenarios.

In the study, participants were asked to drive a car through a typical suburban neighborhood on a foggy day. They encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals and humans, and had to decide which was to be spared.


The results were then conceptualized with statistical models, yielding rules with an associated degree of explanatory power for the observed behavior.

One of the key findings was that moral decisions in unavoidable traffic collisions can be explained well, and modeled, by a single value of life assigned to every human, animal or inanimate object.

Up until now, it has generally been assumed that moral decisions are strongly context dependent and therefore cannot be modeled or described algorithmically.

“But we found quite the opposite,” explains Leon Sütfeld, first author of the study. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”

This implies that human moral behavior can be well described by algorithms that could be used by machines as well.
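To make that idea concrete, here is a minimal sketch of what a value-of-life-based decision rule could look like. The entity categories, numeric values and trajectory options below are illustrative assumptions, not the parameters fitted in the Osnabrück study; the point is only that a single scalar value per entity is enough to rank the outcomes of an unavoidable collision.

```python
# Illustrative sketch of a value-of-life-based decision rule.
# The scores below are hypothetical placeholders, not values from the study.
VALUE_OF_LIFE = {
    "human": 100.0,
    "dog": 25.0,
    "deer": 20.0,
    "trash_can": 1.0,
}

def total_value(entities):
    """Sum the value-of-life scores of the entities on one trajectory."""
    return sum(VALUE_OF_LIFE[e] for e in entities)

def choose_trajectory(options):
    """Pick the trajectory whose unavoidable collision costs the least total value.

    `options` maps a trajectory label to the list of entities that would be hit.
    """
    return min(options, key=lambda label: total_value(options[label]))

if __name__ == "__main__":
    # Hypothetical dilemma: stay on course toward a human, or swerve into a deer.
    dilemma = {
        "stay": ["human"],
        "swerve_left": ["deer"],
    }
    print(choose_trajectory(dilemma))  # -> "swerve_left"
```

A rule this simple is exactly what makes the finding notable: if human choices in such dilemmas can be predicted by comparing summed values, the same comparison can be computed by a machine.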

The study’s findings have major implications for the debate around how self-driving cars and other machines should behave in unavoidable dilemma situations.

“We need to ask whether autonomous systems should adopt moral judgements,” explains Prof. Gordon Pipa, a senior author of the study. “If yes, should they imitate moral behavior by imitating human decisions? Should they behave according to ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?”

The study suggests that it is possible to program machines to make human-like moral decisions, and that it is crucial for society to engage in an urgent and serious debate about whether it should.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

The study’s authors say that autonomous cars are just the beginning, as robots in hospitals and other artificial intelligence systems become more commonplace. They warn that we are now at the start of a new epoch that requires clear rules; otherwise, machines will begin making these decisions without us.

The research is published in Frontiers in Behavioral Neuroscience.

http://home.frontiersin.org
