Design Engineering

Self-driving cars to think and make moral decisions just like humans

Staff   


Human ethical decisions can be implemented into self-driving cars, enabling them to make decisions based on the dilemmas they face on the road.

The future of self-aware robots is no longer just the premise of Isaac Asimov’s I, Robot. A new study has determined that human morality can be modelled by AI, possibly enabling machines to make moral decisions.

The researchers used immersive virtual reality to study human behaviour in simulated road traffic scenarios, then applied their findings to self-driving vehicles.

Participants were asked to drive a car through a typical neighbourhood on a foggy day and encountered unexpected, unavoidable dilemma situations involving objects, animals and humans. The test was designed to force the driver to decide which of the three should or could be spared. The research showed that moral decisions in unavoidable traffic situations can be explained and modelled by a single value-of-life assigned to every human, animal or inanimate object.

Prior to this groundbreaking study, it was assumed that moral decisions relied heavily on context and therefore could not be modelled or described algorithmically.


“But we found quite the opposite,” says Leon Sütfeld, first author of the study. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.” This implies that human moral behavior can be well described by algorithms that could be used by machines as well.
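To make the idea concrete, here is a minimal, hedged sketch of what a value-of-life choice model could look like. It is not the authors’ fitted model: the object classes, the numeric scores and the softmax choice rule are illustrative assumptions only.

```python
# Illustrative sketch of a "value-of-life" dilemma model.
# Scores and the choice rule are hypothetical, not the study's parameters.
import math

# Hypothetical value-of-life scores per object class (higher = more worth sparing)
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "trash_can": 0.05,
}

def lane_value(obstacles):
    """Sum the value-of-life scores of everything occupying one lane."""
    return sum(VALUE_OF_LIFE[o] for o in obstacles)

def probability_of_sparing_left(left_lane, right_lane, temperature=0.1):
    """Softmax-style choice rule: the driver tends to steer into the lane
    whose occupants carry the lower total value of life, sparing the
    higher-valued lane. The temperature controls how deterministic the
    choice is."""
    v_left, v_right = lane_value(left_lane), lane_value(right_lane)
    # The decision is driven by the difference in total value between lanes.
    return 1.0 / (1.0 + math.exp(-(v_left - v_right) / temperature))

if __name__ == "__main__":
    # Dilemma: a child in the left lane, a dog in the right lane.
    p = probability_of_sparing_left(["child"], ["dog"])
    print(f"P(spare left lane) = {p:.2f}")  # close to 1: the child is spared
```

In a sketch like this, fitting the model to participants’ choices would amount to estimating the value-of-life scores from behaviour, which is the sense in which such decisions become describable algorithmically.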

This research will have a significant impact on the debate about the behaviour of self-driving vehicles and other machines in unavoidable situations.

The senior author of the study, Professor Gordon Pipa, believes that the research shows that it is now possible to program a machine to make it act more like a human when it comes to making moral decisions.

“We need to ask whether autonomous systems should adopt moral judgements; if yes, should they imitate moral behavior by imitating human decisions, or should they behave according to ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?” he adds.

Autonomous cars are just the beginning for the authors of this study. The implications of this research are far-reaching and include robots and AI in situations where critical decisions are required, such as hospital operating rooms.

They warn that we are now at the beginning of a new epoch that needs clear rules; otherwise, machines will start making decisions without us.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

The research, “Virtual Reality experiments investigating human behavior and moral assessments,” from the Institute of Cognitive Science at the University of Osnabrück, was published in Frontiers in Behavioral Neuroscience.

http://home.frontiersin.org
