Teaching autonomous vehicles to make ethical decisions on the road

A German study shows that self-driving cars may soon be able to make moral and ethical decisions like humans.

A vehicle equipped with Mobileye technology (photo credit: COURTESY MOBILEYE)
Jerusalem-based Mobileye and other companies developing self-driving vehicles may have to cope with the eventuality that robots could make ethical decisions on whom to save and whom to sacrifice in a car accident.
This conclusion on autonomous vehicles emerges from a study by German researchers at the Institute of Cognitive Science at the University of Osnabrück, just published in Frontiers in Behavioral Neuroscience.
Described by the authors as “groundbreaking” research, the study has far-reaching implications if machines can indeed make human-like ethical decisions.
Contrary to previous thinking, the researchers found for the first time that human morality can be modeled, meaning that machine-based moral decisions are, in principle, possible. They reached this conclusion by using immersive virtual reality to study human behavior in simulated road-traffic scenarios.
The study participants were asked to drive a car through a typical suburban neighborhood on a foggy day. They were then unexpectedly confronted with unavoidable dilemmas involving inanimate objects, animals and humans, and had to decide which of them should be spared from injury or death.
Cognitive scientist Leon Sütfeld, the study’s lead author, wrote that until now it had been assumed that moral decisions are strongly dependent on context and therefore cannot be modeled or described algorithmically. “But we found quite the opposite.
“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.” This, he continued, implies that human moral behavior can be well described by algorithms that could be used by machines as well.
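To picture what such a value-of-life-based model might look like in practice, here is a minimal sketch in Python. The category names and numeric values are purely illustrative assumptions, not figures from the Osnabrück study; the point is only that a single value per category is enough to rank the outcomes of an unavoidable collision.

```python
# Minimal sketch of a hypothetical value-of-life model for an unavoidable
# dilemma. The numeric values are illustrative assumptions, not figures
# from the study.

VALUE_OF_LIFE = {
    "human": 100.0,
    "animal": 10.0,
    "inanimate_object": 1.0,
}

def cost(obstacles):
    """Total value lost if the vehicle hits the given obstacles."""
    return sum(VALUE_OF_LIFE[o] for o in obstacles)

def choose_trajectory(options):
    """Pick the trajectory whose collision causes the smallest loss.

    `options` maps a trajectory name to the list of obstacles that
    trajectory would hit.
    """
    return min(options, key=lambda name: cost(options[name]))

if __name__ == "__main__":
    dilemma = {
        "swerve_left": ["animal"],
        "stay_in_lane": ["human"],
        "swerve_right": ["inanimate_object", "inanimate_object"],
    }
    print(choose_trajectory(dilemma))  # -> "swerve_right"
```

In this toy version, the “moral” decision reduces to minimizing the total value lost, which is the sense in which the authors say such behavior can be captured algorithmically.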
The study’s findings may have major implications in the debate around the “behavior” of self-driving cars and other machines in unavoidable situations.
Prof. Gordon Pipa, another senior author of the study, said that since it now seems to be possible that machines can be programmed to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate.
“We need to ask whether autonomous systems should adopt moral judgments,” Pipa insisted.
For example, a child running onto the road would be classified as significantly involved in creating the risk, and thus less deserving of being saved than a bystander on a footpath. “But is this a moral value held by most people, and how large is the scope for interpretation?” the researchers asked.
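One way to make that dilemma concrete is a speculative extension of the sketch above: a risk-contribution weight that discounts the value assigned to a road user who helped create the danger. This is a hypothetical illustration, not anything proposed in the study, and whether such a discount is morally acceptable is precisely the question the researchers raise.

```python
# Hypothetical extension of the earlier sketch: discount the value assigned
# to a road user in proportion to how much they contributed to the risk.
# The numbers are illustrative only.

VALUE_OF_LIFE = {"human": 100.0}  # same illustrative value as above

def weighted_cost(obstacles):
    """Each obstacle is a (category, risk_contribution) pair, risk in [0, 1]."""
    return sum(
        VALUE_OF_LIFE[category] * (1.0 - risk_contribution)
        for category, risk_contribution in obstacles
    )

# A child who ran onto the road (high risk contribution) counts for less
# in this scheme than a bystander on the footpath (no contribution).
print(weighted_cost([("human", 0.8)]))  # child on the road -> 20.0
print(weighted_cost([("human", 0.0)]))  # bystander         -> 100.0
```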
The issue could also be relevant to the use of autonomous weapons fired by robots in war. The study’s authors say that autonomous cars are just the beginning, as robots in hospitals and other artificial-intelligence systems become more common.
They warn that we are now at the beginning of a new era in which clear rules are needed; otherwise, machines may start making decisions without us.