Who should die in the event of an unavoidable accident involving a self-driving car? The driver? A random pedestrian or cyclist? Those are the sorts of questions manufacturers will have to answer as part of the ethical debate surrounding autonomous vehicles, according to a French study cited by Technology Review.
As the website points out, how should the car be programmed to act? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?
The answers matter because they could have a big impact on the social acceptance of self-driving cars.
Jean-Francois Bonnefon and his team at the Toulouse School of Economics in France surveyed several hundred workers on Amazon’s Mechanical Turk about these very questions. What did they find? In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.
However, “[participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in autonomous vehicles, more than they wanted to buy autonomous vehicles themselves,” Bonnefon said.
These are only the first steps into an inevitable, complex moral dilemma. What’s your take?