The programming of self-driving cars needs to anticipate “all” possible scenarios, or at least most of them, including those we would rather not think about. For instance: on the highway, having to decide between colliding with a wrecked vehicle, possibly with someone still inside, or with its passengers as they try to reach the emergency lane.

MIT Technology Review published an article* addressing the ethical problem on the one hand (defining the logic behind the choice between two bad options) and the legal one on the other (if the car owner must first choose among several variants of the choice algorithm, who will be held responsible in the case of such an accident?).

Cleverly, it points out a slight inconsistency of the human psyche: the vast majority of people surveyed choose the utilitarian approach (minimizing damage, killing 1 person instead of 10, for instance) BUT they are far less sure about it if the person sacrificed is the driver or one of their loved ones… This might create a dilemma for manufacturers: what if no one wants to buy the car that makes the “right” choices?
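To make the contrast concrete, here is a purely illustrative toy sketch (in Python, not taken from the article, with invented names and numbers): a “utilitarian” rule that simply minimizes the number of people harmed, next to a hypothetical “self-protective” variant that weighs harm to the car’s own occupants more heavily. The two rules can pick opposite outcomes in exactly the 1-versus-10 situation described above.

```python
# Toy illustration only: two hypothetical variants of a crash-choice rule.
# Names, weights, and numbers are invented for the sake of the example.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    pedestrians_harmed: int
    occupants_harmed: int

def utilitarian_cost(o: Outcome) -> float:
    # Minimize total harm, regardless of who is harmed.
    return o.pedestrians_harmed + o.occupants_harmed

def self_protective_cost(o: Outcome, occupant_weight: float = 20.0) -> float:
    # Same rule, but harm to the car's own occupants counts much more.
    return o.pedestrians_harmed + occupant_weight * o.occupants_harmed

outcomes = [
    Outcome("swerve into the barrier", pedestrians_harmed=0, occupants_harmed=1),
    Outcome("stay on course", pedestrians_harmed=10, occupants_harmed=0),
]

# The utilitarian rule sacrifices the occupant (1 harmed instead of 10);
# the self-protective variant stays on course and spares the occupant.
print("utilitarian choice:    ", min(outcomes, key=utilitarian_cost).description)
print("self-protective choice:", min(outcomes, key=self_protective_cost).description)
```

The only difference between the two rules is a single weight, which is precisely why the question of who gets to pick that weight (manufacturer, owner, regulator) carries so much legal weight.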

Beyond this reflection, which is both urgent and crucial, I find it positive that the fundamental problem is being fully considered again: yes, choosing the car as a means of transport directly means creating a risk for other people’s lives and one’s own – the WHO estimates that 1.25 million people die in road accidents every year!


* “Why Self-Driving Cars Must Be Programmed to Kill”, MIT Technology Review, October 22, 2015, http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/ (consulted on October 22, 2015)
