Who Should the Tesla-Trolley Kill?
Your self-driving car is cruising along at 35 miles per hour while you watch Netflix, until it confronts the inevitable: three school children run out onto the road. Your car has a few options: (1) swerve off the road to the right and hit a tree, likely killing you; (2) keep straight and run over the three children, likely killing them; or (3) turn hard left and plow into the four business people heading to their favorite lunch spot. This moral dilemma is a derivative of the “trolley problem.” In the era of self-driving vehicles, we may be able to predetermine which lives are more valuable. In other words, the split-second decision faced by the operator of a runaway trolley may be preprogrammed into automated vehicles.
Do you have a knee-jerk answer about who should be saved? Good. The harder question is determining why you think the way you do. Do you favor the children because they are younger than the business people, and thus you value younger lives? Do you favor the business people because there are more of them, or because they contribute more to society today? Do you believe the driver should sacrifice himself and die a hero? Would it change if he were the pope? Mercedes has announced that it will program its cars to save the life of the driver in inevitable crash scenarios. That makes business sense; otherwise buyers might be deterred from purchasing those vehicles. Eventually, however, this may become a regulatory matter beyond the discretion of automakers.
I suggest self-driving car systems first value crashing into whoever is least likely to die, and second value saving the greater number of people. Valuing people differently based on their actuarial worth could be taken to an extreme: doing so would have to rely on a database of personally identifying information or on superficial real-time estimations, and accuracy problems and algorithmic biases could run amok. Saving the driver at all costs, as Mercedes suggested, could lead to robotic cars plowing over school children. In contrast, the utilitarian calculation is simple and doable: save the most lives. Ultimately, in a fully autonomous future, one hopes that these accidents are few and far between.
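The two-tier rule proposed above can be sketched in a few lines of code. This is only an illustrative toy, not any automaker's actual logic; the option names, death probabilities, and group sizes are invented for the example (here the driver hitting the tree is assumed likeliest to survive, thanks to airbags and crumple zones):

```python
# A minimal sketch of the proposed rule: prefer the option whose people are
# least likely to die; break ties by putting fewer people at risk.
# All numbers below are hypothetical, for illustration only.

def choose_crash_option(options):
    """Return the option with the lowest death probability,
    breaking ties by the smallest number of people at risk."""
    return min(options, key=lambda o: (o["death_probability"], o["people_at_risk"]))

options = [
    {"name": "tree",        "death_probability": 0.5, "people_at_risk": 1},  # driver, protected by safety systems
    {"name": "children",    "death_probability": 0.8, "people_at_risk": 3},
    {"name": "pedestrians", "death_probability": 0.8, "people_at_risk": 4},
]

print(choose_crash_option(options)["name"])  # → tree
```

Note that with these made-up numbers the rule sacrifices the well-protected driver, the opposite of the Mercedes policy; change the probabilities and the answer changes, which is precisely why the inputs to such a rule would be contentious.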