The Ethical Quandary of Self-Driving Cars
Imagine the beginning of what promises to be an awesome afternoon: You’re cruising along in your car, the sun is shining, the windows are down, and your favorite song is playing on the radio. Suddenly, the truck in front of you stops without warning. You are left with three options, and only three, none of which avoids harm.
First, you can rear-end the truck. You’re driving a big car with high safety ratings, so you’ll only be slightly injured, and the truck’s driver will be fine. Alternatively, you can swerve to your left, striking a motorcyclist who is wearing a helmet. Or you can swerve to your right, striking a motorcyclist who isn’t. You’ll be fine whichever of these two options you choose, but the helmeted motorcyclist will be badly hurt, and the helmetless rider’s injuries will be even more severe. What do you do? Now imagine your car is autonomous. What should it be programmed to choose?
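To make that closing question concrete, here is a deliberately naive sketch in Python. Everything in it is hypothetical: the severity scores are made-up stand-ins for the story’s outcomes, not real crash data, and neither policy is anyone’s actual crash-optimization algorithm.

```python
from dataclasses import dataclass

# Hypothetical injury-severity scale: 0 = unharmed ... 3 = severe injury.
# These numbers only encode the ordering described in the scenario.

@dataclass
class Option:
    name: str
    occupant_harm: int  # predicted harm to the car's own passenger
    other_harm: int     # predicted harm to the other party

OPTIONS = [
    Option("rear-end the truck", occupant_harm=1, other_harm=0),
    Option("swerve left into helmeted rider", occupant_harm=0, other_harm=2),
    Option("swerve right into helmetless rider", occupant_harm=0, other_harm=3),
]

def minimize_total_harm(options):
    """Pick the option with the least combined injury severity."""
    return min(options, key=lambda o: o.occupant_harm + o.other_harm)

def occupant_first(options):
    """Protect the passenger first; break ties by harm to others."""
    return min(options, key=lambda o: (o.occupant_harm, o.other_harm))

if __name__ == "__main__":
    # Prints "rear-end the truck": least harm overall, but it injures you.
    print("total-harm policy:   ", minimize_total_harm(OPTIONS).name)
    # Prints "swerve left into helmeted rider": the helmet makes that
    # rider's predicted injuries lighter, so this policy targets her.
    print("occupant-first policy:", occupant_first(OPTIONS).name)
```

Even this toy version exposes the quandary: a policy that minimizes total harm accepts injury to the car’s own passenger, while a policy that protects the passenger first ends up targeting the motorcyclist who wore a helmet, precisely because her predicted injuries are lighter.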