Bentham, Kant, and AI-driven cars

Imagine you are driving your car at 90 km/h when you realize the brakes are not working. Twenty meters ahead of you, crossing the street, are five people who will inexorably die if you run them over. Looking to your right, you see one person sitting on a café terrace eating an ice cream. Your steering wheel still works, so you could steer your car towards the terrace, killing the one person but sparing the five.

What would you do? Most people would choose to kill the one person and spare the five. Sacrificing one life in order to save five seems like the right thing to do.

Now imagine you are standing on a bridge overlooking the road. Down the road comes the brakeless car, and further along are five people who are about to be run over by it.

You feel helpless to avert this disaster, until you notice a very heavy man standing next to you on the bridge. You could push him off the bridge and into the path of the car. He would die, but the five people crossing the road would be saved. To make the choice even more similar to the previous example, suppose this man is standing on top of a trapdoor which you could open by turning a car-sized steering wheel to the right.

Would you push the unsuspecting heavy man? Most people would find it terribly wrong to push him into the car’s path. But this raises a moral puzzle: why does the principle that seems right in the first case – sacrifice one life to save many – seem wrong in the second?

This example, taken from a beautifully taught free Harvard online course, illustrates how the consequentialist moral reasoning of Jeremy Bentham’s utilitarianism (“it is the greatest happiness of the greatest number that is the measure of right and wrong”) does not hold up under the scrutiny of Immanuel Kant’s categorical imperative (“act only according to that maxim whereby you can, at the same time, will that it should become a universal law”).
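To make the disagreement concrete, here is a deliberately crude Python sketch. Everything in it is my own simplification (the action names, the `sacrifices_bystander` flag, the idea that Kantian reasoning reduces to a single boolean), so treat it as a cartoon of the two positions rather than a serious model of either:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lives_lost: int
    sacrifices_bystander: bool  # does the act use a person merely as a means?

ACTIONS = [
    Action("stay the course", lives_lost=5, sacrifices_bystander=False),
    Action("swerve into the terrace", lives_lost=1, sacrifices_bystander=True),
]

def bentham(actions):
    # Utilitarian rule: minimise total harm; nothing else counts.
    return min(actions, key=lambda a: a.lives_lost)

def kant(actions):
    # Categorical rule: first rule out any action that treats a person
    # merely as a means, then minimise harm among what remains.
    permitted = [a for a in actions if not a.sacrifices_bystander]
    return min(permitted, key=lambda a: a.lives_lost) if permitted else None

print(bentham(ACTIONS).name)  # -> "swerve into the terrace"
print(kant(ACTIONS).name)     # -> "stay the course"
```

The two rules agree whenever minimising harm requires no active sacrifice, and they diverge exactly on cases like the bridge: Bentham’s rule pushes the man, Kant’s refuses.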

After 11th May 1997, when IBM’s Deep Blue beat human champion Garry Kasparov, no human will ever again beat an AI at chess.

After 27th May 2017, when Google’s AlphaGo beat human champion Ke Jie, no human will ever again beat an AI at Go.

Once AI driven cars are mainstream, no human will ever again be allowed to drive.

While a human driver has a field of vision of about 120 degrees, an AI has 360-degree vision from the car and, through connectivity, could simultaneously have 360-degree vision from neighbouring cars, from traffic cameras, or from satellites. Other senses could come into play which the human counterpart does not have: for example, an AI car could have sonar, emitting a signal and reading its echo to detect nearby obstacles, much like a bat. An AI could have real-time knowledge of where every other car within a radius of 10 kilometres is, in what direction it is headed, and whether it has any technical issue.
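As a toy illustration of that last capability, here is a minimal Python sketch of the shared situational picture such a car might keep. The `CarState` fields and the broadcast feed are assumptions of mine, not any real vehicle-to-vehicle protocol:

```python
from dataclasses import dataclass
import math

@dataclass
class CarState:
    car_id: str
    x_km: float        # position relative to us, east
    y_km: float        # position relative to us, north
    heading_deg: float
    has_fault: bool

def nearby(fleet, radius_km=10.0):
    # Keep only cars within the stated radius, as if received over a
    # hypothetical vehicle-to-vehicle broadcast channel.
    return [c for c in fleet if math.hypot(c.x_km, c.y_km) <= radius_km]

fleet = [
    CarState("A", 1.2, -0.4, 90.0, False),
    CarState("B", 8.7, 3.1, 270.0, True),   # faulty car, ~9.2 km away
    CarState("C", 14.0, 2.0, 0.0, False),   # outside the 10 km radius
]

for car in nearby(fleet):
    status = "FAULT" if car.has_fault else "ok"
    print(f"{car.car_id}: {math.hypot(car.x_km, car.y_km):.1f} km away, "
          f"heading {car.heading_deg:.0f} deg, {status}")
```

A human driver holds nothing like this picture in their head; the AI refreshes it continuously for every car in range.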

Human drivers get road rage, get impatient sitting in traffic jams, and try to stick their car’s nose in front to gain an inch wherever possible. All of these human behaviours cause traffic jams, accidents, and inefficiencies. A perfect AI driver will have none of these traits, and thanks to this, traffic jams will be greatly, if not totally, reduced.

Having a human drive in the future will be as rare as having a human wash clothes by hand today. I could do it, but why would I? I have a washing machine for that.

One human problem is deciding what to tell the AI to do when it faces the example at the beginning of this story. Should an AI follow Jeremy Bentham’s utilitarian principles? Should it choose to kill one to save five? This becomes particularly dangerous if we also want AIs to perform surgery: in order to save five patients who need a heart, a lung, a kidney, a liver, and a stomach, it could choose to kill and butcher a perfectly healthy patient who had just come in for a regular check-up, in order to harvest the needed organs.

On the brink of a new era, these classic, unresolved questions are more important than ever.
