A little while ago, I visited MIT's Moral Machine website, a study of how people respond when a machine must choose between lives. It turns out I strongly prefer humans over pets when forced to choose between them. Some scenarios are distinctly uncomfortable: how do you choose between two doctors on one side and two obese parents and their child on the other? Or between the owner of a vehicle and a poor sod on the street? While you will likely not see the same scenarios I did, I am curious how you would respond if you were a Moral Machine.
Let me take a step back. Isaac Asimov, a famous science-fiction writer, came up with three laws to govern self-aware robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
These laws are sound in principle, but they effectively consign robots to forced labor in human-free environments. No robot warfare, no robot police, and no self-driving vehicles would be allowed because, as New Yorker writer Gary Marcus put it in his essay "Moral Machines," "an automated car that aimed to minimize harm would never leave the driveway." Taken literally, that same car might even ram unoccupied vehicles parked in their driveways to keep them from leaving!
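Marcus's driveway quip can be made concrete with a toy sketch. Everything here is invented for illustration (the `Maneuver` class, the harm scores) and is not drawn from any real autonomy stack; it just shows how a strict "minimize harm" rule deadlocks the moment every option carries some risk:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    humans_at_risk: int  # hypothetical estimate, for illustration only

def choose(maneuvers):
    """Return the least harmful maneuver, or None if every option risks harm."""
    best = min(maneuvers, key=lambda m: m.humans_at_risk)
    if best.humans_at_risk > 0:
        return None  # a strict First-Law reading: the car never leaves the driveway
    return best
```

On an empty street the rule happily picks the safe maneuver; but since real driving always carries nonzero risk to someone, the strict version refuses to act at all, which is exactly Marcus's point.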
While computer algorithms are nowhere near "comprehending" abstract ethics like good and evil, or even predicting what happens when a water balloon is tossed at a wall, robots are already making choices. Self-driving vehicles are legal in several states, but are they ethical? Can a car be held responsible for a choice, or does responsibility fall to its programmers? To the consumer who bought it? If we aim to entrust ever more of our lives to different-sized boxes of plastic, metal, and glass, how do we know they would make the choices we would, or correct us when we are wrong? In the years to come, politicians, engineers, and moral leaders will have to gather, sweep aside ideological boundaries, and confront the progress of technology together for the betterment of society. Who knows? A computer might be one of them…