A little while ago, I visited the Moral Machine website, a study put out by MIT to see how people would respond if a machine had to choose between lives. Turns out, I prefer hoomans a lot more than pets, if forced to choose between them. Some scenarios are distinctly uncomfortable – how do you choose between two doctors on one hand and two obese parents and their child on the other? Or between the owner of a vehicle and a poor sod on the street? While you will likely not see the same scenarios I did, I am curious to see how you would respond if you were a Moral Machine.

Let me take a step back. Isaac Asimov, the famous science-fiction writer, came up with the Three Laws of Robotics to govern self-aware robots:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

These laws are quite sound, but they effectively consign robots to forced labor in human-free environments. No robot warfare, no robot police, and no robot self-driving vehicles would be allowed because, as Gary Marcus wrote in his New Yorker piece “Moral Machines,” “an automated car that aimed to minimize harm would never leave the driveway.” That same car might even smash unoccupied vehicles parked in their driveways to stop them from leaving!
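
To make that concrete, here is a toy sketch of my own (nothing Asimov or Marcus actually wrote, and every name and number in it is invented) that treats the three laws as a strict priority ordering: any risk to a human outweighs obedience, which in turn outweighs self-preservation.

```python
# A toy sketch, not a real robot controller: the Three Laws as a strict
# priority ordering over candidate actions. All class names, fields, and
# numbers below are made-up illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_harm: float    # expected harm to humans (Law 1)
    disobedience: float  # how badly the action defies human orders (Law 2)
    self_harm: float     # expected harm to the robot itself (Law 3)

def choose(actions: list[Action]) -> Action:
    # Lexicographic comparison: Law 1 dominates Law 2, which dominates Law 3.
    # Any nonzero risk to a human loses to a perfectly "safe" alternative,
    # no matter how disobedient or self-destructive that alternative is.
    return min(actions, key=lambda a: (a.human_harm, a.disobedience, a.self_harm))

options = [
    Action("stay_parked", human_harm=0.0, disobedience=1.0, self_harm=0.0),
    Action("drive_owner_to_work", human_harm=0.01, disobedience=0.0, self_harm=0.0),
]

# Even a 1% chance of harming someone keeps the car home -- Marcus's point
# that a harm-minimizing car "would never leave the driveway."
print(choose(options).name)  # -> stay_parked
```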

While computer algorithms are nowhere near “comprehending” abstract ethics like good and evil, or even predicting what would happen if a water balloon were tossed at a wall, robots are already making choices today. Self-driving vehicles are legal in several states, but are they ethical? Can they be held responsible for making a choice, or does responsibility fall to the programmer(s)? Or to the consumer who purchased the car? If we are going to entrust more and more of our lives to differently sized boxes of plastic, metal, and glass, how do we know they would make the choices we would, or correct us when we are wrong? In years to come, politicians, engineers, and moral leaders will have to gather, sweep aside ideological boundaries, and confront the progress of technology together for the betterment of society. Who knows? A computer might be one of them…

“Ethics of Metal, Glass, and Plastic” by Eric is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

2 Comments
  1. Raymond 7 months ago

Eric, your post is interesting. It strikes at the fundamental role of AI within our society, and how it will continue to develop. To what extent will humans implement AI into our everyday lives? Will AI come to dominate everyday life to the point where our lives become dictated by, or reliant on, AI? Regarding the Moral Machine scenarios, I believe that if an AI were ever to face this situation, it would place emphasis on the productivity of the groups in question. However, resolving the issue will require humans to actively acknowledge that we are relinquishing a part of our humanity: we are empirically defining one life as more valuable than another and encoding our beliefs into the computer. Here’s a link on the future of AI ethics that I found interesting:
    https://futureoflife.org/2017/07/31/towards-a-code-of-ethics-in-artificial-intelligence/

  2. Billy 7 months ago

Eric, I find your topic concerning the ethical responsibilities of artificial intelligence intriguing, and I would like to help you further your research. After digging around a little bit, I came across a website that lists five basic principles of robotics, similar to those you have already presented:

    “Robots should not be designed as weapons, except for national security reasons.
    Robots should be designed and operated to comply with existing law, including privacy.
    Robots are products: as with other products, they should be designed to be safe and secure.
    Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
    It should be possible to find out who is responsible for any robot.”

    I feel as though you can do a lot more with this topic, and I would like to see what research you will present in the future. Thank you for sharing.
