“Google Tackles AI, Robotics Safety to Prevent Future Toasters from Killing Us in Our Sleep”, by Joel Hruska, is a three-page synopsis of an extensive 29-page study by Cornell University and Google that analyzes safe practices for developing concise, effective, and adaptive AI systems.

As Hruska states, the main points of safe AI design are: 1) avoiding negative side effects, 2) avoiding reward hacking, 3) scalable oversight, 4) safe exploration, and 5) robustness to distributional shift. He cites a video on the web created by Boston Dynamics that features a 60-pound robot performing daily chores, such as placing cups into the dishwasher or recycling bin and delivering beer to its operator.

While this is a limited example of what robots functioning under sound AI are capable of, it is only a snapshot from 2018. This robot is just one of hundreds of machines that compile databases, run simulations, and operate in industrial settings; in a few decades we may be looking at a world dominated by computers.

In a TED Talk held at Erasmus University, Etienne Augé makes the case that science fiction is necessary. It is not simply because it invokes the imagination, although that is an important part. Science fiction “shares communities with politics and propaganda”. It is a form of mass persuasion that we enjoy and experience. “It is about telling a story to inspire people… preventing and inventing the future, not predicting it”. Science fiction has debuted countless post-apocalyptic worlds, such as The Terminator, and dystopias, such as The Matrix. As Augé states, these are predictions of what the world could become through a careless pursuit of knowledge. Heeding the signs of our times, then, makes study, research, and caution critical to ensuring the prosperity of our world.

As we work towards the future, we are not set on a predestined path; we are actively learning from our errors. As long as we proceed with care, we can avoid Skynet. We already see how technology widens the separation between the haves and have-nots. That is why AI should become a tool to fix this schism, not expand it by being used as an instrument of commercialism.

References:

Hruska, Joel. “Google Tackles AI, Robotics Safety to Prevent Future Toasters from Killing Us in Our Sleep.” PC Magazine, Aug. 2016, pp. 11-14. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=cph&AN=117128586&site=ehost-live.

TEDxTalks. “Why Our World Needs Science Fiction: Etienne Augé at TEDxErasmusUniversity.” YouTube, 23 Mar. 2014, www.youtube.com/watch?v=FJkixvgJqsY.

CC BY-SA 4.0 Preventing Skynet by Raymond is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

1 Comment
  1. Ravi 7 months ago

    Raymond,
Reading this was very interesting. I enjoyed your title and how it refers to a famous movie. Here is an article I read about the pros and cons of developing advanced AI that I think you’d like: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Youth Voices is organized by teachers at local sites of the National Writing Project and in partnership with Educator Innovator.
