I stumbled upon the article “Posthuman enough?”. While the article does not pertain directly to my question, it nonetheless provided insight into how AI and robotics will influence the value of human life through the contested topic of genetic engineering. The article analyzes the consequences of genetic engineering and how it will influence civilization, approaching the subject from both science and morality through contemporary works.
Richard B. Norgaard, the author of “Posthuman enough?”, emphasizes that “the distinction between correction and enhancement may be very difficult to enforce” in the field of genetic engineering. The predominant argument against genetic engineering appeals to the sanctity of nature, life, and natural processes. Much like genetic engineering, my question first requires a distinction between rudimentary “dumb” machines and the very possible “sentient” AI; each category carries its own consequences. Humans are incredibly ingenious in our ability to create, manipulate, and influence life and material. When we direct that effort toward creating a sentient life, we must answer whether we are breaking our most fundamental beliefs. In this effort, are we overstepping our boundaries? In the case of an artificial sentient life that can process information and maintain a consciousness just as we can, are we not creating a new species?
Approaching from the moral front, playing God is a breach of most moral codes. According to the Oxford Dictionary, character is defined as “the mental and moral qualities distinctive to an individual”. In our attempt to mimic consciousness, we sacrifice character (the extent of our humanity) for scientific pursuit. But this point is largely subjective to the person in question and to the society that draws the boundaries of its morality.
Norgaard, in his analysis of the consequences of genetic engineering, also raises two issues that are reflected in the making of a future dominated by robotics and AI. First: as advanced robots and AI are developed, the approach to warfare will change dramatically, as will military and political tension between nations with successful robotics programs and nations without. Second, and more critical for non-sentient AI: selling advanced robotics to the masses will raise a socio-economic barrier segregating the haves from the have-nots in society.
Both of these pose serious challenges to the preservation of a just society. We face a scenario where we endanger not only civilized society but international welfare as well. We must also recognize the dangers robots pose to their users. Non-sentient AI can be forced to abide by Asimov’s three laws of robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. However, it is not entirely impossible that a glitch in the code could occur and replicate en masse, just as cancer does in humans. In a reality with sentient AI, we must assume that sentient AI will develop a consciousness far more benevolent than the human mind, which is polluted by domination and illusory superiority.
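The strict ordering of the three laws described above can be illustrated as a small rule hierarchy, where each law is only consulted once the laws above it are satisfied. This is purely an illustrative sketch, not anything from Asimov or the article; the `Action` fields and their names are hypothetical simplifications.

```python
# Illustrative sketch of Asimov's Three Laws as a strict priority ordering.
# All field names here are hypothetical simplifications for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would this action injure a human?
    inaction_harm: bool = False   # would *not* acting let a human come to harm?
    is_human_order: bool = False  # was this action ordered by a human?
    risks_self: bool = False      # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never act so as to harm a human.
    if action.harms_human:
        return False
    # First Law, inaction clause: must act when inaction would harm a human,
    # even at the cost of the robot's own existence.
    if action.inaction_harm:
        return True
    # Second Law: obey human orders (already known not to violate Law 1).
    if action.is_human_order:
        return True
    # Third Law: self-preservation applies only when Laws 1 and 2 are moot.
    return not action.risks_self

print(permitted(Action("strike", harms_human=True)))                   # False
print(permitted(Action("rescue", inaction_harm=True, risks_self=True)))  # True
```

The point of the ordering is exactly the one the laws encode: a self-endangering rescue is still permitted because the First Law outranks the Third.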
But, as reflected in Norgaard’s article, “[Bill Mckibben] makes it very clear that decisions need to be taken now that will affect our future path, but also that not all of the decisions need to be made at once and that decisions can be modified as we learn more”.
Norgaard, Richard B. “Posthuman enough?” BioScience, vol. 54, no. 3, 2004, p. 255+. Science in Context, http://link.galegroup.com/apps/doc/A114856806/SCIC?u=pioneer&xid=c496b346. Accessed 23 Feb. 2018.
Tags: ai asimov future robotics sentient