The Persistence of Intelligence
For hundreds of years, humans have envisioned a successor or usurper to mankind, created by their own hands. Frankenstein's monster was one such vision, but artificial intelligence (AI) is more realistic and achievable. One form of AI already exists: technology in use today, such as language and chess algorithms, that displays a deductive quality once specific to living organisms. This is one type of AI that impacts our lives. The other is the evolved version of this AI that we are already close to creating, an AI with cognitive and deductive abilities that can influence the physical world through its own sense of being. This second type poses a great danger to our species in the future, while the first poses a great danger right now. Soon there will come a period in human technology and development when debate over the moral and ethical issues of artificial intelligence will be imperative for retaining our humanity and our existence.
Artificial intelligence has two very distinct definitions. The ability to think in a cognitive and comprehensive way, resembling that of a living being, qualifies as the commonly envisioned future of machine intelligence. According to Copeland, artificial intelligence means "...the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience" (Copeland np). This is the secondary type of artificial being. I deem it secondary because it has not yet been achieved and requires further development, while the primary form has already been implemented in today's society. Primary artificial intelligence, such as Siri, drones, and computers, is a common occurrence in a modern, developed society. It reflects the basic attributes of human mental capability, carried out in a far more proficient way. Primary robotic intelligence has no ability to learn, nor does it perceive any aspect of reality; essentially, it is no more unique than a plant, a series of systems continuing in a repetitive cycle until it wilts. This version of intelligence has no possibility of conscience or moral processing, because it is inanimate, but secondary intelligence may prove different. If a machine designed to feel and think is not produced by natural means, does that make it inhuman? If so, then it will suffer.
Currently this technology qualifies as science fiction, but as humanity progresses, there will soon be an artificial awakening. There is a common belief that revolutionary technology must take a significant amount of time to come to fruition. To build a true AI, the system must be able to learn the necessities of general intelligence, including the self-awareness it currently struggles with. "It took 3 billion years of evolution to produce apes, and then only another 2 million years or so for languages and all the things that we are impressed by to appear" (Humphrys np). That is the time it took to develop, by natural means, the self-aware functional intelligence we have as humans. The author's point is that AI is not a threat, and is even less likely to achieve the goals set for it, because of the way its creation was approached: it was modeled on higher-functioning human behavior. While mathematics is not hard for computers, walking and other tasks in the physical realm are a greater challenge for AI. Yet science continues the modern pursuit of secondary artificial intelligence. Given that humans can learn from their mistakes and failures, the same article states, "AI is not dead, but re-grouping, and is still being driven, as always, by testable scientific models" (Humphrys np). One graph shows a predicted evolution, over a short period of ten years, involving three phases leading to the complete replacement of human capabilities with robotics (Stanley np). Phase 1 is the current state of robotic comprehension, showing "passive," or mindless yet cognitive, activity; this describes most higher-functioning robots today. Phase 2 shows "limited" cognitive ability, which does not amount to a secondary AI system, although it does show development in areas replacing human thought processes.
Phase 3 is "complete autonomous ability," which refers to a fully functional intelligent computer that can think independently of outside programming. This is the eventual goal of artificial intelligence programming, at which time a new set of ethical issues will take effect.
Most current issues involving the implementation of an AI system center on military advancements. The driving force behind almost every scientific advancement of such magnitude as robotic intelligence is governmental use in war. A perfect example is the Internet: it was created for the sole purpose of military communication and information gathering, and now it can be found in the majority of households and public buildings. The same pattern seems to be emerging with high-functioning computer systems. There is a striking amount of debate over how ethical it would be to replace human-guided drones with algorithm-driven war machines. When a life hangs in the balance, it might depend on something outside the reach of moral or humane thought. One problem under debate is the incomprehensibility of complicated binary algorithms. When a computer is endowed with the ability to learn, "If the machine learning algorithm is based on a complicated neural network, it may prove nearly impossible to understand why, or even how, the algorithm is judging applicants..." (Bostrom and Yudkowsky np). Once a robot reaches a certain stage in its ability to process information through deductive thought, if the program makes unreasonable or one-sided, illogical decisions, it might be impossible to trace the problem through the vast network of random neural processes. It goes without saying that these decisions would generally be decisions of whether or not to kill. Common thought on this subject holds that it would be a great risk to put the lives of others in the hands of a machine without emotion or moral conscience, but a major issue discussed at a robot-ethics meeting "is that automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants" (Knight np).
Given this viewpoint, it might seem more acceptable to relinquish the bias of man for a more logical and unforgiving enforcer. But with the overall understanding that the fate of any living being should not be measured in purely logical terms, and the belief that forgiveness is essential to humanity, it would be morally unjust to accept this idea.
The hypothetical effects of a society integrated with, or overpowered by, artificial intelligence produce a wide variety of possible outcomes. The commonly assumed future intersection of humanity and AI involves some conflict in which the human race is forced to secede from its throne as ruler of the known world. With the right adjustments, this need not be the outcome of progress. If a machine can be endowed with intelligence, then a conscience, or an ability to deeply moralize a situation, can also be created. If such morality is accessible to a being of superior caliber, far more potent than its inferiors, then coexistence with, or even subjugation of, the mechanically autonomous can likely be achieved. It can be assumed that "when AI algorithms take on cognitive work with social dimensions—cognitive tasks previously performed by humans—the AI algorithm inherits the social requirements" (Bostrom and Yudkowsky np). Therefore, systems that use cognitive thought take on ethical constraints, because they are not simply programmed machines without volition. These robots, which may be put to use in service roles, would not be limited by a fixed job or life description; a true AI would build its own system of learning without being "raised" to do so. The likelihood of that happening is minimal, though, given humanity's current and increasing desire for logical and unbiased decisions. The book The Quest for Artificial Intelligence quotes the humanist Jack Schwartz discussing theoretical controversies of AI, one being a government's implementation of such beings and a usurpation of power, leading to a human order "...resting on robot armies and police forces independent of extensive human participation and entirely indifferent to all traditional human or humane considerations," rather than the supremely just robot that was intended (qtd. in Nilsson 653). Careless actions will lead to such a future.
Advancements in today's society are very close to a basic achievement of AI. Military robots now being implemented in war are not as similar to the envisioned robotic intellectual of the future as some of the newer recreational devices on today's market. Nao, a robot designed to process information in a deductive and almost cognitive way, is probably the spearhead, in look and structure, of this intellectual industrialization (Contact Info np). In one video this robot briefly displays a basic function of a secondary AI system: self-awareness. Three Nao robots are given a test to identify which of them cannot speak; only one stands up to say "I don't know," then corrects himself after processing the fact that he himself just spoke, saying, "Sorry, I know now" (Contact Info np). Under no programming to do so, other than to answer the proposed question, it uses deductive reasoning and an unexpected variable, its recognition of its own speech, to reevaluate its decision. One could argue that it was simply programmed to respond to or process that specific answer, but Nao uses its own knowledge of basic material to formulate communication and deductive reasoning. The machine is best classified between a secondary and a primary robot, because it possesses the basic qualities of humans without consciousness or the capacity for such consciousness. Nao is an example of how soon the secondary age of AI is rising.
Problems regarding intellectually inspired beings like robots will soon require careful thought about the ethical issues they will face in society. Many military and social developments today show a great need for debate in this area, as does the future implementation of the AI system. These inventions foreshadow an artificial awakening that embodies the legendary words of Ultron: "Everybody creates the thing they dread" (Contact Info np). Humanity will create its own demise.

Works Cited
Contact Info. "Amazing Robot Becomes Self-Aware (Explained)." YouTube, 18 Jul. 2015, youtu.be/jx6kg0ZfhAI. Web. 26 Jan. 2016.
Copeland, B.J. "Artificial Intelligence (AI)." library.eb.com. Web. 18 Jan. 2016.
Gatling, Caleb. "As Artificial Intelligence Grows, so do Ethical Concerns." sfgate.com, 31 Jan. 2014. Web. 22 Jan. 2016.
Humphrys, Mark. "AI is Possible .. but AI won't Happen: The Future of Artificial Intelligence." computing.dcu.ie. Cambridge College, Aug. 1997. Web. 21 Jan. 2016.
Knight, Will. "Military Robots: Armed, but How Dangerous?" technologyreview.com. Robotics News, 3 Aug. 2015. Web. 24 Jan. 2016.
Nilsson, Nils J. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Stanford University Press, 13 Sept. 2009. Web. 24 Jan. 2016.
"Tracking the Evolution of Robots: Timeline for adopti." robotenomics.com. Web. 29 Jan. 2016.
Yudkowsky, Eliezer, and Nick Bostrom. nickbostrom.com. Cambridge University Press, 2011. Web. 21 Jan. 2016.