The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically[citation needed] divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1] It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights
Robot rights are the moral obligations of society towards its machines, similar to human rights or animal rights.[2] Robot rights, such as the right to exist and perform one's own mission, may be linked to robot duties to serve humans, by analogy with linking human rights to human duties before society.[3] These may include the right to life and liberty, freedom of thought and expression, and equality before the law.[4] The issue has been considered by the Institute for the Future[5] and by the U.K. Department of Trade and Industry.[6]

Experts disagree on whether specific and detailed laws will be required soon or only safely in the distant future.[6] Glenn McGee reports that sufficiently humanoid robots may appear by 2020.[7] Ray Kurzweil sets the date at 2029.[8] Another group of scientists, meeting in 2007, supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[9]
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[10]
Threat to privacy
Aleksandr Solzhenitsyn's The First Circle describes the use of speech recognition technology in the service of tyranny.[11] If an AI program exists that can understand natural language and speech (e.g. English), then, with adequate processing power, it could theoretically listen to every phone conversation and read every email in the world, understand them, and report back to the program's operators exactly what is said and exactly who is saying it. An AI program like this could allow governments or other entities to efficiently suppress dissent and attack their enemies.
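As a minimal illustration of the pipeline this scenario assumes (transcribe speech, scan the transcript, report matches to an operator), the following Python sketch flags conversations containing watch-listed phrases. The transcribe function is a hypothetical stand-in for a real speech-to-text system, and the WATCHLIST and flag names are invented for this example; everything else uses only the standard library.

```python
# Sketch of the surveillance pipeline described above: transcribe
# audio, scan the transcript, report who said what. `transcribe` is
# a hypothetical placeholder for a speech-to-text model, not a real API.

from dataclasses import dataclass

@dataclass
class Conversation:
    speaker: str
    audio: bytes  # raw audio of one party's speech

def transcribe(audio: bytes) -> str:
    """Hypothetical speech-to-text step (assumed, not a real library)."""
    return audio.decode("utf-8")  # stand-in: pretend the audio is already text

WATCHLIST = {"protest", "strike", "election"}  # example terms an operator might monitor

def flag(conversations: list[Conversation]) -> list[tuple[str, str]]:
    """Return (speaker, matched term) pairs for the operator's report."""
    reports = []
    for conv in conversations:
        text = transcribe(conv.audio).lower()
        for term in WATCHLIST:
            if term in text:
                reports.append((conv.speaker, term))
    return reports

if __name__ == "__main__":
    calls = [Conversation("alice", b"The protest is on Friday."),
             Conversation("bob", b"Dinner at eight?")]
    print(flag(calls))  # [('alice', 'protest')]
```

The sketch is deliberately trivial; the ethical concern in the text is precisely that each stage scales with processing power, so the same loop could in principle run over every conversation rather than two.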
Threat to human dignity

Main article: Computer Power and Human Reason

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:

A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
A therapist (as was proposed by Kenneth Colby in the 1970s)
A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
A soldier
A judge
A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[12]

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[12] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.