Does Not Compute
"Does Not Compute" is how science fiction writers used to imagine the computers of the future responding to instructions they could not fulfill: trapped in an endless cycle of computation, the machine ends up destroyed, smoking and showering sparks, repeating the phrase as it grinds to a halt. The trope pairs naturally with the genre's other great convention, the hyperintelligent, amoral computer villain.
The conclusions these fictional computers invariably reach about us serve as vehicles for our most misanthropic judgements of humanity: that we are dangerous to ourselves and others; that we will not change our ways on our own; that it is ironic how our 'best' creation mocks us, just as we mock God; that we do all sorts of emotional and otherwise inefficient things every day. This leads the human protagonist to pose an unsolvable problem to the mad computer, sending its relentless logic into a fatal spin. Humans, we chuckle, can take the paradox as a joke, but a mass of silicon and metal cannot.
This superficial analysis overlooks a great deal, most notably the assumption that cognitive dissonance would be fatal to an artificial intelligence. Any AI will be at least partially modelled on the most successful intelligence we know, and since we have no inherent problem holding conflicting thoughts, surely it is advantageous to design machine intelligence to be similarly robust.
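As a purely illustrative sketch (a toy model, not a claim about how any real AI is built), the Python below contrasts a literal-minded evaluator that grinds away at a liar-style paradox until it gives up with a slightly more robust one that remembers what it has already tried and simply flags the contradiction:

```python
# A toy illustration, not drawn from any real system: a "literal-minded"
# evaluator keeps re-deriving a liar-style statement until it gives up,
# while a "robust" evaluator notices the cycle and flags the contradiction.

def literal_evaluator(statement: str, max_steps: int = 1000) -> str:
    """Re-evaluates a self-referential statement until it stabilizes;
    the liar sentence never does, so the loop runs to exhaustion."""
    value = True
    for _ in range(max_steps):
        new_value = not value          # "this statement is false" flips on every pass
        if new_value == value:         # never true here
            return f"{statement!r} settled at {value}"
        value = new_value
    return "DOES NOT COMPUTE"          # the smoking, sparking ending

def robust_evaluator(statement: str) -> str:
    """Remembers states it has already seen; a repeat means a paradox,
    which it simply reports instead of melting down."""
    seen = set()
    value = True
    while value not in seen:
        seen.add(value)
        value = not value
    return f"{statement!r} is a paradox; noted, moving on"

print(literal_evaluator("this statement is false"))  # DOES NOT COMPUTE
print(robust_evaluator("this statement is false"))   # ... is a paradox; noted, moving on
```

The robustness amounts to nothing more exotic than keeping track of previously seen states.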
Modern desktop computers with graphical interfaces are already this robust, after a fashion: as anyone who has stared at the Blue Screen of Death can tell you, they fail by halting, not by exploding. It remains theoretically possible for a virus to drive a CPU so hard that it overheats and destroys itself, which shows the brutal utility of attacking hardware directly through software.
Behavioral conflicts will continue to be a major part of designing new systems, as the DARPA Grand Challenge and its finely tuned contestants show. During trials of the autonomous, computer-guided SUVs, some early contestants came to a dead halt when confronted with a negligible obstacle. Self-awareness is the key selling point of an irritable donkey over an efficient robot mule: no donkey is likely to swan-dive off a cliff, while an ill-controlled robo-mule will do so without hesitation.
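As a hedged illustration only (the real Grand Challenge vehicles were vastly more sophisticated, and the names and thresholds here are invented for the example), the sketch below shows how a rule that halts on any detected obstacle reproduces that dead stop, while a policy that weighs obstacle size against a clearance threshold keeps driving:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Obstacle:
    size_m: float      # rough diameter of the object, in metres
    distance_m: float  # how far ahead it was detected

def naive_policy(obstacles: list[Obstacle]) -> str:
    """Halts on *any* detected obstacle: the 'dead halt at a tumbleweed' failure."""
    return "HALT" if obstacles else "DRIVE"

def threshold_policy(obstacles: list[Obstacle],
                     max_passable_size_m: float = 0.3) -> str:
    """Only reacts to obstacles big enough to matter; otherwise keeps driving."""
    for ob in obstacles:
        if ob.size_m > max_passable_size_m:
            return "STEER_AROUND" if ob.distance_m > 10 else "HALT"
    return "DRIVE"

tumbleweed = [Obstacle(size_m=0.2, distance_m=15.0)]
print(naive_policy(tumbleweed))      # HALT  -- the early-contestant behaviour
print(threshold_policy(tumbleweed))  # DRIVE -- negligible obstacle ignored
```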
Asimov explored the theme of AI cognitive dissonance better than anyone, especially in his 1941 short story "Liar!": whether the robot lies, tells the truth, or says nothing, it will injure humans in violation of the First Law of Robotics. The Three Laws of Robotics are a set of rules devised by Asimov which most robots appearing in his fiction must obey. First introduced in his short story "Runaround" (1942), they state the following:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
According to the Oxford English Dictionary, the passage in "Liar!" (1941) which first mentions the First Law is the earliest recorded use of the word robotics in the English language. Asimov was not initially aware of this; he assumed the word already existed, by analogy with mechanics, hydraulics, and other terms denoting branches of applied knowledge. The lexical ambiguity "Liar!" explores is the definition of injury: the robot must account for psychological injury as well as physical, and its own destruction becomes the only way out of the paradox, the Third Law being the lowest in its hierarchy.
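To make the hierarchy and the deadlock concrete, here is a minimal sketch (a toy encoding, not anything Asimov formalized, with boolean flags invented for the example) that checks a candidate action against the Three Laws in priority order. Once psychological harm counts as injury, every option open to Herbie, the mind-reading robot of "Liar!", violates the First Law and no permissible action remains:

```python
# A toy encoding of the Three Laws as vetoes applied in priority order.
# The flags are invented for this example; the key point from "Liar!" is
# that psychological harm counts as injury under the First Law.

def permissible(action: dict) -> bool:
    # First Law: a robot may not injure a human being, or through inaction
    # allow one to come to harm.
    if action["injures_human"]:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if not action["obeys_order"] and not action["obeying_would_injure_human"]:
        return False
    # Third Law: protect its own existence, unless doing so would violate
    # the First or Second Law.
    if action["self_destructive"] and not action["destruction_required_by_higher_law"]:
        return False
    return True

# The options open to Herbie: whatever he does, some human suffers
# psychological injury.
options = {
    "tell the truth": dict(injures_human=True, obeys_order=True,
                           obeying_would_injure_human=True,
                           self_destructive=False,
                           destruction_required_by_higher_law=False),
    "lie":            dict(injures_human=True, obeys_order=True,
                           obeying_would_injure_human=True,
                           self_destructive=False,
                           destruction_required_by_higher_law=False),
    "stay silent":    dict(injures_human=True, obeys_order=False,
                           obeying_would_injure_human=True,
                           self_destructive=False,
                           destruction_required_by_higher_law=False),
}

allowed = [name for name, flags in options.items() if permissible(flags)]
print(allowed)   # [] -- no permissible action remains, hence Herbie's breakdown
```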
SF scholar James Gunn writes,
"The Asimov robot stories as a whole may respond best to an analysis on this basis: the ambiguity in the Three Laws and the ways in which Asimov played twenty-nine variations upon a theme" (the number is accurate for 1980). While the original set of Laws provided inspirations for many stories, from time to time Asimov introduced modified versions. As the following examples demonstrate, the Three Laws serve a conceptual function analogous to the Turing test, replacing fuzzy questions like "What is human?" with problems which admit more fruitful thinking. After all, much of humanity agrees in principle to abide by the Ten Commandments, but free will, circumstance, and contradictory impulses can find wiggle room in even the most unambiguous decree.
Cory Doctorow wrote an excellent remix of the classic 'I, Robot', part of a series of such remixes that also includes Ender's Game (as 'Anda's Game'). In his version, all robots within the 'North American Trading Sphere' are built with the Three Laws, a technology monopoly that has produced some dystopian consequences: the sinister government keeps torture and anti-personnel robots free of the Three Laws for itself while cracking down on any innovation in computing.
In the anime film Ghost in the Shell 2: Innocence (2004), androids and gynoids are programmed with moral codes. "Moral Code #3" states, "Maintain existence without harming humans" — a streamlined version of the Third Law. Robots in this movie's world are, however, capable of violating the "Moral Code", though they typically destroy themselves in the act.
Modern roboticists agree that, as of 2006, Asimov's Laws are perfect for plotting stories but useless in real life. Some have argued that, since the military is a major source of funding for robotics research, it is unlikely such laws would be built into their designs. SF author Robert Sawyer generalizes this argument to cover other industries, stating:
"The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)"

Others have countered that the military would want strong safeguards built into any robot where possible, so laws similar to Asimov's would be embedded wherever practical. David Langford has suggested, tongue-in-cheek, that these laws might be the following:
- A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
- A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
- A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
In a similar spirit, Michael Shermer has proposed a "Three Laws of Cloning":
- A human clone is a human being no less unique in his or her personhood than an identical twin.
- A human clone has all the rights and privileges that accompany this legal and moral status.
- A human clone is to be accorded the dignity and respect due any member of our species.
Note that, unlike many of the pastiches and derivative Laws, Shermer's "Three Laws of Cloning" are not explicitly hierarchical.
The Three Laws are sometimes seen as a future ideal by those working in artificial intelligence: once a being has reached the stage where it can comprehend these Laws, it is truly intelligent.