"Three Laws of Robotics" by Isaac Asimov
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov developed the Three Laws because he was tired of the science fiction stories of the 1920s and 1930s in which robots, like Frankenstein's creation, turned on their creators and became dangerous monsters. The positronic brains of Asimov's robots were designed around the Three Laws, making it impossible for the robots to function without them. The Three Laws contained enough ambiguities to make for interesting stories, yet in only one story in the collection, "Little Lost Robot", did a robot pose any sort of danger to a human being. In the movie I, Robot the robots run amok and become dangerous monsters despite (or is it because of?) the Three Laws; nothing in Asimov's stories offers a loophole that would allow the behavior exhibited by the robots in the film.
The Three Laws, and the robot behavior that resulted from them, became an implicit feature of many science fiction stories that followed Asimov's popular positronic robot series. Researchers who later designed and built robots have said that applying principles akin to the Three Laws is simply common sense. But today not everyone believes that the Three Laws are sufficient or even desirable for the robots of the future. The Singularity Institute for Artificial Intelligence maintains a website that examines the Three Laws from an academic perspective and gives reasons why they may not be the last word in robot design.
The Zeroth Law
Asimov later added a "Zeroth Law", so named to continue the pattern in which lower-numbered laws supersede higher-numbered ones. It states that a robot must act not merely in the interests of individual humans but in the interest of humanity as a whole:
0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.