ORIGINAL VERSION
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
SECOND VERSION
1. No robot may harm a human being.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
THIRD VERSION
1. No Machine may harm humanity; or, through inaction, allow humanity to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The "Three Laws of Robotics" is Asimov's underlying moral system for the robots in his science fiction work I, Robot. Humans program the robots with three inviolate laws. Throughout the course of the book the Three Laws evolve from the original to the final (third) version. Humans make the first alteration. Robots make the final alteration. Notice that the only difference between the three versions is the First Law. Answer the following questions about Asimov's moral system. Submit your answers online. You may cut and paste into the answer field.
1. Categorize the original version under one of the moral theories discussed in class (deontology, utilitarianism, rights theory, social contract theory, etc.). Justify your choice.
2. Categorize the third version under one of the moral theories discussed in class. Justify your choice.
3. Under which version could a robot directly harm one human to save another?
4. What would a robot do in a