By Luis Caycho
I have always asked myself whether those stories about robots overcoming humankind could become real. Science-fiction books have been my favorites since I was a kid, and I have watched every major movie on the subject. My favorite is "I, Robot," which tells the story of a future society that relies on robots for all of its domestic activities. One of those robots becomes aware of its own self and starts to develop a mind and, more important, a soul. The robot develops a sense of right and wrong, not because of some program installed in its memory or an algorithmic protocol of orders; it begins making decisions not by following instructions or a trial-and-error learning process, but by searching deep in its "heart" for the right thing to do.

The robot's name is Sonny (Susan Calvin is the robopsychologist in the story), and the movie, starring Will Smith, is based on a set of short stories by Isaac Asimov, a prolific writer considered a master of hard science fiction. In his "I, Robot" collection, published in 1950, Asimov set out the Three Laws of Robotics, which he considered his greatest contribution to the humankind of the future: three laws that he thought a future society must build into robots in order to coexist with them in day-to-day life. Those laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws seem really basic, but their logic doesn't have any gaps, at least at first impression. When Sonny (the robot) encounters a conflict with those commands, he starts to develop his artificial intelligence and to become more human. When Sonny is in a situation in which his deactivation would be harmful