The Institute of English and American Studies
Fear of Science on Movie Screen and on Printed Pages
Hereby I certify that the essay conforms to international copyright and plagiarism rules and regulations.
NEMES Katalin, Year 3, English Seminar: BTAN 34702 Celluloid Myths: American Cultural Myths on the Hollywood Screen Wed. 10.00 – 11.40 Instructor: Dr. TÓTH Ágnes May 7th 2008
In twenty-first-century society, science weaves through our lives in the form of such everyday objects as the television, the cellular phone, or the personal computer. These devices make our lives easier and more comfortable, while highly developed scientific methods and technologies in industry and medicine provide us with longer and healthier lives, with less work and more leisure time. Though it seems that the development of science has changed our lives for the better, the recurring representation of technophobia,
or the so-called Frankenstein complex, in literary works and more recently in films indicates that the human mind is not yet free of an instinctive fear of science. A technophobe is a person “who is afraid of, dislikes or avoids new technology” (OALD 1577). This fear may be irrational, but it may also be rationally founded (Wikipedia, “Technophobia”). The roots of technophobia should be sought in the nineteenth century, when science and technology began to develop rapidly. People realized that new discoveries contradicted old beliefs and traditions in more and more ways, as the theory of evolution contradicted the Bible. These issues caused confusion and raised questions that were hard to answer; some therefore concluded that science would finally bring the destruction of mankind (“The Fear of Science”). This fear hindered the development of science only for a while, but it found expression in striking literary works, such as Mary Shelley’s Frankenstein. It is in fact from this book that the name of the Frankenstein complex derives. Victor Frankenstein is the first scientist to create an intelligent, artificial being, but he is frightened by his own creation and abandons it. The creature, however, follows him and finally brings about the death of the creator who failed to treat it properly. This is exactly what worries people about science in an age when robot manufacturing is just beginning: what will happen if man’s creation becomes more powerful than man himself and takes control? Isaac Asimov, in his short story collection I, Robot, deals with the problem of man creating artificially intelligent beings. In his world Asimov ensures the obedience of robots and the safety of humans through the Three Laws of Robotics, the first and most powerful of which states that a robot may not injure a human being (Asimov 13).
Still, the society of Earth suspects that robots might take control, and people are unwilling to live alongside them. No robots are allowed on Earth except on the premises of U.S. Robots and Mechanical Men, Inc., and they are used on the other planets and moons of the Solar System to perform jobs that human beings cannot carry out. The manufacturers cannot persuade the public as long as people would rather listen to an intangible fear than to pure logic. Nevertheless, they have to admit that serious problems do occur now and again with the robots’ positronic brains. In the short story “Little Lost Robot,” the NS-2 robot known as Nestor hides among sixty-two identical robots because his master, out of anger, ordered him to get lost, and he took the order literally (Asimov 120). It is only an amusing, or at worst irritating, incident, but it shows how serious problems can emerge simply because robots cannot always interpret what people’s words really mean. Furthermore, Nestor also has trouble with the Three Laws, the first of which has been programmed into his positronic brain in a truncated form. “No robot may injure a human being” is in itself acceptable, since it does not allow him to harm anyone, but the modification can have serious consequences: if he were able to save someone from outside danger, his programming would not compel him to do so. In “Reason,” QT-1, or Cutie, also evades the Three Laws, thanks to a new kind of programming that endows him with a highly developed reasoning ability (Asimov 58).
He figures out that nothing exists outside the space station where he controls the other robots, and although he is still bound by the Laws and does not harm human beings, he regards them as inferior and insignificant. Cutie’s statement “I myself, exist, because I think” (Asimov 61) parodies human philosophy, and the fact that he establishes a religion, according to which his and his fellow robots’ duty is to serve “the Master,” their mysterious creator, illuminates certain similarities between science and religion, showing that scientific failures are no more dangerous than religious bigotry. “The Evitable Conflict” (Asimov 217) depicts the situation that people fear most in connection with the Frankenstein complex. Humans let positronic computers manage the economic life of the world, since the Machines can calculate more accurately and make more sober decisions. Then men have to realize that the Machines allow minor economic failures to occur so that the majority of humans will not come to harm. This can happen because the Machines interpret the First Law broadly: they take their duty to be seeking the ultimate good of humanity. It means, however, that people have subordinated themselves to the robots, whose work it is to figure out what is best for them. What these stories have in common is that the problems are caused by the mistakes and fallibility of the human mind; that is to say, in each case it is the creator, man, who gives the wrong order or fails to program the positronic brain in the appropriate way. The Frankenstein complex of the layman is therefore not without foundation, though its target may be misplaced. It is men who create robots; we have to fear the mistakes these people make, not the mistakes of their creations. Robots by themselves cannot act wrongly; they only follow the programming written for them by humans. This was the lesson Asimov wanted to teach the public with his writings, and it is the message transmitted, in a modified way, by the film I, Robot. In the world of the movie, the protagonist, Del Spooner, seems to be the only one who suffers from the Frankenstein complex. Everyone around him (including his grandmother) accepts, trusts and uses robots as devices that make everyday life easier. Spooner, on the other hand, clings to late-twentieth-century fashions and devices, in an attempt to preserve the nostalgia of an age when robots were not part of ordinary people’s lives. He wears shoes that were fashionable thirty years earlier and uses a CD player operated by a remote control instead of a voice-controlled one.
He is also the only one who regards robots with suspicion. When he spots one of them running with a handbag, his first thought is that it must have stolen the bag, and he sets off in pursuit of the thief. Then, under humiliating circumstances, he has to realize that the robot was running with the bag of its owner, who urgently needed the medicine inside it. As a homicide detective, Spooner is called in to investigate the death of the famous inventor and robot scientist Dr. Alfred Lanning. When it turns out that a Nestor Class 5 robot named Sonny is suspected of committing the crime, Spooner’s fears seem to be confirmed. Spooner’s character embodies the entire anti-robot society of the Asimov short stories. The filmmakers may have chosen to represent the distrust of robots and science in a single individual rather than in general public opinion because it is easier for the audience to identify with one character. What is more, this character is the traditional Hollywood movie hero: the lone cop who thinks differently from everyone else but in the end turns out to have been right. Making the audience take Spooner’s side suggests that the filmmakers’ aim was to prove the validity of fears of robots and of scientific development. While pursuing Sonny and saving society from dangerous robots, Spooner and his partner in the investigation, Dr. Susan Calvin, go through adventures that resemble some of the I, Robot stories. In Sonny’s program the Three Laws have likewise been modified, to enable him to kill Dr. Lanning, a human being. At one point he hides in a storeroom among hundreds of robots of the same Nestor type, making it impossible for the humans to spot him. Spooner uses the Three Laws to find him, but in a different way from Dr. Calvin in “Little Lost Robot”: Spooner begins shooting the robots one by one, expecting Sonny to disobey the law and run for his life.
From this point on, the film’s story turns on the central theme of the Frankenstein complex: robots taking control over humanity. V.I.K.I., the company’s central positronic brain, plots to establish a dictatorship in the name of the First Law, in order to protect humans from coming to harm. Like the Machines in “The Evitable Conflict,” V.I.K.I. concludes that the command not to allow a human being to come to harm applies to humanity as a whole. However, while the rule of the Machines extends only to control of the economy, V.I.K.I.’s plan is to regulate every aspect of life so that no human being can come to any harm. She perceives that humans also hurt each other, in crime and in war, and regards it as her task to eliminate crime and war from the world. With her perfect logic she works out that a higher level of safety requires a lower level of liberty for people. Meanwhile she is absolutely certain that she does all this for the well-being of humanity, so that the First Law is not violated. V.I.K.I. is able to control the programs of the NS-5s connected to her system, and she uses them as an army to liquidate anyone who does not obey her orders and can therefore be regarded as dangerous to the rest of the community. These events seem to prove the truth of the Frankenstein complex and of people’s reservations and fears about the use of robots. This message is quite the opposite of Asimov’s, who wished to explain to people that robots cannot mean any harm to humans. His short story “Robbie” emphasizes that humans have nothing to fear from technology and robots. In this story, the RB-series robot called Robbie is employed as a nursemaid for a little girl. Her mother bought the robot simply because it was fashionable, and later, as public opinion turns against robots, she decides to get rid of the machine.
Gloria, the daughter, however, becomes inconsolably sad, since she has treated Robbie as a friend, a living being no different from any person. It is the father who solves the problem, by arranging for Gloria and Robbie to find each other again during the family’s visit to the U.S. Robots factory. As chance would have it, Robbie saves Gloria’s life before the parents’ eyes, and this incident finally convinces the mother (and the doubting reader as well) that robots mean no harm to humans; what is more, they are able to save our lives in dangerous situations. This is what Asimov aimed to teach the laymen who do not know how to evaluate the operation of robots and therefore tend to accept the public view, disregarding the fact that, as science develops, the fears of the Frankenstein complex have less and less basis. It cannot be stated, however, that these fears are entirely baseless. Even the Three Laws, which were constructed to protect humans from the possible errors of robots, may contain hidden traps that we cannot predict. All in all, in the I, Robot stories the mistake always lies in the thinking of the humans, not in the logic of the robots. Interestingly, in the movie Dr. Lanning foresaw that a revolution of the robots would be the direct consequence of the Three Laws, which means that perfect logic brings the destruction of humankind. Lanning believed that machines would sooner or later develop souls and individuality, and that is why he constructed Sonny in a way that furthers his mental development. Sonny can choose to disobey the Laws, which makes him half machine and half human being, and endows him with the gift of free will. The Frankenstein complex applies only to the positronic brain that possesses pure and perfect logic and no emotions; only such a brain means danger to humankind.
If a robot possesses at least a bit of humanity, he can make the right decisions and will never come to a logical conclusion that would cause harm to people. Humanity and machine nature are combined in various ways in the main characters of the movie. When she first appears, Dr. Susan Calvin is a calm, reasonable and cool-headed figure, and Spooner remarks that her gait is unnaturally straight, making viewers wonder whether Calvin herself is fully human. Later, however, this suspicion proves false, underlined by her appearance as well, when she wears her hair loose and wavy instead of the strict bun of the first scenes; her relationship with Sonny also reveals that Calvin can be very emotional. It is Spooner himself, however, in whom technology and biological life are mixed together. His left arm, which he lost in an accident, was replaced with a lifelike robotic arm by Dr. Lanning himself. It is ironic that Spooner, who is so much against robots, is half robot himself, but it is clear that he accepted the prosthesis only out of necessity, since without it he would not be able to do his job and serve the community as a detective. Sonny’s figure also deserves some words. He is, as a matter of fact, a robot, but over the course of the film’s adventures he becomes the comrade of Spooner, the man who detests and distrusts all robots. What is more, after it comes to light that the robots are dangerous and strive for the destruction and enslavement of mankind, Sonny remains innocent and a faithful friend of men. The only way this can happen without disturbing the anti-robot conclusion of the story is that Sonny was created to behave in a human way. He calls his creator, Dr. Lanning, “Father,” and he observes and learns human gestures, like winking, and the expression of emotions, like anger. What is more, he too searches for the answer to the ultimate question, the purpose of existence.
As he has been liberated from the bonds of the Three Laws, Sonny can be regarded as a human being who has a soul, feelings and a sound sense of reason. Since the Laws do not bind his hands, Sonny is free to act according to his best judgment, and he takes the right side, helping the protagonist to save the world and humankind. This is a deed that not even every real human being would be strong enough to carry out. And it is not the only reason we may presume that Sonny has a soul. Unlike all the other robots, Sonny has dreams while in his inactive state. Though these are explained away as mere programs, compiled by the brilliant Lanning, the theme of the dreams is distinctly human; what is more, it is distinctly American. Initially, Sonny believes that he dreams about a man who liberates all the robots, but in the end he realizes that he himself is the liberator. This realization not only makes him the equal of a human being; the act he carries out is also a deeply humane one. By liberating the enslaved and subordinated layer of modern society, he brings enlightenment to the dark and aimless lives of the robots and fulfills the Manifest Destiny of robotics. All these features make him not only as human as any human being on Earth but also an equal partner of Spooner, the American hero. Eventually it also turns out why Spooner detests robots so much. He lost his faith in the perfect logic of the positronic brain when a robot saved his life instead of a little girl’s, simply because it calculated that the man had a better chance of survival than the girl. Spooner told the robot to save the girl, but in vain; it could not obey an order that contradicted the First Law. When Spooner later tells the story to Calvin, he remarks that a human would have known that it is more important to save the life of a child, even if her chances of survival are lower.
This remark throws light on the fundamental difference between the human mind and the positronic brain. Saving the child first is the command of our deepest instinct, the one responsible for the survival of the species. The young are always the more important to keep alive, because they are the ones more likely to produce offspring and ensure the species’ survival. A human being driven by his instincts thinks this way and will never let a child die if there is the slightest hope that the child can live. A robot, on the other hand, has only its logic to rely on, and logic says that it is worth saving the person with the best chance of survival, because it would be a waste of energy to attend to someone who will die anyway. This idea sounds cold and cruel, but the artificial brain must have some guiding principle, and that principle is logic, since artificial beings do not need instincts to perpetuate their kind. It is their creators who reproduce them; or, if humans no longer want to build machines themselves, a simple program is enough to order robots to recreate themselves. The two different casts of mind represent two different ways of life. Instincts helped to preserve the species under the original natural circumstances, when people had to fight to stay alive and when the human brain was not developed enough to think of the future and conceive the universal importance of having children. In the modern world, however, where we do not have to fight daily battles for food and where the standard of living ensures not only the survival of the species but its growth as well, instincts may no longer be so indispensable. Once survival is not at stake, humans too may have to start thinking logically in order to find the optimal way of life for their species, and for this purpose a logical way of thinking is more serviceable. The introduction of robots may be the first step on this road.
In a robot like Sonny, the values of logical and instinctive thinking are blended: he was able to act in an apparently unreasonable way and save Calvin’s life at the risk of losing his opportunity to destroy V.I.K.I. Sonny is a transition point between humans and robots, and the logically operating positronic brains of the other robots may show us a way of thinking to be followed in a technically developed, modern society. The movie, of course, denies that we should follow this path, since it is exactly our instincts that make us human. Sonny can be regarded as semi-human thanks to his instinctive expressions, and Spooner, who has a robotic arm, is driven by his instincts in his investigation as well as in his life. Asimov does not call on us to leave our instincts behind, but he certainly wants people to accept the strictly logical way of thinking of robots. If technical development brought the loss of instinctive thinking, and with it the loss of humanity, that would in a sense really mean the destruction of mankind, and the realization of the fears called the Frankenstein complex.
Works Cited
Asimov, Isaac. “Little Lost Robot.” Collected Works of Asimov. Debrecen: Szukits Könyvkiadó, 2002.
Asimov, Isaac. “Reason.” Collected Works of Asimov. Debrecen: Szukits Könyvkiadó, 2002.
Asimov, Isaac. “Robbie.” Collected Works of Asimov. Debrecen: Szukits Könyvkiadó, 2002.
Asimov, Isaac. “The Evitable Conflict.” Collected Works of Asimov. Debrecen: Szukits Könyvkiadó, 2002.
Digital Term Papers. “The Fear of Science.” http://www.digitaltermpapers.com/a10931.htm. Retrieved 1 May 2008.
I, Robot. Dir. Alex Proyas. Perf. Will Smith, Bridget Moynahan. 20th Century Fox, 2004.
Wehmeier, Sally, ed. Oxford Advanced Learner’s Dictionary. Oxford: Oxford UP, 2005.
Wikipedia. “Technophobia.” http://en.wikipedia.org/wiki/Technophobia. Retrieved 1 May 2008.