The Chinese speaker receives an answer, in their native language, to the question they asked. Because the answers they receive are well formed and to the point, they naturally come to believe that the person responding on the other side understands Chinese very well. In Searle's thought experiment, however, only one person speaks Chinese; the individual in the other room speaks ONLY English. I will discuss Searle's thought experiment with Searle himself as the English speaker in the scenario. The room Searle occupies contains a variety of books that he uses to respond to the questions the native Chinese speaker asks of him. The letters he receives contain only symbols, and he must find the matching symbols in the books available to him in order to respond properly. Searle does not know what the symbols mean; the books serve him the way a program serves a computer, giving him instructions on how to manipulate the symbols.
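The lookup-and-respond procedure described above can be sketched as a short program. This is only an illustration of the thought experiment, not anything Searle wrote: the rule-book entries here are hypothetical stand-ins for his books, and the point is that meaning appears nowhere in the code.

```python
# A minimal sketch of the Chinese Room as a lookup program.
# The "room" maps incoming symbol strings to outgoing symbol strings
# without representing their meaning anywhere.

RULE_BOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会说中文吗": "会",    # "Do you speak Chinese?" -> "Yes, I do"
}

def chinese_room(symbols: str) -> str:
    """Return the response the rule book pairs with the input symbols.

    The function never interprets the symbols; it only matches their
    shapes, exactly as Searle matches characters against his books.
    """
    # Fallback reply: "please say that again"
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # a fluent-looking reply, with no understanding
```

From the outside, the replies look competent; on the inside there is only pattern matching, which is exactly the gap Searle's argument exploits.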
Genuine ideas and understanding require more than computation: the software must be able to comprehend the semantics of a sentence. The Turing test helps determine whether a computer program understands what is being asked of it or the information it is giving. The program passes the test when the evaluator cannot distinguish between the computer and a person. The point of the test is not to prove that a computer can give the right answers, but that it can give human-like responses. I believe that Searle's argument sets a strict standard, which is necessary for determining whether a computer program has strong AI. When and if a strong AI program arrives, it will be able to pass the Turing test and convince us that it has mental capacities equivalent to a person's. For strong AI to be certain, the program must be proved to understand Chinese, that is, the semantics of the Chinese language; but as it turns out, the program cannot understand the language and can only manipulate sentences by following prior patterns. This is supported by Searle's Chinese Room thought experiment, because the program cannot derive the semantics of a sentence from its syntax alone. Another example of this is the Siri program in the
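The evaluator's role described above can be sketched as a scoring loop. This is a hedged illustration of the test's pass criterion, not Turing's own formulation: the function names, the stand-in respondents, and the chance-level threshold are all assumptions made for the sketch.

```python
import random

def run_trials(judge, human, machine, n=1000, seed=0):
    """Score a judge who must label a hidden respondent human or machine.

    The machine "passes" when the judge's accuracy over many trials is
    no better than chance, i.e. the judge cannot tell the two apart.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        is_machine = rng.random() < 0.5          # hide a respondent at random
        respondent = machine if is_machine else human
        guess_machine = judge(respondent("question"))
        correct += (guess_machine == is_machine)
    return correct / n

# Stand-in respondents whose replies are indistinguishable by construction.
human = lambda q: "a human-like reply"
machine = lambda q: "a human-like reply"

# A judge who sees identical replies is forced to guess.
judge_rng = random.Random(1)
judge = lambda reply: judge_rng.random() < 0.5

accuracy = run_trials(judge, human, machine)
print(accuracy)  # hovers near 0.5: the machine passes
```

Note what the loop measures: only the judge's inability to discriminate outputs. Nothing in it tests for understanding, which is precisely the limitation the Chinese Room argument presses against the Turing test.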