In 1980, John Searle sparked a widespread dispute with his paper ‘Minds, Brains, and Programs’ (Searle, 1980). The paper presented a thought experiment arguing against the possibility that computers could ever have genuine artificial intelligence (AI); in essence, a denial that machines will ever be able to think. Searle’s argument rested on two key claims, that:
“brains cause minds and syntax doesn’t suffice for semantics” (Searle, 1980, p.417).
Syntax here refers to the formal language in which a programme is written: the combination of symbols and rules, illegible to the untrained eye, that provides the instructions governing a programme running on a computer. Semantics refers to meaning, the understanding that lies behind the use of language. Searle’s claim was that it is the brain that gives us our minds and the intelligence we have, and that no combination of programming languages is sufficient to give a machine meaning and, with it, understanding. On his view, the apparent understanding of a computer is nothing more than the execution of programmed code, which allows the machine to extract answers from the information available to it. He did not deny that computers could be programmed to act as if they understand and have meaning; what he rejected was the stronger position, which he quoted as the claim that:
“the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (Searle, 1980, p. 417).
Searle’s argument was that we may be able to create machines with ‘weak AI’ – that is, we can programme a machine to behave as if it were thinking, to simulate thought and produce a perceptible understanding – but that the claim of ‘strong AI’ (that machines running purely on syntax can have cognitive states just as humans do) is false.
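Searle’s point about syntax and semantics can be pictured with a short sketch. The following Python fragment is not from Searle’s paper; the rule book and function names are illustrative inventions. It answers questions written in Chinese by pure string lookup, so to an outside observer it behaves as if it understood the questions, yet nothing in the code attaches any meaning to the symbols it shuffles.

```python
# A minimal sketch of purely syntactic symbol manipulation (illustrative only;
# RULE_BOOK and respond are hypothetical names, not anything from Searle 1980).

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I am fine, thanks."
    "今天天气好吗?": "今天天气很好.",   # "Is the weather good today?" -> "It is."
}

def respond(symbols: str) -> str:
    """Return whatever output string the rule book pairs with the input string.

    The lookup is purely formal: it matches the shapes of symbols, never their
    meanings, which is the sense in which syntax falls short of semantics.
    """
    return RULE_BOOK.get(symbols, "对不起, 我不明白.")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(respond("你好吗?"))  # A fluent reply, with no understanding behind it.
```

However far such a rule book were extended, on Searle’s account it would remain formal symbol manipulation: a simulation of understanding, weak AI at best, rather than understanding itself.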
References:
Chalmers, D. 1992. ‘Subsymbolic Computation and the Chinese Room’. In: J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Hillsdale, NJ: Lawrence Erlbaum.
Harnad, S. 1989. ‘Minds, Machines and Searle’. Journal of Experimental and Theoretical Artificial Intelligence, 1, pp. 5-25.
Harnad, S. 1993. ‘Grounding Symbols in the Analog World with Neural Nets’. Think, 2(1), pp. 12-78 (special issue on “Connectionism versus Symbolism”, D.M.W. Powers & P.A. Flach, eds.).
Hofstadter, D. 1980. ‘Reductionism and Religion’. Behavioral and Brain Sciences, 3(3), pp. 433-434.
Reynolds, G.H. & Kates, D.B. 1995. ‘The Second Amendment and States’ Rights: A Thought Experiment’. William and Mary Law Review, 36, pp. 1737-1773.
Searle, J. 1980. ‘Minds, Brains, and Programs’. Behavioral and Brain Sciences, 3, pp. 417-424.
Searle, J. 1982. ‘The Myth of the Computer: An Exchange’. New York Review of Books, 4, pp. 459-467.
Simon, H.A. & Eisenstadt, S.A. 2002. ‘A Chinese Room that Understands’. In: J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon, pp. 95-108.