Strong artificial intelligence offers an alternative view to the theories of weak artificial intelligence. This approach redefines intelligence to include more than the ability to solve complex tasks, or merely to convince observers that such a quality exists within a system, as in the Turing Test. Strong AI theory rests upon the principle that complex machine systems such as neural networks are capable of establishing connections between different sets of data which were not previously programmed into the system; in other words, they are capable of learning. A system that begins to learn and continues to do so, building a knowledge base as it goes, is theorized to become increasingly capable of exhibiting intelligent behavior (Gackenbach, Guthrie, and Karpen 1998).
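To make concrete the idea of connections that are learned rather than programmed, the following minimal sketch (an illustration added here, not drawn from the sources cited above) uses a simple Hebbian associator: the connection matrix begins empty, and the link between a cue pattern and its associate is formed entirely from exposure to the paired data. The particular patterns, and the use of Python with NumPy, are arbitrary choices for demonstration.

```python
# Minimal illustrative sketch of Hebbian association (hypothetical example):
# no association is programmed in advance; the connections are formed
# purely from exposure to the paired patterns.
import numpy as np

# Two paired patterns the system has never been told are related.
cue = np.array([1.0, -1.0, 1.0, -1.0])   # e.g. a stimulus
target = np.array([1.0, 1.0, -1.0])      # e.g. its associate

# Connections begin with no structure at all.
W = np.zeros((target.size, cue.size))

# Hebbian learning: strengthen connections between co-active units.
W += np.outer(target, cue)

# Recall: presenting the cue alone now reproduces the learned associate.
recalled = np.sign(W @ cue)
print(recalled)   # [ 1.  1. -1.]  -- matches `target`
```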
This is not intelligence as defined by Turing; instead it is the very real ability of a system to solve problems of computation or reasoning through trial and error. If one method of problem solving does not produce the desired result, a system with an appropriate number of connections can explore different possibilities, much as a human mind would analyze a problem (Minsky 1982). Much debate has centered on whether an intelligent, much less a conscious, network would truly emerge from such learning. Many argue that consciousness does not simply arise out of intelligent behavior (Gackenbach et al. 1998). Instead, resting on dualistic principles, they hold that self-awareness cannot be duplicated merely by arranging the appropriate pieces of a network and letting them function; some property of it is intangible and thus unable to be "built in."
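The trial-and-error exploration described above can likewise be illustrated with a small sketch, again an addition for demonstration rather than anything taken from the cited sources: the solver tries one path toward a goal, and when that line of attack fails it abandons it and explores an alternative. The toy maze and the search order are arbitrary.

```python
# Minimal illustrative sketch of trial-and-error problem solving
# (hypothetical example): failed attempts are abandoned and other
# possibilities are explored until the goal is reached.
GRID = [
    "S.#",
    ".##",
    "..G",
]

def solve(r, c, visited):
    """Return a path of coordinates from (r, c) to the goal, or None."""
    if not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])):
        return None                  # ran off the grid: this attempt fails
    if GRID[r][c] == "#" or (r, c) in visited:
        return None                  # blocked or already tried: fail
    if GRID[r][c] == "G":
        return [(r, c)]              # desired result reached
    visited.add((r, c))
    for dr, dc in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
        rest = solve(r + dr, c + dc, visited)   # try another possibility
        if rest is not None:
            return [(r, c)] + rest
    return None                      # every possibility from here failed

print(solve(0, 0, set()))   # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```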
The answer offered by the strong AI position holds that consciousness is an "emergent [property] of any computational system with sufficient levels of self-modification" (Hunt 1995, 59). That is, consciousness and self-awareness are able to be (or will be in the