Once we are able to create machines smarter than humans, those machines could do likewise, creating machines smarter than themselves, and much faster than we did.
The problem of unfriendly artificial intelligence, the threat of losing control of the machines we have built, remains unsolved.
Artificial autonomous agents are, in one sense, the next stage in the development of technology: agents that perceive, decide, and act on their own.
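A minimal sketch of that perceive-decide-act cycle, with the point being that no human sits inside the loop; the environment, class, and method names here are illustrative assumptions, not drawn from the source:

    # Hypothetical autonomous agent running a perceive-decide-act loop.
    class Agent:
        def perceive(self, world):
            # Sense the relevant part of the environment.
            return world["temperature"]

        def decide(self, observation):
            # Choose an action on the agent's own authority,
            # with no human in the loop.
            return "cool" if observation > 22.0 else "heat"

        def act(self, world, action):
            # Change the environment.
            world["temperature"] += -1.0 if action == "cool" else 1.0

    world = {"temperature": 25.0}
    agent = Agent()
    for _ in range(5):  # a short run of the autonomy loop
        obs = agent.perceive(world)
        action = agent.decide(obs)
        agent.act(world, action)
        print(obs, action)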
A machine (such as your brain) cannot predict, and thus cannot control, a machine of greater algorithmic complexity; algorithmic complexity places a bound on a formal measure of intelligence. As a consequence, AI development becomes an evolutionary process: each generation experimentally creates modified versions of itself without knowing which versions will be smarter.
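To make the bound concrete, here is a minimal sketch, assuming the "algorithmic complexity" intended is Kolmogorov complexity (the source does not define its measure): for a fixed universal machine $U$, the complexity of a description $x$ is $K_U(x) = \min\{\,|p| : U(p) = x\,\}$, the length of the shortest program that makes $U$ output $x$. On this reading, a predictor $A$ that exactly simulates a machine $B$ must encode a description of $B$, so $K(A) \ge K(B) - c$ for some constant $c$ independent of $B$; a brain of bounded complexity therefore cannot fully predict a strictly more complex machine, which is why each generation can only experiment with, rather than prove anything about, its successors.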
Many people fear that in developing AI we may be sowing the seeds of our own destruction: physical, political, economic, and moral destruction.
Physical destruction could follow from relying on AI in life-critical systems, for computer technology (and AI in particular) cannot in principle achieve the reliability required for uses on which human lives depend.
Political destruction could result from the exploitation of AI (and highly centralized telecommunications) by a totalitarian state. If AI research developed programs capable of understanding text and speech, interpreting images, and updating memory, the amount of information about individuals potentially available to a government would be enormous.
Moral destruction: could we become less human, indeed less than human, as a result of advances in AI? This might happen if people came to believe that purpose, choice, hope, and responsibility are all sentimental illusions. Those who believe they have no choice and no autonomy are unlikely to try to exercise either. But this need not happen, for our goals and beliefs, in a word our subjectivity, are not threatened by AI. The philosophical implications of AI are the reverse of what they are commonly assumed to be: properly understood, AI is not dehumanizing.