The question, then, is how we should determine what values, moral system, or end-goal to build into a superintelligent machine. It is important that we make no mistake in this choice, for an error could confer on the superintelligent machine a decisive strategic advantage over humans, allowing it to outsmart and take control of us. Yet an error-free choice does not seem possible, because we could be wrong about morality, about what is good for us, and about what we truly want. First, choosing the final goal of a superintelligence that will shape our future is a task in which we have no experience; the actual impact of choices based on our predictions and conceptions of the future could differ vastly from what in fact comes to pass. Second, morality appears to be an evolving concept: people's moral beliefs have changed over time and vary across cultures, so we risk choosing wrongly because we cannot be sure we do not harbor moral misconceptions. Moreover, fixing definite values, a moral system, and a specific end-goal would lock superintelligence out of further ethical and moral progress. It therefore seems that we cannot settle on the values, moral system, and end-goal that should be built into superintelligent machines, and because our choices risk conferring a decisive strategic advantage on the superintelligence, they pose existential risks to humanity.