1)
Of the four types of definitions provided, the type most interesting to me would have to be thinking humanly. Since I was a kid, I have been fascinated by the idea that artificial intelligence could progress to the level of having its own "identity," where it is able to think as humans do, make decisions as humans do, and even learn as humans do. To this day I still think the same way. This relates to my fascination with what is perceived to be fictional becoming reality. Creating a machine that essentially clones the functions of the human brain is definitely a topic worth getting excited about. I feel as though the other three types of definition pale in comparison to this literal meaning. To some extent, research has been done and results have been reached in each of the other three categories. Thinking rationally, acting humanly, and acting rationally can all be done with interesting algorithms and a bunch of conditional statements that programmers set beforehand as a guideline for how the AI should respond. For instance, there is a facial recognition program built into a mechanical "doll" that responds to basic verbal commands and can even read the emotions on one's face and respond accordingly. I would consider a video-game AI to be thinking rationally when it weighs the percentage chance of success for each action it could take. But allowing an AI to grow on its own, ever learning and ever questioning, would be one hell of a feature in the advancement of technology. Of course, these effects could get way out of hand, but we'll leave that discussion for some other time.
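The game-AI idea above can be sketched in a few lines: the agent simply picks whichever action has the highest estimated chance of success. This is a minimal, hypothetical illustration; the actions and percentages here are made up, not drawn from any particular game.

```python
# A minimal sketch of "thinking rationally" as described above:
# pick the action with the highest estimated success rate.
# The actions and their percentages are hypothetical examples.

def choose_action(success_rates):
    """Return the action whose estimated success rate is highest."""
    return max(success_rates, key=success_rates.get)

# Hypothetical estimates a game AI might maintain for each move.
rates = {"attack": 0.35, "defend": 0.55, "retreat": 0.80}
print(choose_action(rates))  # "retreat" has the highest estimate
```

In a real game, the programmer would set these percentages (or the rules that compute them) beforehand, which is exactly why this kind of rationality still feels pre-scripted compared to genuine human-like learning.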
2)
Responding to 1.11) Generally speaking, computers do only what the programmer allows them to do. More specifically, though, the algorithms and functions can get so complicated that the programmers themselves would not be able to carry out their own instructions with that level of correctness. Looking at this, I feel that the latter statement is definitely true because the computer reflects