April 23, 2013
Moral Machines – Ethics
The field of computing and machinery is advancing so quickly that we need to invest in applying ethics to the world of machines. But programming a machine to act ethically raises many problems. One objection holds that ethical judgments are merely emotional expressions, and machines cannot have emotions. Computers also lack the practical wisdom that Aristotle thought we exercise when applying our virtues. And what of machines as lesser ethical agents, whose ethics are inherited from their human developers? Can a machine represent ethical knowledge explicitly and then act well on the basis of that knowledge?
It would have to be able to make plausible ethical judgments and justify them. An explicit ethical agent that was autonomous, able to handle real-life situations involving unpredictable sequences of events, would be most impressive. We typically regard humans as having consciousness, intentionality, and free will, and no machine has all three, so it is unclear how a machine could ever become a full ethical agent. Still, machines can have some understanding of ethics, and that understanding matters: as future machines gain greater control and autonomy, they will need it. More powerful machines need more powerful machine ethics. Programming, or teaching, a machine to act ethically will also help us better understand ethics ourselves. We have only a limited understanding of what a proper ethical theory is: not only do people disagree on the subject, but individuals can hold conflicting ethical perceptions and beliefs. Programming a computer to be ethical is far more difficult than programming a computer to play world-champion chess, an accomplishment that took forty years. Chess is a simple domain with well-defined legal moves; ethics operates in a complex domain whose "legal moves" are ill-defined. We also need a better understanding of learning, common sense, and world knowledge than we have now. The deepest problems in
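The contrast between chess and ethics can be made concrete with a toy sketch. Everything here is hypothetical illustration, not any real machine-ethics system: the chess rule can be written down completely, while the "ethical legality" check immediately runs out of well-defined rules.

```python
# Toy contrast: a well-defined domain (chess moves) versus an
# ill-defined one (ethical choices). Names and rules are illustrative
# placeholders, not an implementation of an actual ethical theory.

def legal_rook_moves(file, rank):
    """Chess legality is fully specifiable: a rook moves along its
    file or its rank. The complete rule fits in a few lines."""
    moves = []
    for f in range(8):
        if f != file:
            moves.append((f, rank))
    for r in range(8):
        if r != rank:
            moves.append((file, r))
    return moves

def is_ethical(action):
    """There is no comparably complete rule set for ethics. The
    predicates below are placeholders: deciding what counts as
    'harm' or 'consent' is exactly the hard, ill-defined part."""
    if action.get("causes_harm") and not action.get("consented"):
        return False  # one plausible rule, far from a full theory
    return True  # "not obviously wrong" is the best this sketch can say
```

The asymmetry is the point: `legal_rook_moves` is exhaustive and correct by construction, while `is_ethical` only encodes a single contestable rule and stays silent on everything else.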