The ethical dilemma is best presented with the following situation: “If a person falls onto the road in front of a fast-moving self-driving car, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do?” (Lubin). If the car saves the passenger, that raises the question of whether the “correct life” was taken. If it saves the pedestrian, that creates a serious disconnect between the car and its owner; no car owner wants to interact daily with a vehicle that might kill them. The fact that a purchased vehicle might kill its owner in certain situations would also lead to a significant drop in sales (Gent). When people were surveyed about this situation, they responded in a utilitarian manner: do whatever saves the most people. In contrast, when the same group was asked about the situation involving people they were related to, they responded that they would want to save themselves and their families (Gent). A set of well-established rules should take human choice in the moment out of the picture. The Robot Laws of Morality would be a good starting point for a plan to program robot ethics: “Law 0: A robot may not injure humanity or, through inaction, allow humanity to come to harm” (Trappl).
Humans committing crimes is nothing new and difficult to prevent, so the correct laws need to be put in place to diminish abuse of the cars’ new technological systems. The line between human and machine may be blurred, but confronting that blurring will be an important step in establishing laws for objects not operated by humans. Lastly, the inevitable desire for personalization may finally quiet the long-lasting debate about robot