Introduction
Autonomous weapons or robots designed to kill humans are no longer a far-fetched idea from science fiction. Semi-autonomous missile systems that automatically target and destroy hostile ships with minimal human intervention are already in development [1], and automatic weapons defense systems are already in use by numerous countries [2]. Combined with advanced artificial intelligence, autonomous weapons could inflict tremendous damage in warfare with little or no human oversight; hence, it is important to discuss the regulation of such systems before they enter widespread use. This essay will explore the debate over whether the military use of fully autonomous weapons and robots should be banned. It will then discuss the ethical implications of accountability for these autonomous systems.
Findings
As the deployment of autonomous weapons could reshape the landscape of future warfare, there are numerous points of contention over whether the use of artificial intelligence in weapons should be banned. One such issue is the concern that artificial intelligence cannot make decisions the way humans can, as machines do not have morals. According to Johnson [3], weapons that can target and kill humans autonomously should be banned, as machines are not capable of mimicking morals and emotions. Human decision-making takes moral and emotional considerations into account. For example, a human will feel grief if a comrade is killed, and a human who violates military law can be court-martialed and disgraced. Johnson believes that a machine can only approximate this kind of emotional or moral behavior imperfectly. Thus, Johnson concludes that because artificial intelligences have neither morals nor emotions, they cannot be trusted to decide whether or not to kill a target, and therefore the use of AI in warfare should be banned.
Docherty [2] of Human Rights Watch echoes the sentiment held by Johnson, further stating that human judgement is often necessary to correctly assess whether a target is hostile, because artificial intelligence cannot accurately discern the intentions of human beings. Docherty maintains that an artificial intelligence cannot follow international humanitarian law, as compliance requires complex and subjective judgement. Some may argue that artificial intelligence is capable of a truly unbiased assessment of a situation, and hence that decisions made by robots may actually be superior to those made by humans. Docherty refutes this by stating that emotion is necessary for good judgement: emotion serves as a protection against killing innocent civilians, and a machine that does not feel emotion would not hesitate to kill an enemy, armed or not. In summary, Docherty believes that intelligent machines are unable to comply with existing humanitarian law, as they cannot fully emulate the facets of human judgement required to do so, and hence their military use should be restricted.
Bailey [4] disagrees with Docherty and Johnson. Bailey argues that morality in robots does not really matter as long as the robots are shown, in practice, to consistently protect civilians or neutralize targets more effectively than humans. Bailey agrees that an artificial intelligence unable to accurately judge the hostility of a target should not be used, but further states that there is no fundamental barrier preventing an AI from approximating human judgement with high accuracy. As long as an artificial intelligence meets this requirement, there should be no qualms about deploying intelligent machines in military contexts.
Bailey's argument undermines the morality and decision-making objections raised by Docherty and Johnson. The purpose of morality in warfare is to mitigate the effects of war on innocent individuals and to ensure that those engaged in warfare suffer minimally. If an artificial intelligence can protect individuals more effectively than humans in general, it does not matter whether the artificial intelligence possesses morality. Therefore, with regard to the issue of morality in artificial intelligence, it can be concluded that an AI's lack of morals should not be a factor in the debate on banning autonomous weapons.
In addition to the morality of autonomous weapons and robots, a major point of contention is accountability for the actions taken by an autonomous machine.
There are situations in which accountability for an autonomous weapon's actions is unclear. According to Docherty [2], the purpose of accountability is twofold: preventing harm to innocent people and allowing victims to seek justice. If an artificial intelligence makes a mistake, punishing the machine itself would serve neither purpose, yet punishing the programmer, the manufacturer, or the commander who deployed it does not make much sense either. Docherty argues that a military commander lacks the control needed to prevent an autonomous robot from harming innocent people, precisely because the robot is autonomous. In addition, Docherty states that it is unfair to punish either the programmer or the manufacturer, because it is infeasible to enumerate in advance every decision an artificial intelligence might arrive at. Thus, no party can be held accountable for the actions of an autonomous weapon in a satisfactory manner, and hence such weapons should not be utilized.
Schulzke [5] and Bailey [4] propose a model for distributing responsibility for an autonomous weapon or robot. The military hierarchy guarantees that some individual will oversee an autonomous weapon, and military commanders already manage subordinates who act autonomously. An autonomous weapon can simply be treated as another type of subordinate. Therefore, when an autonomous weapon makes a mistake, Schulzke and Bailey argue that the individual who directly oversees its actions should be held accountable. This point is explored further in the next section on ethical implications.
Ethical implications
One of the major ethical issues in deploying autonomous weapons is accountability. According to Docherty [2], there seems to be no good way to hold a person accountable for the actions of an artificial intelligence, implying that there is no way for a victim to seek recourse. Schulzke [5] and Bailey [4] argue that the military's chain of command already provides a clear line of accountability: an overseer can be held responsible for the actions of their human subordinates even though those subordinates act autonomously. By analogy, Schulzke and Bailey assert that an overseer can also be held accountable for an autonomous weapon, which can simply be regarded as a robotic subordinate.
However, Schulzke and Bailey's analogy does not hold up. In the military, a supervisor is usually also responsible for training subordinates in proper decision-making, but this is not the case for autonomous weaponry. The supervisor of an autonomous weapon is not exposed to its decision-making process and has no way to understand or correct it. It is unfair to punish a person who lacks the control needed to prevent a harmful outcome. Instead, because the decision-making process of an artificial intelligence is exposed only to the programmer, the programmer should bear the responsibility: programmers are the only ones with direct control over an autonomous weapon's decision-making process. Hence, the programmer should be held accountable for any errors an autonomous robot may commit.
According to Rule 4.1 of the HKIE Rules of Conduct, it is the responsibility of the programmer to protect the public and ensure that the programs they develop are safe [6]. If a bug in an artificial intelligence causes an autonomous weapon to commit an error, and the programmer did not exert reasonable effort to locate the bug, then he or she should be held accountable. Admittedly, if a programmer exerted reasonable effort to find bugs and still could not detect that particular bug, the programmer should not be held accountable. In that case, there seems to be no party that can be held responsible for the actions of the artificial intelligence. Without a responsible party, regulating the use of autonomous weapons becomes far more difficult, and allowing them would endanger the welfare of the public.
Conclusion
After a thorough analysis of the various perspectives on banning autonomous weapons, the argument for a ban appears stronger. Docherty and Johnson argued that robots are not able to act morally. Bailey soundly refutes that argument by demonstrating that morality is not the decisive issue in the case of autonomous weaponry. However, the issue of accountability remains. Docherty is concerned that in some situations no single individual can be held accountable for an autonomous weapon's actions. This concern is not sufficiently addressed by Schulzke and Bailey's proposed model of accountability, under which the weapon's direct supervisor is held accountable for its actions: the supervisor's lack of control over the weapon means it makes no sense to punish them. In some situations the programmer can be held accountable, but this does not apply to all scenarios. Due to this accountability gap, autonomous weapons should not be deployed until a clear and convincing chain of responsibility can be established.