Lethal Autonomous Weapons and Virtue Theory Essay


Introduction

Lethal Autonomous Weapons, also known as LAWs, are highly controversial. Until recent years they were mostly an idea from science fiction. The first true LAWs, such as the US Phalanx CIWS, were introduced in the 1970s. These systems were created to defend military ships from projectiles and aircraft autonomously, with the operator only setting the parameters for the targeting system. Similar systems were later adopted for tanks and missile defense. These technologies were not used to fire directly at people, but only at the projectiles fired at the vehicle. The speed of the computer made targeting algorithms far more effective than any manned solution. Today, however, autonomous weapons are beginning to be used against people, which raises many ethical dilemmas. This paper will show how the virtue theory of ethics finds the use of Lethal Autonomous Weapons ethically unacceptable.


Claim One

The use of LAWs can lead to an increase in civilian casualties. A machine makes no judgment call before firing; it has no consciousness or even a basic level of thinking. An algorithm is therefore responsible for shooting at targets without a clear distinction between combatants and civilians. That distinction is a very complicated problem: a system capable of making it would need to judge situations as well as a human can, and current systems cannot judge for themselves, making such complex discrimination impossible. Robert Sparrow writes the following: “Before it would be ethical to deploy AWS [LAWs], then, the systems will need to be capable of making these sorts of discriminations, all of which involve reasoning at a high level of abstraction” (99). Sparrow then provides a series of examples in which the situation would be too difficult even for a system designed to recognize weapons in people’s hands: “For instance, not every person carrying a weapon is directly engaged in armed conflict (in many parts of the world carrying a weapon is a matter of male honor); with prior approval, foreign warships can pass through the territorial waters of another state; neutral troops or peacekeeping forces are sometimes present in areas in which legitimate targets are located, and children sometimes climb on decommissioned tanks placed in playgrounds” (98).

Claim Two

Widespread use of LAWs could detach people from war and its consequences, making wars easier to start. By the nature of the technology, people are taken out of the loop of its operation, which means that the responsibility for the actions of automated weapons falls on virtually nobody. Andreas Matthias describes this phenomenon as the responsibility gap (1). Because such machines are autonomous and able to modify their own code to adapt to their environment, responsibility is lifted even from the programmer. Denise Garcia writes on the issue of accountability: “If no one can be held accountable for what robots do, it is unethical to deploy them. Human dignity requires that humans should always be in the loop in matters of life and death” (61). This is a reflection of the responsibility gap. She also connects the issue with the possible proliferation of war: “War becomes easier to declare the greater the distance between the attackers and the attacked, as the risk to the attacker is lowered” (3).

Claim Three

The use of LAWs is unethical because the machines themselves are unable to be virtuous. As previously established, machines are incapable of reasoning and thinking at this stage of development. Without thought and emotion, no virtue can be ascribed to them, making their actions unethical. Although war is a horrible event, soldiers have performed many virtuous acts: they have saved civilians even at the risk of their own lives, provided supplies and connected with civilians in combat zones on a personal level, and become synonymous with the virtue of courage. The use of such machines would eliminate courage from war, reducing it to the systematic killing of people on the opposite side of the conflict. Lacking virtue and morality, machines are also unable to disobey their orders, so unethical orders could be followed without opposition. Johnson and Axinn write the following on courage: “In the use of a robot operated by an on-board computer, no soldier exhibits courage. This is an advantage toward saving lives on one side, but hardly an example of honor and courage” (136). Shannon Vallor echoes this statement by pointing to the beliefs about virtue inherent in the military: “Because armed military robots raise the prospect of handing over the most critical kinds of military service to entities that are, at least for the foreseeable future, constitutionally incapable of it; this imperils a core ethical value shaping military culture (possibly the core value, since loyalty, courage, etc. may be viewed as extensions or implications of ethical service)” (173).

Major Ethical Argument

According to these claims, virtue ethics would find the use of LAWs unethical. Virtue ethics is based on the excellence of virtue, practical or moral wisdom, and flourishing through happiness. Aristotle called this happiness eudaimonia and considered reaching it the goal of life. The first claim shows that the number of civilian casualties in war would increase; those killed would never be able to reach this happiness, and therefore the practice would be unethical. The second claim shows that wars would become easier to start and that no one would be directly responsible for the actions of machines. Just as with the first claim, people who die in these wars would be denied flourishing. The third claim plays directly against the concept of virtue, and by extension against the theory of virtue ethics. Aristotle writes this about the nature of virtue: “To sum up: Virtue is about pleasures and pains; the actions that are its source also increase it, or if they are done badly, ruin it; and its activity is about the same actions as those that are its sources” (21). Virtue, on this account, is cultivated through action and experienced through pleasure and pain, and it lies as a mean between two vices. Machines feel neither pleasure nor pain and have no vices, and therefore they cannot have virtues.

Counterargument

An obvious counterargument would be to make machines sophisticated enough to make decisions in accordance with all the international laws of war, and to reason well enough to judge a situation before acting. However, this argument underestimates the difficulty of creating such machines. At the moment, people are still unsure how human brains process thoughts, so creating a machine that reasons as well as a human seems virtually impossible. Sparrow echoes this sentiment: “It is difficult to imagine how any computer could make these sorts of judgments short of the development of a human-level general intelligence – that is, ‘strong’ AI” (99).

Conclusion

Virtue ethics finds the use of LAWs unethical due to the possible increase in wars, the rise in civilian casualties, and the lack of virtue in machines. At this time, only two countries actively use LAWs directly against people: South Korea and Israel. Hopefully, this technology will prove too controversial for other countries to adopt.


Works Cited

Aristotle. Nicomachean Ethics. Translated by Terence Irwin, Hackett Publishing, 1999.

Garcia, Denise. “Killer Robots: Why The US Should Lead The Ban.” Global Policy, vol. 6, no. 1, 2015, pp. 57-63.

Johnson, Aaron M., and Sidney Axinn. “The Morality of Autonomous Robots.” Journal of Military Ethics, vol. 12, no. 2, 2013, pp. 129-141.

Matthias, Andreas. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology, vol. 6, no. 3, 2004, pp. 175-183.

Sparrow, Robert. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics & International Affairs, vol. 30, no. 1, 2016, pp. 93-116.

Vallor, Shannon. “Armed Robots and Military Virtue.” Law, Governance and Technology Series, vol. 14, no. 1, 2014, pp. 169-185.
