
The Ethical Cost of Artificial Intelligence in Autonomous Weapon Systems

By Joshua K. Smith
Midwestern Baptist Theological Seminary

September 26, 2021



Fueling the War Machine



Autonomous weapon systems have been around for a while now, though what counts as autonomous is open to interpretation. An autonomous weapon system is typically understood to comprise three capabilities: it must be able to search for, identify, and engage a target. From precision-guided munitions (PGMs) to unmanned aerial vehicles (UAVs), systems that extend human decision-making and target engagement have been deployed since the early 1990s. Where the battlefield is changing drastically is in the integration of Artificial Intelligence (AI). Neither autonomous nor AI systems are fully unsupervised, nor do the military branches that employ them intend them to be. There is a healthy fear of deploying fully unsupervised AI-driven weapon systems on the battlefield.[1]

To better understand the ethical cost of AI in autonomous weapon systems, we first need to clarify the different iterations of these systems and briefly survey current systems to see the trajectory of future weapons. There are several degrees of autonomy (a minimal code sketch after this list makes the distinctions concrete):

  • Semi-autonomous Operation (Human in the loop): the system performs a task and awaits a human user’s approval before taking action.
  • Supervised Autonomous Operation (Human on the loop): the system can sense, make a decision, and take action. The human can intervene if desired.
  • Fully Autonomous Operation (Human out of the loop): the system can sense, make a decision, and take action. The human user will not be able to intervene in time.
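
For readers who want the distinctions concrete, here is a minimal illustrative sketch in Python. The mode names and the `human_approves`/`human_vetoes` callbacks are hypothetical stand-ins for whatever interface a real system exposes to its operator, not the design of any actual weapon system.

```python
from enum import Enum

class AutonomyMode(Enum):
    SEMI = "human in the loop"        # human approves each engagement
    SUPERVISED = "human on the loop"  # human may veto before the action completes
    FULL = "human out of the loop"    # no timely human intervention is possible

def engagement_proceeds(mode, human_approves, human_vetoes):
    """Illustrates where the human sits in each oversight mode."""
    if mode is AutonomyMode.SEMI:
        # Nothing happens without explicit human approval.
        return human_approves()
    if mode is AutonomyMode.SUPERVISED:
        # The system proceeds by default; the human has a window to veto.
        return not human_vetoes()
    # FULL: once launched, the decision is entirely the machine's.
    return True

# In supervised mode, an alert operator can still stop the strike:
print(engagement_proceeds(AutonomyMode.SUPERVISED,
                          human_approves=lambda: False,
                          human_vetoes=lambda: True))  # False
```

The moral weight of each mode comes down to a single question: who can say no, and is there still time to say it?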

Aegis

One system that captures the advancements in weapons technology is Lockheed Martin’s Aegis, a defensive command system. Aegis is a complex blend of the three modes of autonomous operation mentioned above. The system is driven by a computer called “Command and Decision,” which regulates the activity of the weapons and radar. Aegis can run either human in the loop or human out of the loop, depending on the operator’s configuration. However, even after a missile is fired in the human-out-of-the-loop mode, a human user can still abort the strike and reassert control.

HARM & Harpy

Other examples include the Israeli Harpy system and the High-speed Anti-Radiation Missile (HARM). Both of these systems, like the Aegis, are primarily defensive and target enemy radar. The HARM, once launched, has the ability to search for, identify, and engage targets while keeping a human user in the loop via radio connection, so it remains semi-autonomous. The Harpy, on the other hand, is a fully autonomous loitering munition: once the user gives the command to destroy enemy radar within a 500 km range, the system will select and engage targets without human interaction. It is used by Israel, China, Chile, South Korea, and Turkey. These systems are the present and most likely the future of AI-driven weapon systems. No one is out to build a T-800 or similar killer robot, because the risks are too high. Even with defensive systems like the Aegis, Harpy, and HARM, there are fatal accidents and malfunctions. Although this truth may alleviate some anxieties about future weapon systems, there are other risks and ethical costs to their development that are less obvious to the public.

A Christian Assessment and Critique

“We affirm that the use of AI in warfare should be governed by the love of neighbor and the principles of just war…When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.” This quote is from Article 10: War of the Ethics & Religious Liberty Commission’s (ERLC) statement, “Artificial Intelligence: A Statement of Principles.” The ERLC’s statement has provided a great foundation for evangelicals to think about the just use of AI systems in war, but we must think more deeply about the ethical issues surrounding AI and war through a theological lens. This essay introduces the reader to the ethical issues surrounding the use of AI-driven technology in modern combat.

Let’s define AI and clarify its relation to the military-industrial complex, because the lay reader may be unaware that there is, in fact, a connection. Artificial Intelligence, or AI, is often thought of as simple algorithms and mathematical calculations driven by data. In a sense that is true, but behind the veil of any technology is an ideology driven by the economics of a moral actor (for better or worse). AI has been a component of the military’s pursuit of power from the beginning. The Defense Advanced Research Projects Agency (DARPA) has invested millions in AI grants since 1963; its most recent campaign, launched in 2018, invested two billion dollars.[2] Investments by the Pentagon were aimed at developing technology for surveillance, enemy detection, voice-controlled aerial systems, robotics, and autonomous weapon systems.[3] Technology of this caliber is not developed in a vacuum.

In the public eye, national defense and upgraded weapon systems are simply common sense. The rhetoric one hears for why research and funding ought to be budgeted is defense against threats both foreign and domestic. However, this comes at a literal and moral cost. As previously mentioned, it costs billions of dollars to invest in these systems, which drains taxpayers and consumes the immense resources required to prop up AI machines. There is also a moral cost, for violence begets violence (cf. Hab 2:9–11 NASB). Likewise, the militarization of one country, in the name of defense and safety, pressures other countries to militarize. It is the argument a child makes when wanting the latest tech or toy: “I must have X or Y because other children have X or Y.” As countries develop autonomous weapon systems, other countries fear that if they do not develop similar technology, they will be at risk. The hard truth is that much of the advancement and research in AI and autonomous weapon systems is driven by a reciprocal fear of being attacked or by perceived threats. There are, however, many more costs to this technology.

‘Bugsplat’ and Due Care

The U.S. military has used the Collateral Damage Estimation Tool (CDET), also known as ‘bugsplat,’ since the 2003 Iraq War. This tool is used to estimate the number of civilians likely to be killed in a drone strike.[4] Likewise, the CIA used a machine-learning algorithm named SKYNET that collected SIM card data to score future ‘legitimate targets’ through algorithmic risk assessment. The military and the CIA use such algorithms to check the box of due care. The problem with this approach is that it uses a biased system to justify a kill and to make the process of machine-killing more palatable to the public. Drone usage should trouble the Christian public for two reasons. First, ethics cannot be reduced to computation. It is dishonest and morally problematic to suggest that technology, in this case AI weapon systems, is morally virtuous by default, or that because such systems lack emotions they are ethically superior for assessing and making decisions that involve human life. The judgment to take a human life should not be contracted out to machines. The psychological impact of killing is meant to have a deep and troubling effect on the human conscience. This is part of God’s design: moral intuition, emotions, and empathy for others are intended to aid our judgment and to hold us accountable to our conscience.
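
To see why “due care by algorithm” is so thin, consider a deliberately naive sketch of threshold-based risk scoring. The features, weights, and cutoff below are invented for illustration and are not the actual CDET or SKYNET logic; the point is structural. Once moral judgment is encoded this way, the “decision” reduces to arithmetic and a single comparison.

```python
# Hypothetical features and weights, invented for illustration only;
# this is NOT the actual CDET or SKYNET logic.
WEIGHTS = {
    "travel_pattern": 0.40,  # how anomalous the movement history looks
    "call_network": 0.35,    # overlap with already-flagged contacts
    "device_changes": 0.25,  # frequency of SIM or handset swaps
}
THRESHOLD = 0.7  # an arbitrary cutoff standing in for "due care"

def risk_score(features: dict[str, float]) -> float:
    """Weighted sum of feature values, each assumed normalized to [0, 1]."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

def flagged_as_target(features: dict[str, float]) -> bool:
    # The entire "ethical" determination collapses into one comparison;
    # any bias in the features or weights passes through untouched.
    return risk_score(features) >= THRESHOLD

print(flagged_as_target({"travel_pattern": 0.9,
                         "call_network": 0.9,
                         "device_changes": 0.5}))  # True
```

A life-and-death judgment rendered this way is not neutral; it simply hides its assumptions inside the weights and the threshold.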

Second, the use of AI weapon systems and lethal autonomous weapon systems (LAWS) will propagate further conflict, not resolve it. The narrative propagated by the military and scholars like Ronald Arkin is that these systems will replace human soldiers and thus lead to more ethical battlefields. As with Congress’s “Authorization for Use of Military Force” joint resolution of September 2001, the Department of Defense, the Commander in Chief, and others are in a position to interpret and justify the use of lethal force through unmanned aerial vehicles (drones) and AI weapon systems in ways convenient to their agenda. Likewise, with bugsplat algorithms and future AI systems, there is a guise of an unbiased system making war less dangerous and reducing collateral damage. However, the estimated death toll in Pakistan, Afghanistan, Somalia, and Yemen (between 8,858 and 16,901) argues otherwise. As Elke Schwarz points out in her book Death Machines: The Ethics of Violent Technologies,[5] ethical killing is a myth. Drone operators, for example, are removed from the embodied act of killing, yet the number of kill strikes and the potential for moral harm and PTSD have increased immensely.

To consider the ethics of AI and robots, we cannot ignore the reality that most of the funding for this technology comes from the military. This is a concern because it reveals something about our nature, specifically our economic and political addiction to the war machine. One of the biggest threats facing Christian ethics, and one largely being ignored, is our numbness to the physical and spiritual cost of security and warfare. There are obvious advantages to having security forces, drones, and military personnel, but there are also costs and political implications. For the Christian, the question is not just the ethics of war-making but also the ethical cost of war-building.

Every dollar devoted to military spending is one less dollar to care for widows and orphans, to educate, to free the oppressed, and to invest in the future our children will occupy. I mention this because we see very clearly in the discussion of AI and automated weapon systems that the U.S. and other developed countries are trapped in a cycle of fear, specifically fear-induced aggression. For example, China has stated that it wants to be the world leader in AI by 2030, which of course entails being the leader in AI weapon systems. Thus, the U.S. deems it necessary to ensure that whatever AI weapons others have, it also has equally powerful and advanced systems. That, in sum, is the war-building machine ethos.

Conclusion

The ethical cost of AI-driven and autonomous weapon systems is high. Even though the majority of these weapon systems are, at the moment, designed and utilized primarily as defensive shields, Christians should still be wary of the integration of AI into weapon systems. As this article has mentioned, there are budgetary, environmental, psychological, and political costs to developing these systems. Our Lord said, “Blessed are the peacemakers” (Matt 5:9); may we take this ethos seriously in our consumption habits and public policy. Yes, the war machine produces advances in technology from which we all benefit, but has it become an idol, one to which we willingly sacrifice men, women, and children for the sake of comfort and in the name of security? These systems will protect lives, but they will also fuel the war machine. May we resist the myth that AI and robotic weapons will make war cleaner, safer, and more ethical. Our hope is not in AI or robots to dissolve war, but in a Savior who promised long ago that “they shall beat their swords into plowshares and their spears into pruning hooks; one nation shall not raise the sword against another, nor shall they train for war again” (Isa 2:4).


[1] Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton, 2018).

[2] DARPA, “AI Next Campaign,” https://www.darpa.mil/work-with-us/ai-next-campaign (accessed April 28, 2021).

[3] Yarden Katz, Artificial Whiteness: Politics and Ideology in Artificial Intelligence (New York: Columbia University Press, 2020).

[4] John R. Emery, “Probabilities Toward Death: Bugsplat, Algorithmic Assassination, and Ethical Due Care,” Critical Military Studies (2020): 1–19.

[5] Elke Schwarz, Death Machines: The Ethics of Violent Technologies (Manchester: Manchester University Press, 2018), 190.

Joshua K. Smith (PhD, Midwestern Baptist Theological Seminary) is a pastor and theologian researching the ethics of AI and robots from a Christian perspective. Follow his work at joshuaksmith.org and on Twitter at @Joshuak_Smith.