The Challenges of Using Artificial Intelligence in Lethal Autonomous Weapons Systems
by Amine Bennis
Here is the nightmare scenario for policymakers: a military device such as a missile-armed drone, operating autonomously without any human control thanks to incorporated artificial intelligence (a lethal autonomous weapons system, or “LAWS”), makes a mistake and kills civilians instead of striking military targets. Who is responsible? Do any democratic controls even exist to govern such an incident?
A group of governmental and non-governmental actors established in 2016 under the auspices of the United Nations, known as the Group of Governmental Experts on Emerging Technologies in the Area of LAWS, is currently meeting in Geneva to discuss this exact issue, particularly the level of human control over LAWS and the extent to which artificial intelligence (AI) may be applied to military devices. The actors’ objective? To adopt a new protocol to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW).
LAWS might offer a particularly tempting tool for democracies confronting new adversaries, such as terrorist groups, that defy traditional military responses. Deploying massive armies is a lengthy process that often falls short against guerrilla confrontations and intelligence warfare: massing soldiers and preparing extensive logistics takes time yet still fails to guarantee an effective response. Democracies are also increasingly reluctant to send their citizens to fight in remote conflicts with unpredictable results.
Democracies, however, must resist the temptation to use LAWS. We must not lower or remove human control, supervision, oversight, and judgement over these devices, nor rely on the incorporation of AI into them, because delegating warfare to robots raises considerable humanitarian, technological, legal, and geopolitical challenges.
In 2017, researcher Edouard Pflimlin highlighted how such systems inherently violate the principle of human dignity affirmed in conventions to which most countries are party, such as the 1949 Geneva Conventions and their protocols. These conventions also require that military responses in times of conflict avoid civilian deaths and damage to human beings disproportionate to the concrete and direct military advantages anticipated. Such key elements of humanitarian law should not be undermined by removing human control over military devices; LAWS could infringe upon them by killing or harming humans with no person involved in the decision-making process.
Certain media outlets have reported that the demilitarized zone between North and South Korea includes a sentinel robot, the SGR-A1, which can switch to an automated mode and, using its infrared sensors, fire at any person who holds a weapon and does not raise it in the air. How can we ensure the SGR-A1 will not mistakenly kill an armed refugee fleeing North Korea? The technological failures of military drones (including the Sentinel, Avenger, Wasp, Raven, Puma, Shadow, Scan Eagle, Global Hawk, Hunter, Grey Eagle, Predator, and Reaper) are a matter of public record. In 2011, the loss of an RQ-170 Sentinel drone during a secret mission over Iran was an embarrassing episode for U.S. officials. Technology can, and has, misfired.
The key issue in the Geneva discussions is the level of human involvement over LAWS on which nations should agree. These meetings should ideally produce a legally binding protocol that bans the introduction of AI into all lethal military devices, ensuring that these weapons remain under human control at all times.
From a legal perspective, however, any attempt to set a threshold of human control over autonomous or even semi-autonomous technologies raises significant concerns about the allocation of liability in the case of unintended casualties. In particular, accountability could oscillate between the human chain of command in control of these devices and the devices themselves, depending on the extent to which such killer robots should or could be considered autonomous in their decision-making abilities. These concerns highlight the inadequacy of introducing AI into military-grade weapons.
Open letters from AI researchers and various commentators have publicly underscored the risks of the proliferation of LAWS, namely the potential for terrorist groups to hack and appropriate AI-equipped devices. In “Slaughterbots,” a famous video shown during the 2017 Geneva discussions, AI expert Stuart Russell of the University of California, Berkeley dramatized the risks of militarizing existing technologies with AI by depicting the potential consequences of terrorist attacks committed using LAWS, such as the targeted killing of numerous innocent civilians. The risk of exposing human life to the combination of AI and military devices multiplies at a time when militias, terrorist groups, and non-state actors more generally can obtain drones or military robots.
The ongoing discussions in Geneva present a unique opportunity for participants to raise awareness of the risks of LAWS and to convince all relevant actors that a ban on AI in all lethal military devices is required under the CCW. Human control over military devices must always remain effective. Addressing this issue in a multilateral forum is the appropriate way to avoid a new global arms race and the proliferation of these deadly technologies to non-state actors.
Image: RAF Reaper MQ-9 Remotely Piloted Air System
Courtesy of Corporal Steve Follows RAF/MOD / Wikimedia Commons
Amine Bennis is an international legal counsel with a passion for international relations and geopolitics. He works at the European Bank for Reconstruction and Development as a Principal Counsel*. With 10 years of experience in a multilateral development bank and in law firms, advising on the structuring, negotiation, and implementation of debt and equity investments, he has acquired strong diplomatic skills and a proven ability to negotiate and reach agreements in English, French, and Arabic. Amine is a candidate in the mid-career Master of Arts program at The Fletcher School of Law and Diplomacy, specializing in international relations.
* The information and views set out in this article are those of its author only and do not necessarily reflect the view of the European Bank for Reconstruction and Development.