
FLI’s Position on Lethal Autonomous Weapons

Published:
June 5, 2020
Author:
Taylor Jones


Described as the third revolution in warfare after gunpowder and nuclear weapons, lethal autonomous weapons (lethal AWS) are weapon systems that can identify, select and engage a target without meaningful human control. Many semi-autonomous weapons in use today rely on autonomy for certain parts of their system but have a communication link to a human who approves or makes decisions. In contrast, a fully-autonomous system could be deployed without any established communication network and would independently respond to a changing environment and decide how to achieve its pre-programmed goals. It would have an increased range and would not be subject to communication jamming. Autonomy is present in many military applications that do not raise concerns, such as take-off, landing and refuelling of aircraft, ground collision avoidance systems, bomb disposal and missile defence systems. The ethical, political and legal debate underway centres on autonomy in the use of force and in the decision to take a human life.

Lethal AWS may create a paradigm shift in how we wage war. This revolution will be one of software: with advances in technologies such as facial recognition and computer vision, autonomous navigation in congested environments, and cooperative autonomy or swarming, these systems can be used in a variety of assets, from tanks and ships to small commercial drones. They would allow highly lethal systems to be deployed on the battlefield that cannot be controlled or recalled once launched. Unlike any weapon seen before, they could also allow for the selective targeting of a particular group based on parameters like age, gender, ethnicity or political leaning (if such information were available). Because lethal AWS would greatly decrease personnel costs and could be obtained cheaply and easily (as in the case of small drones), small groups of people could potentially inflict disproportionate harm, making lethal AWS a new class of weapon of mass destruction.

Some believe that lethal AWS could make war more humane and reduce civilian casualties by being more precise and taking more soldiers off the battlefield. Others worry about accidental escalation and global instability, and the risk of these weapons falling into the hands of non-state actors. Over 4,500 AI and robotics researchers, 250 organizations, 30 nations and the Secretary General of the UN have called for a legally-binding treaty banning lethal AWS. They have been met with resistance from countries developing lethal AWS, which fear the loss of strategic superiority.

There is an important conversation underway about how to shape the development of this technology and where to draw the line in the use of lethal autonomy. This will set a precedent for future discussions around the governance of AI.

An Early Test for AI Arms Race Avoidance & Value Alignment

The goal of AI governance is to ensure that increasingly powerful systems are safe and aligned with human values.

When thinking about the long-term future, it is important not only to craft the vision for how existential risk can be mitigated, but also to define the appropriate policy precedents that create a path dependency towards the desired long-term end-state. A pressing issue in shaping a positive long-term future is ensuring that increasingly powerful artificial intelligence is safe and aligned with human values.

Legal & Ethical Precedent

The development of safe and aligned artificial intelligence in the long term requires near-term investments in capital, human resources, and policy precedents. While investment in AI safety has increased, especially for “weak” AI, it remains a grossly underfunded area, particularly in contrast to the amount of human and financial capital directed towards increasing the power of AI systems. From a policy perspective, the safety risks of artificial intelligence have only recently begun to be appreciated and incorporated into mainstream thinking.

In recent years, there has been concrete progress in the development of ethical principles on AI. Starting with the Asilomar AI Principles, subsequent multi-stakeholder efforts, including the OECD Principles on AI and the IEEE’s Ethically Aligned Design, have built on that foundation with varying degrees of emphasis on AI safety. A recent paper by the Berkman Klein Center surveyed the landscape of multistakeholder efforts on AI principle development and detailed remarkable convergence around eight key themes: safety and security, accountability, human control, responsibility, privacy, transparency and explainability, fairness, and promotion of human values. The development of consensus principles that AI should be ethical is a welcome first step. However, much like technical research, principles alone lack the robustness and capacity to adequately govern artificial intelligence. Ensuring a future where AI is safe and beneficial to humanity will require us to move beyond soft law and develop governance mechanisms that ensure the correct policy precedents are set in the near term to steer AI in the direction of being beneficial to humanity.

Lethal autonomous weapons systems, which can be highly scalable, represent a new category of weapons of mass destruction.

It is in this context that the governance of lethal autonomous weapons systems (lethal AWS) emerges as a high-priority policy issue related to AI, both in terms of the importance of human-centric design of AI and society’s capacity to mitigate arms race dynamics for AI. Beyond the implications for AI governance, lethal autonomous weapons represent a nascent catastrophic risk. Highly scalable embodiments of lethal autonomous weapons (e.g. small and inexpensive autonomous drones) represent a new category of weapons of mass destruction.

The Importance of Human Control

Lethal autonomous weapons systems refer to weapons or weapons systems that identify, select, and engage targets without meaningful human control. How to define “meaningful” control remains a topic of discussion, with other characterizations including “human-machine interaction,” “sufficient control,” and “human in the loop,” but central to all of these is the belief that human decision making must be encompassed in any decision to take a human life.

We believe there are many acceptable and beneficial uses of AI in the military, such as its use in missile defense systems, supporting and enhancing human decision making, and increasing the capacity for accuracy and discrimination of legitimate targets, which has the potential to decrease non-combatant casualties. However, these applications would not meet the criteria of being a lethal autonomous weapon system, as they either have a non-human target (e.g. an incoming missile) or rely on robust human-machine interaction (i.e. they retain human control). Furthermore, to our knowledge, all of the systems currently in use in drone warfare require a human in the loop, and hence are also exempt.

If the global community establishes a norm that it is appropriate to remove humans from the decision to take a human life and cede that moral authority to an algorithm-enabled weapon, it becomes difficult to envision how more subtle issues surrounding human responsibility for algorithmic decision making, such as the use of AI in the judicial system or in medical care, can be resolved. Condoning the removal of human responsibility, accountability and moral agency from the decision to take a human life arguably sets a dire precedent for the cause of human-centric design of more powerful AI systems in the future.

Lethal autonomous weapons systems can identify, select, and engage targets without meaningful human control.

Furthermore, there are substantial societal, technical and ethical risks to lethal autonomous weapons that extend beyond the moral precedent of removing human control over the decision to enact lethal harm. Firstly, such weapons systems run the risk of unreliability, as it is difficult to envision any training set that can approximate the dynamic and unclear context of war. The issue of unreliability is compounded in a future where lethal autonomous weapons systems interact with those of an opposing force, since a weapon system may be intentionally designed to behave unpredictably in order to defeat the adversary’s AI-enabled counter-measures. Fully autonomous systems also pose unique risks of unintentional escalation, as they will make decisions faster than human speed, reducing the time available for intervention in an escalatory dynamic. Perhaps most concerning is the fact that such weapons systems would not require sophisticated or expensive supply chains accessible only to leading military powers. Small lethal autonomous weapons could be produced cheaply and at scale, and, as Stuart Russell and others have argued, they would represent a new class of weapons of mass destruction. Such a class of lethal autonomous weapon would be deeply destabilizing due to its risk of proliferation and incentives for competition, as these systems could be produced by and for state and non-state actors alike, from law enforcement agencies to terrorist groups.

Establishing International Governance

In terms of governance, the International Committee of the Red Cross (ICRC) has noted that there are already limits to autonomy in the use of force under existing International Humanitarian Law (IHL), or the “Law of War,” but notable gaps remain in defining the level of human control required for an operator to exercise the context-specific judgments that IHL requires. Hence, new law is needed, and the prospective governance of lethal autonomous weapons may be an early test of the global community’s ability to coordinate on shared commitments for the development of trustworthy, responsible, and beneficial AI. Such an achievement would go a long way towards avoiding dangerous arms race dynamics between near-peer adversarial nations in AI-related technology. If nation-states cannot develop a global governance system that de-escalates and avoids such an arms race in lethal autonomous weapons, then it is nearly impossible to see how a reckless race towards AGI, with a winner-take-all dynamic, is avoided.

To be clear: it is FLI’s opinion and that of many others in the AI community, including 247 organizations, 4,500 researchers, 30 nations, and the Secretary General of the UN, that the ideal outcome for humanity is a legally-binding treaty banning lethal AWS. This ban would be the output of multilateral negotiations by nation-states and would be inclusive of a critical mass of countries leading the development of artificial intelligence. A legally-binding ban treaty would both set a powerful norm to deescalate the arms race and set a clear precedent that humans must retain meaningful control over the decision to enact lethal harm. Such a treaty would ideally include a clear enforcement mechanism, but other historical examples, such as the Biological Weapons Convention, have been net-positive without such mechanisms.

However, FLI also recognizes that such a treaty may not be adopted internationally, especially by the countries leading development of these weapons systems, the number of which is increasing. The United Nations, through the Convention on Certain Conventional Weapons (CCW), has been discussing the issue of lethal autonomous weapons since 2014, and those negotiations have made little progress beyond the “guiding principles” stage of governance, likely due in part to the requirement of unanimity for developing new law. Hence, while we recognize the benefits of states meeting regularly to discuss the issue, the best outcome of this forum may be incremental progress, as it is unlikely to yield a new protocol that ensures meaningful human control in weapons systems. Therefore, urgent action to stigmatize lethal autonomous weapons is needed, and we must also consider supplemental paths to ensure meaningful human control over these weapons systems.

In the absence of governance on lethal autonomous weapons, it is likely that there will be an unchecked arms race between adversarial nation-states.

In the prospective absence of a legally binding treaty, there is a dangerous alternative future where few, or no, norms or agreements on the governance of lethal autonomous weapons are developed in time to prevent or mitigate their use in battle. In this scenario, it is likely that there will be an unchecked arms race between adversarial nation-states and the setting of a disastrous precedent against the human-centric design of AI.

Thankfully, there is a far-ranging, undeveloped continuum of policy options between the two poles of no effective governance of lethal AWS at all and an outright ban. These intermediary options could be used to ensure human control in the near term, while helping to generate the political will for an eventual treaty. Such intermediaries could include new national law, or agreements reached in international fora outside of the CCW, on the level of human control required for weapons systems; international agreement on the limits to autonomy under the law of war, similar to the Montreux process; weapons reviews; or political declarations. Since policy actions along this continuum might be necessary, we would be remiss to ignore them entirely, as they may play a key role in supplementing efforts towards an eventual ban. Furthermore, we see an urgent need to expand the fora for discussion of the risks and legality of lethal autonomous weapons at both the national and international level to include not only militaries but also AI researchers, the private sector, national security experts, and advocacy groups within civil society, to name a few.

There is an urgent need to develop policies that provide meaningful governance mechanisms for lethal autonomous weapons before it is too late. Once lethal AWS are integrated into military strategy or, worse, mass-produced and proliferated, the opportunity for preventative governance of their worst risks will likely have passed.

Therefore, FLI is open to working with all stakeholders in efforts to develop norms and governance frameworks that ensure meaningful human control and minimize the worst risks associated with lethal AWS. We do so while still maintaining the position that the most beneficial outcome is an outright, legally enforceable ban on lethal AWS.

This content was first published at futureoflife.blackfin.biz on June 5, 2020.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


Related content


If you enjoyed this content, you might also be interested in:

An introduction to the issue of Lethal Autonomous Weapons

Some of the most advanced national military programs are beginning to implement artificial intelligence (AI) into their weapons, essentially making them 'smart'. This means these weapons will soon be making critical decisions by themselves - perhaps even deciding who lives and who dies.
November 30, 2021

10 Reasons Why Autonomous Weapons Must be Stopped

Lethal autonomous weapons pose a number of severe risks. These risks significantly outweigh any benefits they may provide, even for the world's most advanced military programs.
November 27, 2021

Real-Life Technologies that Prove Autonomous Weapons are Already Here

For years, we have seen the signs that lethal autonomous weapons were coming. Unfortunately, these weapons are no longer just 'in development' - they are starting to be used in real military applications. Slaughterbots are officially here.
November 22, 2021

Why support a ban on Autonomous weapons?

Why support a ban on Autonomous weapons? Artificial Intelligence (AI) will soon become the most powerful technology ever created. It […]
October 26, 2021
