FLI Podcast: Why Ban Lethal Autonomous Weapons?

Published
April 2, 2019

Why are we so concerned about lethal autonomous weapon systems (LAWS)? Ariel spoke with four experts: one physician, one lawyer, and two human rights specialists, all of whom offered their most powerful arguments for why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The conversation was recorded at the United Nations Convention on Certain Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.

Dr. Emilia Javorsky is a physician, scientist, and Founder of Scientists Against Inhumane Weapons; Bonnie Docherty is Associate Director of Armed Conflict and Civilian Protection at Harvard Law School’s Human Rights Clinic and Senior Researcher at Human Rights Watch; Ray Acheson is Director of the Disarmament Program of the Women's International League for Peace and Freedom; and Rasha Abdul Rahim is Deputy Director of Amnesty Tech at Amnesty International.

Topics discussed in this episode include:

  • The role of the medical community in banning other WMDs
  • The importance of banning LAWS before they’re developed
  • Potential human bias in LAWS
  • Potential police use of LAWS against civilians
  • International humanitarian law and the law of war
  • Meaningful human control

Once you’ve listened to the podcast, we want to know what you think: What is the most convincing reason in favor of a ban on lethal autonomous weapons? We’ve listed quite a few arguments in favor of a ban, in no particular order, for you to consider:

  • If the AI community can’t even agree that algorithms should not be allowed to make the decision to take a human life, then how can we find consensus on any of the other sticky ethical issues that AI raises?
  • If development of lethal AI weapons continues, then we will soon find ourselves in the midst of an AI arms race, which will lead to cheaper, deadlier, and more ubiquitous weapons. It’s much harder to ensure safety and legal standards in the middle of an arms race.
  • These weapons will be mass-produced and hacked, and they will find their way onto the black market, where anyone will be able to access them.
  • These weapons will be easier to develop, access, and use, which could lead to a rise in destabilizing assassinations, ethnic cleansing, and greater global insecurity.
  • Taking humans further out of the loop will lower the barrier to going to war.
  • Greater autonomy increases the likelihood that the weapons will be hacked, making it more difficult for military commanders to ensure control over their weapons.
  • Because of their low cost, these weapons will be easy to mass-produce and stockpile, making AI weapons the newest form of weapon of mass destruction.
  • Algorithms can target specific groups based on sensor data such as perceived age, gender, ethnicity, facial features, dress code, or even place of residence or worship.
  • Algorithms lack human morality and empathy, and therefore they cannot make humane context-based kill/don’t kill decisions.
  • By taking the human out of the loop, we fundamentally dehumanize warfare and obscure who is ultimately responsible and accountable for lethal force.
  • Many argue that these weapons violate the Geneva Conventions, the Martens Clause, the International Covenant on Civil and Political Rights, and other instruments. Given the disagreement about whether lethal autonomous weapons are covered by these pre-existing laws, a new ban would help clarify which uses of AI are acceptable with respect to lethal decisions, especially for the military, and which are not.
  • It’s unclear who, if anyone, could be held accountable and/or responsible if a lethal autonomous weapon causes unnecessary and/or unexpected harm.
  • Significant technical challenges exist that most researchers anticipate will take quite a while to solve, including: programming reasoning and judgment with respect to international humanitarian law; distinguishing between civilians and combatants; understanding and responding to complex and unanticipated situations on the battlefield; verifying and validating lethal autonomous weapons; and understanding external political context in chaotic battlefield situations.
  • Once the weapons are deployed, it may be difficult to recall or communicate with them if operators discover that a mistake has been made.
  • By their very nature, we can expect that lethal autonomous weapons will behave unpredictably, at least in some circumstances.
  • They will likely be more error-prone than conventional weapons.
  • They will likely exacerbate current human biases, putting innocent civilians at greater risk of being accidentally targeted.
  • Current psychological research suggests that keeping a “human in the loop” may not be as effective as many hope, given human tendencies to be over-reliant on machines, especially in emergency situations.
  • In addition to military uses, lethal autonomous weapons will likely be used for policing and border control, again putting innocent civilians at greater risk of being targeted.

So which of these arguments resonates most with you? Or do you have other reasons for concern about lethal autonomous weapons? We want to know what you think! Please leave a response in the comments section.

For more information, visit autonomousweapons.org.
