Mitigating the Risks of AI Integration in Nuclear Launch
FLI seeks to reduce the risk of nuclear war by raising awareness of just how catastrophic such a war would be, chiefly because of nuclear winter, and by supporting specific measures that take us back from the brink of nuclear destruction. We also educate the public about the inspiring individuals who prevented nuclear war in the past, and celebrate the scientists who reduced nuclear risk by discovering nuclear winter. Our current policy work is focused on ensuring that nuclear stability is not undermined by efforts to incorporate AI systems into nuclear command, control, and communications (NC3).
AI in nuclear weapons launch
The Stockholm International Peace Research Institute (SIPRI) has outlined three layers of risk around integrating AI systems into NC3. Firstly, AI systems have inherent limitations, often proving unpredictable, unreliable, and highly vulnerable to cyberattacks and spoofing. Secondly, when incorporated into the military domain, AI-powered technologies accelerate the speed of warfare, leaving states less time to signal their own capabilities and intentions, or to understand those of their opponents. Thirdly, these risks become even more profound in highly networked NC3 systems, where reliance on AI could undermine states' confidence in their retaliatory strike capabilities, or be exploited to weaken nuclear cybersecurity. All of these risks are magnified by the lack of historical data on nuclear exchanges with which to train algorithms, and by a geopolitical context of arms races and nuclear tensions that prioritises speed over safety.
Some applications of AI in nuclear systems can, on balance, be stabilising. Nuclear communications, for example, might benefit from the integration of AI systems. According to analysis by the Nuclear Threat Initiative, however, the vast majority of AI applications in NC3 have an uncertain or net destabilising effect on nuclear stability.
The FLI policy team advocates for the responsible integration of AI systems in line with the final report of the U.S. National Security Commission on AI. Our priority is to ensure that nuclear powers implement the Commission’s recommendation that ‘only human beings can authorize employment of nuclear weapons' (page 10).
Our broader approach to nuclear risk
FLI supports measures that reduce the risk of global nuclear escalation and advocates for the solutions laid out by the Union of Concerned Scientists. These include committing all nine nuclear weapon states to a “No First Use” policy, under which each pledges never to be the first state to use nuclear weapons.
We believe in taking land-based nuclear weapons off hair-trigger alert, which would greatly reduce the risk of an accidental launch triggered by a malfunctioning warning system. Likewise, we support the extension of the New Strategic Arms Reduction Treaty (New START) between the US and Russia until 2026, among other arms reduction measures.
FLI further backs ending ‘sole authority’ over nuclear weapons use, to avoid any future scenario in which the fate of humanity rests in the hands of a single individual. In the past, we have survived at least two such scenarios largely due to luck, thanks to the judgement of individuals such as Vasili Arkhipov and Stanislav Petrov.