
Policy work

We aim to improve the governance of AI in civilian applications, in autonomous weapons, and in nuclear launch.

Introduction

Improving the governance of transformative technologies

The policy team at FLI works to reduce extreme, large-scale risks from transformative technologies by improving national and international governance of Artificial Intelligence (AI).

FLI has spearheaded numerous efforts to this end. Most notably, in 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, the UN Secretary-General consulted FLI as the civil society ‘co-champion’ for AI recommendations in the Roadmap for Digital Cooperation.

In the civilian domain, we advise the European Union on how to strengthen and future-proof its upcoming AI Act, and U.S. policymakers on how best to govern advanced AI systems. In the military domain, we advocate at the United Nations for a treaty on autonomous weapons and inform policymakers about the risks of incorporating AI systems into nuclear launch.

Our work

Policy projects

Ban Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

The Elders Letter on Existential Threats

The Elders, the Future of Life Institute and a diverse range of preeminent public figures are calling on world leaders to urgently address the ongoing harms and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.

Realising Aspirational Futures – New FLI Grant Opportunities

We are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.

The Windfall Trust

The Windfall Trust is an ambitious initiative aimed at researching and establishing a robust international institution that could provide universal basic assets in the event of a windfall generated by advances in AI.

Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communications.

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose AI systems, and extending the definition of prohibited manipulation to cover any manipulative technique as well as manipulation that causes societal harm.

Imagine A World Podcast

Can you imagine a world in 2045 where we manage to avoid the climate crisis, major wars, and the potential harms of artificial intelligence? Our new podcast series explores ways we could build a more positive future, and offers thought-provoking ideas for how we might get there.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Artificial Escalation

Our fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.

Developing possible AI rules for the US

Our US policy team advises policymakers in Congress and state legislatures on how to ensure that AI systems are safe and beneficial.

Global AI governance at the UN

Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).

Worldbuilding Competition

The Future of Life Institute accepted entries from teams across the globe, competing for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.

UK AI Safety Summit

On 1-2 November 2023, the United Kingdom convened the first-ever global government summit focused on AI safety. In the run-up to the summit, FLI produced and published a document outlining key recommendations.

Future of Life Award

Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.

Future of Life Institute Podcast

A podcast dedicated to hosting conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction. 140+ episodes since 2015, 4.8/5 stars on Apple Podcasts.

Our content

Latest policy papers

Competition in Generative AI: Future of Life Institute’s Feedback to the European Commission’s Consultation

March 2024

European Commission Manifesto

March 2024

Chemical & Biological Weapons and Artificial Intelligence: Problem Analysis and US Policy Recommendations

February 2024

FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management

February 2024

Geographical focus

Where you can find us

We are a hybrid organisation. Most of our policy work takes place in the US (D.C. and California), the EU (Brussels) and at the UN (New York and Geneva).

United States

In the US, FLI works to increase federal spending on AI safety research and to strengthen the NIST AI Risk Management Framework.

European Union

In Europe, our focus is on strengthening the EU AI Act and encouraging European states to support a treaty on autonomous weapons.

United Nations

At the UN, FLI works to promote the adoption of a legally binding instrument on autonomous weapons.

Achievements

Some of the things we have achieved

Developed the Asilomar AI Principles

In 2017, FLI coordinated the development of the Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
View the principles

AI recommendations in the UN Roadmap for Digital Cooperation

Our recommendations (3C) on the global governance of AI technologies were adopted in the UN Secretary-General's Roadmap for Digital Cooperation.
View the roadmap

Max Tegmark's testimony to the European Parliament

Our founder and board member Max Tegmark presented testimony on the regulation of general-purpose AI systems to the European Parliament.
Watch the testimony
Our content

Featured posts

Here is a selection of posts relating to our policy work:

Disrupting the Deepfake Pipeline in Europe

Leveraging corporate criminal liability under the Violence Against Women Directive to safeguard against pornographic deepfake exploitation.
February 22, 2024

Exploration of secure hardware solutions for safe AI deployment

This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.
November 30, 2023

Protect the EU AI Act

A last-ditch assault on the EU AI Act threatens to jeopardise one of the legislation's most important functions: preventing our most powerful AI models from causing widespread harm to society.
November 22, 2023

Miles Apart: Comparing key AI Act proposals

Our analysis shows that the recent non-paper drafted by Italy, France, and Germany fails to provide any provisions on foundation models or general-purpose AI systems, and offers much less oversight and enforcement than the existing alternatives.
November 21, 2023

Can we rely on information sharing?

We examined the Terms of Use of major general-purpose AI system developers and found that they fail to provide assurances about the quality, reliability, and accuracy of their products or services.
October 26, 2023

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats

This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies to address the current and future harms of AI.
October 25, 2023

Written Statement of Dr. Max Tegmark to the AI Insight Forum

The Future of Life Institute President addresses the AI Insight Forum on AI innovation and provides five US policy recommendations.
October 24, 2023

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week will mark six months since the open letter calling for a six-month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
September 21, 2023

Contact us

Let's put you in touch with the right person.

We do our best to respond to all incoming queries within three business days. Our team is spread across the globe, so please be considerate and remember that the person you are contacting may not be in your time zone.

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.