Artificial Intelligence

Artificial intelligence is racing forward. Companies are increasingly creating general-purpose AI systems that can perform many different tasks. Large language models (LLMs) can compose poetry, create dinner recipes, and write computer code. Some of these models already pose major risks, such as the erosion of democratic processes, rampant bias and misinformation, and an arms race in autonomous weapons. But there is worse to come.
AI systems will only get more capable. Corporations are actively pursuing ‘artificial general intelligence’ (AGI): systems that would perform as well as or better than humans across a wide range of tasks. These companies promise that AGI will bring unprecedented benefits, from curing cancer to ending global poverty. On the flip side, more than half of surveyed AI experts believe there is at least a one in ten chance this technology will cause our extinction.
This belief has nothing to do with the evil robots or sentient machines seen in science fiction. In the short term, advanced AI can enable those seeking to do harm – bioterrorists, for instance – by executing complex technical tasks on their behalf, rapidly and without conscience.
In the longer term, we should not fixate on any one particular method of harm, because the risk comes from greater intelligence itself. Consider how humans overpower less intelligent animals without relying on any particular weapon, or how a chess program defeats human players without relying on any specific move.
Militaries could lose control of a high-performing system designed to do harm, with devastating impact. An advanced AI system tasked with maximising company profits could employ drastic, unpredictable methods. Even an AI programmed to pursue an altruistic goal could adopt destructive methods to achieve it. We currently have no good way of knowing how AI systems will act, because no one, not even their creators, fully understands how they work.
AI safety has now become a mainstream concern. Experts and the wider public alike are alarmed by emerging risks and recognise the pressing need to manage them. But concern alone will not be enough. We need policies to help ensure that AI development improves lives everywhere, rather than merely boosting corporate profits. And we need proper governance, including robust regulation and capable institutions that can steer this transformative technology away from extreme risks and towards the benefit of humanity.
Featured content on Artificial Intelligence
Posts

Disrupting the Deepfake Pipeline in Europe

Realising Aspirational Futures – New FLI Grants Opportunities

Gradual AI Disempowerment

Exploration of secure hardware solutions for safe AI deployment

Protect the EU AI Act

Miles Apart: Comparing key AI Act proposals

Can we rely on information sharing?

Written Statement of Dr. Max Tegmark to the AI Insight Forum

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

Characterizing AI Policy using Natural Language Processing

Superintelligence survey

A Principled AI Discussion in Asilomar

Introductory Resources on AI Safety Research

AI FAQ
Resources

Catastrophic AI Scenarios

Introductory Resources on AI Risks

Global AI Policy

AI Value Alignment Research Landscape
Policy papers

Competition in Generative AI: Future of Life Institute’s Feedback to the European Commission’s Consultation

European Commission Manifesto

Chemical & Biological Weapons and Artificial Intelligence: Problem Analysis and US Policy Recommendations

FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management

FLI Response to NIST: Request for Information on NIST’s Assignments under the AI Executive Order

FLI Response to Bureau of Industry and Security (BIS): Request for Comments on Implementation of Additional Export Controls

Response to CISA Request for Information on Secure by Design AI Software

Artificial Intelligence and Nuclear Weapons: Problem Analysis and US Policy Recommendations

FLI Governance Scorecard and Safety Standards Policy (SSP)

Cybersecurity and AI: Problem Analysis and US Policy Recommendations

FLI recommendations for the UK Global AI Safety Summit
Videos
Regulate AI Now
The AI Pause. What’s Next?
How to get empowered, not overpowered, by AI
Myths and Facts About Superintelligent AI
Open letters
Open letter calling on world leaders to show long-view leadership on existential threats
Pause Giant AI Experiments: An Open Letter
Foresight in AI Regulation Open Letter
Autonomous Weapons Open Letter: Global Health Community
Lethal Autonomous Weapons Pledge
Autonomous Weapons Open Letter: AI & Robotics Researchers
Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter