Contents
Research updates
- New paper: “Safely Interruptible Agents.” The paper will be presented at UAI-16, and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI’s press release. The paper has received (often hyperbolic) coverage from a number of press outlets, including Business Insider, Motherboard, Newsweek, Gizmodo, BBC News, eWeek, and Computerworld.
- New at IAFF: All Mathematicians are Trollable: Divergence of Naturalistic Logical Updates; Two Problems with Causal-Counterfactual Utility Indifference
- New at AI Impacts: Metasurvey: Predict the Predictors; Error in Armstrong and Sotala 2012
- Marcus Hutter’s research group has released a new paper based on results from a MIRIx workshop: “Self-Modification of Policy and Utility Function in Rational Agents.” Hutter’s team is presenting several other AI alignment papers at AGI-16 next month: “Death and Suicide in Universal Artificial Intelligence” and “Avoiding Wireheading with Value Reinforcement Learning.”
- “Asymptotic Logical Uncertainty and The Benford Test” has been accepted to AGI-16.
General updates
- MIRI and FHI’s Colloquium Series on Robust and Beneficial AI (talk abstracts and slides now up) has kicked off with opening talks by Stuart Russell, Francesca Rossi, Tom Dietterich, and Alan Fern.
- We visited FHI to discuss new results in logical uncertainty, our new machine-learning-oriented research program, and a range of other topics.
News and links
- Following an increase in US spending on autonomous weapons, The New York Times reports that the Pentagon is turning to Silicon Valley for an edge.
- IARPA director Jason Matheny, a former researcher at FHI, discusses forecasting and risk from emerging technologies (video).
- FHI Research Fellow Owen Cotton-Barratt gives oral evidence to the UK Parliament on the need for robust and transparent AI systems.
- Google reveals a hidden reason for AlphaGo’s exceptional performance against Lee Se-dol: a new integrated circuit design that can speed up machine learning applications by an order of magnitude.
- Elon Musk answers questions about SpaceX, Tesla, OpenAI, and more (video).
- Why worry about advanced AI? Stuart Russell (in Scientific American), George Dvorsky (in Gizmodo), and SETI director Seth Shostak (in Tech Times) explain.
- Olle Häggström’s new book, Here Be Dragons, serves as an unusually thoughtful and thorough introduction to existential risk and future technological development, including a lucid discussion of artificial superintelligence.
- Robin Hanson examines the implications of widespread whole-brain emulation in his new book, The Age of Em: Work, Love, and Life when Robots Rule the Earth.
- Bill Gates highly recommends Nick Bostrom’s Superintelligence. The paperback edition is now out, with a newly added afterword.
- FHI Research Associate Paul Christiano has joined OpenAI as an intern. Christiano has also written new posts on AI alignment: Efficient and Safely Scalable, Learning with Catastrophes, Red Teams, and The Reward Engineering Problem.