Contents
Research updates
- A new paper: “Alignment for Advanced Machine Learning Systems.” Half of our research team will be focusing on this research agenda going forward, while the other half continues to focus on the agent foundations agenda.
- New at AI Impacts: Returns to Scale in Research
- Evan Lloyd represented MIRIxLosAngeles at AGI-16 this month, presenting “Asymptotic Logical Uncertainty and the Benford Test” (slides).
- We’ll be announcing a breakthrough in logical uncertainty this month, related to Scott Garrabrant’s previous results.
General updates
- Our 2015 in review, with a focus on the technical problems we made progress on.
- Another recap: how our summer colloquium series and fellows program went.
- We’ve uploaded our first CSRBAI talks: Stuart Russell on “AI: The Story So Far” (video), Alan Fern on “Toward Recognizing and Explaining Uncertainty” (video), and Francesca Rossi on “Moral Preferences” (video).
- We submitted our recommendations to the White House Office of Science and Technology Policy, cross-posted to our blog.
- We attended IJCAI and the White House’s AI and economics event. Jason Furman’s talk on technological unemployment (video) and other talks are available online.
- Talks from June’s safety and control in AI event are also online. Speakers included Microsoft’s Eric Horvitz (video), FLI’s Richard Mallah (video), Google Brain’s Dario Amodei (video), and IARPA’s Jason Matheny (video).
News and links
- Complexity No Bar to AI: Gwern Branwen argues that computational complexity theory provides little reason to doubt that AI can surpass human intelligence.
- Bill Nordhaus, a leading economist of climate change, has written a paper on the economics of singularity scenarios.
- The Open Philanthropy Project has awarded Robin Hanson a three-year $265,000 grant to study multipolar AI scenarios. See also Hanson’s new argument for expecting a long era of whole-brain emulations prior to the development of AI with superhuman reasoning abilities.
- “Superintelligence Cannot Be Contained” discusses computability-theoretic limits to AI verification.
- The Financial Times runs a good profile of Nick Bostrom.
- DeepMind software reduces Google’s data center cooling bill by 40%.
- In a promising development, US federal regulators argue for the swift development and deployment of self-driving cars to reduce automobile accidents: “We cannot wait for perfect. We lose too many lives waiting for perfect.”
See the original newsletter on MIRI’s website.