Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month if we’re going to move forward on our 2017 plans. We’re in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years.

Since we don’t have an official end-of-the-year fundraiser planned this time around, we’ll be relying more on word-of-mouth to reach new donors. To help us with our expansion plans, donate at https://intelligence.org/donate/ — and spread the word!
Research updates
- Critch gave an introductory talk on logical induction (video) for a grad student seminar, going into more detail than our previous talk.
- New at IAFF: Logical Inductor Limits Are Dense Under Pointwise Convergence; Bias-Detecting Online Learners; Index of Some Decision Theory Posts
- We ran a second machine learning workshop.
General updates
- We ran an “Ask MIRI Anything” Q&A on the Effective Altruism forum.
- We posted the final videos from our Colloquium Series on Robust and Beneficial AI, including Armstrong on “Reduced Impact AI” (video) and Critch on “Robust Cooperation of Bounded Agents” (video).
- We attended OpenAI’s first unconference; see Viktoriya Krakovna’s recap.
- Eliezer Yudkowsky spoke on fundamental difficulties in aligning advanced AI at NYU’s “Ethics of AI” conference.
- A major development: Barack Obama and a recent White House report discuss intelligence explosion, Nick Bostrom’s Superintelligence, open problems in AI safety, and key questions for forecasting general AI. See also the submissions to the White House from MIRI, OpenAI, Google Inc., AAAI, and other parties.
News and links
- The UK Parliament cites recent AI safety work in a report on AI and robotics.
- The Open Philanthropy Project discusses methods for improving individuals’ forecasting abilities.
- Paul Christiano argues that AI safety will require that we align a variety of AI capacities with our interests, not just learning — e.g., Bayesian inference and search.
- See also new posts from Christiano on reliability amplification, reflective oracles, imitation + reinforcement learning, and the case for expecting most alignment problems to arise first as security problems.
- The Leverhulme Centre for the Future of Intelligence has officially launched, and is hiring postdoctoral researchers: details.