
Using History to Chart the Future of AI: An Interview with Katja Grace

Published: June 26, 2017
Author: Tucker Davey


The million-dollar question in AI circles is: When? When will artificial intelligence become so smart and capable that it surpasses human beings at every task?

AI is already visible in the world through job automation, algorithmic financial trading, self-driving cars and household assistants like Alexa, but these developments are trivial compared to the idea of artificial general intelligence (AGI) – AIs that can perform a broad range of intellectual tasks just as humans can. Many computer scientists expect AGI at some point, but hardly anyone agrees on when it will be developed.

Given the unprecedented potential of AGI to create a positive or destructive future for society, many worry that humanity cannot afford to be surprised by its arrival. A surprise is not inevitable, however, and Katja Grace believes that if researchers can better understand the speed and consequences of advances in AI, society can prepare for a more beneficial outcome.


AI Impacts

Grace, a researcher for the Machine Intelligence Research Institute (MIRI), argues that, while we can’t chart the exact course of AI’s improvement, it is not completely unpredictable. Her project AI Impacts is dedicated to identifying and conducting cost-effective research projects that can shed light on when and how AI will impact society in the coming years. She aims to “help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI.”

AI Impacts asks such questions as: How rapidly will AI develop? How much advanced notice should we expect to have of disruptive change? What are the likely economic impacts of human-level AI? Which paths to AI should be considered plausible or likely? Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?

One way to get an idea of these timelines is to ask the experts. In AI Impacts’ 2016 survey of 352 AI researchers, respondents gave a 50 percent chance that AI will outcompete humans at almost every task by around 2060. However, when asked a differently worded version of essentially the same question, the same experts gave a median date roughly seventy-five years later, and their individual answers spanned an enormous range, making it difficult to rule anything out. Grace hopes her research with AI Impacts will inform and improve these estimates.


Learning from History

Some thinkers believe that AI could progress rapidly, without much warning. This is based on the observation that algorithms don’t need factories, and so could in principle progress at the speed of a lucky train of thought.

However, Grace argues that although we have never developed human-level AI before, our vast experience developing other technologies can tell us a lot about what to expect: studying the timelines of those technologies can inform the AI timeline.

In one of her research projects, Grace studies jumps in technological progress throughout history, measuring these jumps in terms of how many years of progress happen in one ‘go’. “We’re interested in cases where more than a decade of progress happens in one go,” she explains. “The case of nuclear weapons is really the only case we could find that was substantially more than 100 years of progress in one go.”

For example, physicists began to consider nuclear energy in 1939, and by 1945 the US successfully tested a nuclear weapon. As Grace writes, “Relative effectiveness doubled less than twice in the 1100 years prior to nuclear weapons, then it doubled more than eleven times when the first nuclear weapons appeared. If we conservatively model previous progress as exponential, this is around 6000 years of progress in one step at previous rates.”
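
The arithmetic behind that figure can be checked with the numbers in the quote alone; this is an illustrative reading under the stated exponential assumption, not AI Impacts’ full calculation. Fewer than two doublings in 1100 years implies a doubling time of at least 550 years, and eleven doublings at that rate come to roughly 6000 years:

```latex
% Back-of-the-envelope check using only the figures quoted above:
% fewer than 2 doublings in 1100 years gives a doubling time of at
% least 550 years; 11 doublings at that rate is about 6000 years.
\[
  t_{\mathrm{double}} \gtrsim \frac{1100~\mathrm{yr}}{2} = 550~\mathrm{yr},
  \qquad
  11 \times 550~\mathrm{yr} \approx 6000~\mathrm{yr}.
\]
```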

Grace also considered the history of high-temperature superconductors. Since the discovery of superconductivity in 1911, peak superconducting temperatures rose slowly, growing from 4K (Kelvin) initially to about 30K by the 1980s. Then in 1986, scientists discovered a new class of ceramics that pushed the maximum temperature to 130K in just seven years. “That was close to 100 years of progress in one go,” she explains.
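
As a minimal sketch of how a ‘years of progress in one go’ metric can be defined, the code below measures a jump against a simple linear prior trend, using the superconductor numbers above. The linear model and exact data points are deliberately crude assumptions; AI Impacts fits trends to its own curated datasets, so this toy version will not reproduce Grace’s figure exactly, but it shows the shape of the calculation.

```python
def years_of_progress_in_one_go(trend_start, trend_end, after_jump):
    """Size of a discontinuity, in years of progress at the prior trend rate.

    Each argument is a (year, value) pair: the first and last points of the
    pre-discontinuity trend, and the point reached after the jump.
    """
    (y0, v0), (y1, v1) = trend_start, trend_end
    (y2, v2) = after_jump
    rate = (v1 - v0) / (y1 - y0)        # prior progress per year
    expected = v1 + rate * (y2 - y1)    # where the old trend would be
    return (v2 - expected) / rate       # excess progress, in trend-years

# Peak superconducting temperature: ~4 K in 1911, ~30 K by 1986 (prior
# trend), then ~130 K by 1993 after the cuprate ceramics were found.
size = years_of_progress_in_one_go((1911, 4.0), (1986, 30.0), (1993, 130.0))
print(f"Discontinuity size: ~{size:.0f} years of prior-trend progress")
```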

Nuclear weapons and superconductors are rare cases – most of the technologies that Grace has studied either don’t demonstrate discontinuity, or only show about 10-30 years of progress in one go. “The main implication of what we have done is that big jumps are fairly rare, so that should not be the default expectation,” Grace explains.

Furthermore, AI’s progress largely depends on how fast hardware and software improve, and those are processes we can observe now. For instance, if hardware progress starts to slow from its long-run exponential trend, we should expect human-level AI to arrive later.

Grace is currently investigating these unknowns about hardware. She wants to know “how fast the price of hardware is decreasing at the moment, how much hardware helps with AI progress relative to e.g. algorithmic improvements, and how custom hardware matters.”
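
As a concrete illustration of the kind of measurement involved, the sketch below fits an exponential trend to price-performance data (FLOPS per dollar) and reports the implied doubling time. The data points and the simple least-squares fit are assumptions made up for illustration; AI Impacts’ actual hardware datasets and methods may differ.

```python
import math

# Hypothetical (year, FLOPS-per-dollar) observations; illustrative only.
observations = [(2008, 5e8), (2011, 2e9), (2014, 9e9), (2017, 4e10)]

# Least-squares fit of log(performance) = a + b * year,
# i.e. an exponential trend in price-performance.
xs = [year for year, _ in observations]
ys = [math.log(perf) for _, perf in observations]
x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
    / sum((x - x_mean) ** 2 for x in xs)

doubling_time = math.log(2) / b  # years for FLOPS per dollar to double
print(f"Implied price-performance doubling time: {doubling_time:.1f} years")
```

If the fitted doubling time lengthens as new observations arrive, that is the kind of observable slowdown Grace describes, and a reason to expect human-level AI later.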


Intelligence Explosion

AI researchers and developers must also be prepared for the possibility of an intelligence explosion – the idea that strong AI will improve its intelligence faster than humans could possibly understand or control.

Grace explains: “The thought is that once the AI becomes good enough, the AI will do its own AI research (instead of humans), and then we’ll have AI doing AI research where the AI research makes the AI smarter and then the AI can do even better AI research. So it will spin out of control.”

But she suggests that this feedback loop isn’t entirely unpredictable. “We already have intelligent humans doing AI research that leads to better capabilities,” Grace explains. “We don’t have a perfect idea of what those things will be like when the AI is as intelligent as humans or as good at AI research, but we have some evidence about it from other places and we shouldn’t just be saying the spinning out of control could happen at any speed. We can get some clues about it now. We can say something about how many extra IQ points of AI you get for a year of research or effort, for example.”
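
One way to make the question concrete is a toy feedback model in which the rate of capability growth depends on current capability. Everything below is an assumption for illustration (the capability units, the feedback exponent alpha, the growth constant k); it is not Grace’s model, but it shows how an empirical estimate of ‘capability gained per year of research’ pins down how fast the loop runs.

```python
def capability_trajectory(c0, alpha, k, years, dt=0.01):
    """Toy feedback loop: research output scales with current capability.

    Integrates dC/dt = k * C**alpha with a simple Euler step. With
    alpha < 1 growth decelerates, alpha == 1 gives ordinary exponential
    growth, and alpha > 1 blows up in finite time: the 'intelligence
    explosion' regime. All parameters are illustrative assumptions.
    """
    c, t, points = c0, 0.0, []
    while t < years and c < 1e12:  # cap to avoid numeric overflow
        points.append((t, c))
        c += k * c ** alpha * dt
        t += dt
    return points

# Compare the three regimes under identical starting conditions.
for alpha in (0.5, 1.0, 1.5):
    final_t, final_c = capability_trajectory(1.0, alpha, 0.5, 20)[-1]
    print(f"alpha={alpha}: capability {final_c:.3g} at t={final_t:.1f} yr")
```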

AI Impacts is an ongoing project, and Grace hopes her research will find its way into conversations about intelligence explosions and other aspects of AI. With better-informed timeline estimates, perhaps policymakers and philanthropists can more effectively ensure that advanced AI doesn’t catch humanity by surprise.

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.


