
Stefano Ermon Interview

Published: January 27, 2017
Author: Ariel Conn


The following is an interview with Stefano Ermon about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory.

Q: From your perspective, what were the highlights of the conference?

“I really liked the technical part at the beginning. I saw a lot of really concrete research problems that people can start working on, and I thought that people had made a lot of interesting progress in the last year or so. It was really nice to see all these smart people working on these problems and coming up with questions and partial solutions – it’s like the beginning of a new research area.”

Q: Why did you choose to sign the AI principles that emerged from discussions at the conference?

“It seemed balanced. The only worry is that you don’t want it to be too extreme, but I thought it did a very good job of coming up with principles that I think lots of people can potentially agree on. It identifies some important issues that people should be thinking about more, and if by signing that letter we can get slightly more attention to the problem, then I think that’s a good thing to do.”

Q: Why do you think that AI researchers should weigh in on such issues as opposed to simply doing technical work?

“Because there might actually be a technical solution to some of these problems, but not to all of them. There are some inherent tradeoffs that people will have to discuss, and we will have to come up with the right ways to balance everybody’s needs and the different instabilities of different problems. But on some of the issues I think we should try to do as much as possible by trying to find technological solutions, and I think that would make the discussion more scientific. In this way it’s not purely based on speculation and we don’t leave it to non-experts, but it becomes more grounded in what AI really is.”

Q: Explain what you think of the following principles.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
“I think it’s very important that we make sure that AI is really for everybody’s benefit – that it’s not just going to be benefitting a small fraction of the world’s population, or just a few large corporations. And I think there is a lot that can be done by AI researchers just by working on very concrete research problems where AI can have a huge impact. I’d really like to see more of that research work done.”

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I’m not a fan of wars, and I think an arms race could be extremely dangerous. Obviously I think that the technology has a huge potential, and even just with the capabilities we have today it’s not hard to imagine how it could be used in very harmful ways. I don’t want my contributions to the field, or any of the techniques we’re all developing, to do harm to other humans, to develop weapons, to start wars, or to be even more deadly than what we already have.”

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“I think that it’s always hard to predict the future. At the moment I don’t think there is any consensus on the limits of what AI can do, so it’s better not to make any assumptions about what we will not be able to achieve. Think about what people were imagining a hundred years ago about what the future would look like. At the beginning of the last century they were asking, ‘How will the future look in 100 years?’ And I think it would’ve been very hard for them to imagine what we have today. I think we should take a similar, very cautious view when making predictions about the future. If it’s extremely hard, then it’s better to play it safe.”

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“It’s an incredibly powerful technology. I think it’s even hard to imagine what one could do if we are able to develop a strong AI, but even before that, well before that, the capabilities are really huge. We’ve seen the kind of computers and information technologies we have today, the way they’ve revolutionized our society, our economy, our everyday lives. And my guess is that AI technologies would have the potential to be even more impactful and even more revolutionary on our lives. And so I think it’s going to be a big change and it’s worth thinking very carefully about, although it’s hard to plan for it.”

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“I think that’s a big immediate issue. I think when the general public thinks about AI safety, maybe they think about killer robots or these kind of apocalyptic scenarios, but there are big concrete issues like privacy, fairness, and accountability. The more we delegate decisions to AI systems, the more we’re going to run into these issues. Privacy is definitely a big one, and one of the most valuable things that these large corporations have is the data they are collecting from us, so we should think about that carefully.”

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“It seems like a natural thing to do. What else would you do? It’s hard to imagine not trying to achieve this goal. Why would you ever want to develop a highly intelligent system that is designed to harm us? It is something that I think the majority of people would agree on, but the issue, of course, is to define what exactly these values are, because people have different cultures, come from different parts of the world, and have different socioeconomic backgrounds, so they will have very different opinions on what those values are. That’s really the challenge. But assuming it’s possible to agree on a set of values, then I think it makes sense to strive for those and develop technology that will allow us to get closer to those goals.”

Q: Assuming all goes well, what do you think a world with advanced beneficial AI would look like? What are you striving for with your AI work?

“I think there’s hopefully going to be greater prosperity. Hopefully we’re going to be able to achieve a more sustainable society, we’re going to speed up the scientific discovery process dramatically, we might be able to discover new sources of clean energy, and we might find ways to manage the planet in a more sustainable way. It’s hard to imagine, but the potential to really create a better society could be huge.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.blackfin.biz on January 27, 2017.

