Yoshua Bengio Interview

Published: January 19, 2017
Author: Ariel Conn

The following is an interview with Yoshua Bengio about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Bengio is a Professor of Computer Science and Operations Research at the University of Montreal and Head of the Montreal Institute for Learning Algorithms (MILA).

Q. From your perspective, what were the highlights of the conference?

“The first day, the economists, I really enjoyed.” Bengio especially enjoyed Jeffrey Sachs’s presentation and says he tends to agree with much of what Sachs said. In particular, he liked how the data presented during the economics talks illustrated the effects of automation and gave a hint of what could happen in the future.

“One thing I came with is also … this subject of safe AI came in many discussions, and I would say that these discussions left a strong impression on me. And there was not just discussion about fear, but a lot of interesting technical things, like the presentation by Stuart Russell. He had pretty well thought out proposals. I found that pretty inspiring. In general, the debates about safe AI gave me ideas.”

Paraphrased: One issue that was raised, but not enough, was the potential misuse of AI technology. As the technology becomes more powerful, that may become a bigger issue for everyone’s safety. AI safety can be looked at from two angles – and both were discussed – but most people, like Stuart Russell, talked about the first: the danger of an AI misusing itself, that is, an AI not doing what we meant. But there’s the other safety issue, which is people not using AI in ethical ways.

“I found, in particular, the discussion about the military use very interesting. Heather’s tone was quite different from the other presentations, which was good because I think we do need a wake-up call. In particular, I hadn’t realized that the military were already playing a kind of word game to obfuscate the actual use of AI in weapons. So that got me concerned.

“How do we model or even talk about human values or human ethics in a way that we can get computers to follow those human morals? I think this is a hard question, and there are different approaches to it.

“Wendell Wallach had some interesting things to say, but he came at the issue from a different perspective. The way these issues will be dealt with through machine learning approaches and deep learning, at least in my group, is probably going to bring a very different color.”

ARIEL: “It’s sounding like before the conference, you weren’t thinking about AI safety quite as much?”

YOSHUA: “True. [The conference] has been very useful for me in the context of doing this grant, and understanding better the community – it’s not a community I really knew before, so it has been very useful.”

Q. Why did you choose to sign the AI principles that emerged from discussions at the conference?

“I think it is important to send a message, even if I didn’t agree with all the wordings as they were. Overall, I think they were quite aligned with what I think. I like the idea of having a collective statement because, overall, our group – I don’t think it’s represented by what the media or decision-makers understand of the issues. So I think it is important to send these kinds of messages, and that we have some authority that we should be using.”

Q. Why do you think that AI researchers should weigh in on such issues as opposed to simply doing technical work?

“I feel very strongly that I’m going to be much more comfortable with my own technical work if I know that I’m acting as a responsible person, in general, with respect to this work. And that means talking to politicians, talking to the media, thinking about these issues, even if they’re not the usual technical things. We can’t leave it in the hands of just the usual people. I think we need to be part of the discussion. So I feel compelled to participate. … I’m really happy to see all these young people caring about these issues.”

Q. Explain what you think of the following principles:

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

“I’m a very progressive person, so I feel very strongly that dignity and justice mean wealth is redistributed. And I’m really concerned about AI worsening the effects and concentration of power and wealth that we’ve seen in the last 30 years. So this is pretty important for me.

“I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously – I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Bengio signed the open letter on autonomous weapons, so that says it all.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

“I agree with that.”

Bengio had reservations about the explainability or justifiability of decisions based on AI, because he thinks that may not be technically feasible in the way some people would like. “We have to be careful with that because we may end up barring machine learning from publicly used systems, if we’re not careful.” But he agrees with the underlying principle, which is that “we should be careful that the complexity of AI systems doesn’t become a tool for abusing minorities or individuals who don’t have access to understand how it works.

“I think this is a serious social rights issue, but the solution may not be as simple as saying ‘it has to be explainable,’ because it won’t be.”

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

“I agree, except ‘assured’ is maybe strong. It may not be possible to be completely aligned. There are a lot of things that are innate, which we won’t be able to get by machine learning, and that may be difficult to get by philosophy or introspection, so it’s not totally clear we’ll be able to perfectly align. I think the wording should be something along the lines of ‘we’ll do our best.’ Otherwise, I totally agree.”

Q. Assuming all goes well, what do you think a world with advanced beneficial AI would look like? What are you striving for with your AI work?

“In the last few years, the vast majority of AI research and machine learning research has been very, very much influenced by the IT industry to build the next gadget, better phone, better search engines, better advertising, etc. This has been useful because we can now recognize images much better and understand languages much better. But I believe it’s high time – and I see it happening – that researchers in both academia and industry look at applications of machine learning that are not necessarily going to make a profit, but where the main selling point for doing the research is that you can have a really positive impact for a lot of people. … I would be delighted if we found applications in other areas like the environment or fighting poverty.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.blackfin.biz on January 19, 2017.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
