
Guruduth Banavar Interview

Published:
January 19, 2017
Author:
Ariel Conn


The following is an interview with Guruduth Banavar about the Beneficial AI 2017 conference and the Asilomar Principles that it produced. Banavar is VP of IBM Research and Chief Science Officer of Cognitive Computing at IBM.

Q. From your perspective what were the highlights of the conference?

“Absolutely the best thing was meeting people,” Banavar explained, saying that he had many “transformative conversations.”

Q. Why did you choose to sign the AI principles that emerged from discussions at the conference?

“The general principles, as they’re laid out, are important for the community to rally around and to use to dig deeper into their research.”

He explained that some of the principles are obvious, "naturally based on principles of human rights and social principles." Others still require educating the public and the community, and some need more research.

That's why he signed: he felt that the principles fall into three groups:
1) They resonate with our fundamental rights and liberties.
2) They require education and open discussion in the community.
3) Several require deep research.

Q. Explain what you think of the following principles:

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

“I agreed but it needs rephrasing. This is broader than AI work. Any AI prosperity should be available for the broad population. Everyone should benefit and everyone should find their lives changed for the better. This should apply to all technology – nanotechnology, biotech – it should all help to make life better. But I’d write it as ‘prosperity created by AI should be available as an opportunity to the broadest population.’”

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“Makes sense. I think about the long-term issue off and on, and the general idea is that intelligence as we understand it today is ultimately the ability to process information from all possible sources and to use that to predict the future and to adapt to the future. It is entirely in the realm of possibility that machines can do that. … I do think we should avoid assumptions of upper limits on machine intelligence because I don’t want artificial limits on how advanced AI can be.”

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

“I strongly believe this. I think this goes back to evolution. From the evolutionary point of view, humans have reached their current level of power and control over the world because of intelligence. … AI is augmented intelligence – it’s a combination of humans and AI working together. And this will produce a more productive and realistic future than autonomous AI, which is too far out. In the foreseeable future, augmented AI – AI working with people – will transform life on the planet. It will help us solve the big problems like those related to the environment, health, and education.”

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

“It’s absolutely crucial that individuals should have the right to manage access to the data they generate. … AI does open new insight to individuals and institutions. It creates a persona for the individual or institution – personality traits, emotional make-up, lots of the things we learn when we meet each other. AI will do that too and it’s very personal. I want to control how persona is created. A persona is a fundamental right.”

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

“This one I particularly care about. The community of AI researchers and developers carries a significant responsibility to think about, incorporate, and not compromise our values. The community needs to take this more seriously. It’s a meta-level principle.

“In all cases, we should take more responsibility in incorporating the right principles into AI activities.”

Q. Assuming all goes well, what do you think a world with advanced beneficial AI would look like? What are you striving for with your AI work?

“I look at the future of the world as a place where AI redefines industry, professions, and experts, and it does so in every field. If one looks at the impact from AI on different fields, each one will be redefined. We will be better equipped to solve the hardest problems, like those of global warming, health, and education.”

Read the 23 Principles

This content was first published at futureoflife.blackfin.biz on January 19, 2017.

