Joshua Greene Interview

Published: July 20, 2017
Author: Ariel Conn

The following is an interview with Joshua Greene about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Greene is an experimental psychologist, neuroscientist, and philosopher. He studies moral judgment and decision-making, primarily using behavioral experiments and functional neuroimaging (fMRI). Other interests include religion, cooperation, and the capacity for complex thought. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

ARIEL: “The idea behind this is that the principles were a start. There were criticisms and comments about them, and we wanted to have a discussion about each individual principle to better understand what’s important to people.”

JOSHUA: “Yes, in crafting these principles, the challenge is to make them general enough that there can be some kind of an agreement, but specific and substantive enough that they’re not just completely empty.

“They address a deep moral tension. I think of people’s values as lying along a continuum. There are individualist values, where it’s really just all about me and my rights and my freedom. In the middle we have tribalist values, where I care not just about me, but my group. And then at the other end you have universalist values. If you ask a lot of people in our world, they’ll say, ‘Oh, of course I’m a universalist.’

“But they mean that only up to a point. They would say, ‘Sure, we shouldn’t do anything that would be terrible for all of humanity, but do I have an obligation to give a significant portion of my money to charity? Well, no, that’s a matter of personal preference.’ There’s a challenging, recurring moral question about the extent to which certain valuable resources ought to be common versus private.

“I see these principles as saying, ‘the incredible power of forthcoming artificial intelligence is not just another resource to be controlled by whoever gets there first, whoever gets the patent, whoever has the code. It is a common good that belongs to everybody.’ We may think about this the way many of us think about environmental issues or healthcare. Many of us believe that people have a moral claim to basic healthcare that goes beyond people’s claims to ordinary consumer goods, which they may or may not be able to afford. Likewise, when it comes to the quality of the Earth’s atmosphere or clean water and other environmental resources, many of us think of this as part of our collective endowment as a species, or perhaps as the living things on this particular planet. It’s not just who gets there first and wants to liquidate these assets; it’s not just whatever political structure happens to be in place that should determine what happens to the environment.

“I think what’s valuable and substantive in these principles is the idea of declaring in advance, ‘we don’t think that AI should be just another resource to be allocated according to the contingencies of history, of who happens to be on top when the power emerges, whose lab it happens to come out of, who happens to get the patent. This is part of our story as a species, the story of the evolution of complexity and intelligence on our planet, and we think that this should be understood as a common resource, as part of our endowment as humanity.’”

ARIEL: “As someone who’s looked at that spectrum you were talking about, do you see ways of applying AI so that it’s benefiting people universally? I would be concerned that whoever develops it is going to also have sort of an ‘I’m the most important’ attitude.”

JOSHUA: “Yeah, humans tend to do that! And I think that’s the big worry. I’ve been thinking a lot about Rawls’s and Harsanyi’s idea of a ‘veil of ignorance’. One of the key ideas in Rawls’s theory of justice and Harsanyi’s foundational defense of rule utilitarianism is that a fair outcome is one that you would choose if you didn’t know who you were going to be. The nice thing right now is that as much as we might place bets on some countries and firms rather than others, we really don’t know where this power is going to land first. (Nick Bostrom makes the same point in Superintelligence.)

“We have this opportunity now to lay down some principles about how this should go before we know who the winners and losers are, before we know who would benefit from saying, ‘Actually, it’s a private good instead of a public good.’ I think what’s most valuable about this enterprise is not just the common good principle itself—which is fairly straightforward. It’s the idea of getting that expectation out there before anyone has a very strong, selfish interest in flouting it. Right now, most of us comfortably say that we would like humanity’s most powerful creation to be used for the common good. But as soon as that power lands in somebody’s hands, they might feel differently about that. To me, that’s the real practical significance of what we’re doing: establishing a set of norms, a culture, and a set of expectations while the veil is still mostly on.”

Q. Explain what you think of the following principles:

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

“Cutting corners on safety is essentially saying, ‘My private good takes precedence over the public good.’ Cutting corners on safety is really just an act of selfishness. The only reason to race forward at the expense of safety is if you think that the benefits of racing disproportionately go to you. It’s increasing the probability that people in general will be harmed—a “common bad”, if you like—in order to raise the probability of a private good for oneself.”

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Yeah, I think that one’s kind of a no-brainer – not that there’s anything wrong with saying it. It’s the kind of thing that’s good to be reminded of, but no one’s saying, ‘No, I don’t think they should be safe through their whole lifetime, just part of it.’”

ARIEL: “I think sometimes researchers get worried about how technically feasible some of these are.”

JOSHUA: “I guess it depends what you mean by ‘verifiably.’ Does verifiably mean mathematically, logically proven? That might be impossible. Does verifiably mean you’ve taken some measures to show that a good outcome is most likely? If you’re talking about a small risk of a catastrophic outcome, maybe that’s not good enough.

“Like all principles in this domain, this one leaves wiggle room for interpretation. I think that’s just how it has to be. Good principles are not ones that are locked down and closed to interpretation. Instead, they signal two or more things that need to be balanced. This principle says, ‘Yes, they need to be safe, but we understand that you may never be able to say never.’”

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

“I think that’s an important one because it’s very easy for people to say, ‘Making sure this doesn’t go bad is someone else’s problem.’ There’s a general problem of diffusion of responsibility. The engineers can say, ‘It’s not my job to make sure that this thing doesn’t hurt people. It’s my job to make sure that it works,’ and management says, ‘It’s not my job. My job is to meet our goals for the next quarterly report.’ Then they say it’s the lawmakers’ decision. And the lawmakers say, ‘Well, it’s this commission’s decision.’ Then the commission is filled with people from corporations whose interests may not be aligned with those of the public.

“It’s always someone else’s job. Saying designers and builders have this responsibility, that’s not trivial. What we’re saying here is the stakes are too high for anyone who’s involved in the design and building of these things to say safety is someone else’s problem.”

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

“I think that’s basically another version of the common good principle. We’re saying in advance, before we know who really has it, that this is not a private good. It will land in the hands of some private person, it will land in the hands of some private company, it will land in the hands of some nation first. But this principle is saying, ‘It’s not yours.’ That’s an important thing to say because the alternative is to say that potentially, the greatest power that humans ever develop belongs to whoever gets it first.”

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems to accomplish human-chosen objectives.

“This is an interesting one because it’s not clear what it would mean to violate that rule. What kind of decision could an AI system make that was not in some sense delegated to the system by a human? AI is a human creation. This principle, in practice, is more about what specific decisions we consciously choose to let the machines make. One way of putting it is that we don’t mind letting the machines make decisions, but whatever decisions they make, we want to have decided that they are the ones making those decisions.

“Take, say, a navigating robot that walks on legs like a human: the person controlling it is not going to decide every angle of every movement. The humans won’t be making decisions about where exactly each foot will land, but they will have said, ‘I’m comfortable with the machine making those decisions as long as it doesn’t conflict with some other higher-level command.’

“The worry is when you have machines that are making more complicated and consequential decisions than where to put the next footstep. When you have a machine that can behave in an open-ended, flexible way, how do you delegate anything without delegating everything? When you have someone who works for you and you have some problem that needs to be solved and you say, ‘Go figure it out,’ you don’t specify, ‘But don’t murder anybody in the process. Don’t break any laws and don’t spend all the company’s money trying to solve this one small problem.’ There are assumptions in the background that are unspecified and fairly loose, but nevertheless very important.

“I like the spirit of this principle. It’s a specification of what follows from the more general idea of responsibility, that every decision is either made by a person or specifically delegated to the machine. But this one will be especially hard to implement once AI systems start behaving in more flexible, open-ended ways.”

ARIEL: “Is that a decision that you think each person needs to make, or is that something that a company can make when they’re designing it, and then when you buy it you implicitly accept?”

JOSHUA: “I think it’s a general principle about the choices of humans and machines that cuts across the choices of consumers and producers. Ideally there won’t be any unknown unknowns about what decisions a machine is making. Unknowns are okay, but as much as possible we’d like them to be known unknowns.”

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

“I think that is a bookend to the common good principle – the idea that it’s not okay to be neutral. It’s not okay to say, ‘I just make tools and someone else decides whether they’re used for good or ill.’ If you’re participating in the process of making these enormously powerful tools, you have a responsibility to do what you can to make sure that this is being pushed in a generally beneficial direction. With AI, everyone who’s involved has a responsibility to be pushing it in a positive direction, because if it’s always somebody else’s problem, that’s a recipe for letting things take the path of least resistance, which is to put the power in the hands of the already powerful so that they can become even more powerful and benefit themselves.”

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

“I see this as a practical distillation of the Asilomar Principles. They are not legally binding. At this early stage, it’s about creating a shared understanding that beneficial AI requires an active commitment to making it turn out well for everybody, which is not the default path. To ensure that this power is used well when it matures, we need to have already in place a culture, a set of norms, a set of expectations, a set of institutions that favor good outcomes. That’s what this is about – getting people together and committed to directing AI in a mutually beneficial way before anyone has a strong incentive to do otherwise.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles

This content was first published at futureoflife.blackfin.biz on July 20, 2017.
