
John C. Havens Interview

Published: April 19, 2017
Author: Ariel Conn


The following is an interview with John C. Havens about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Havens is the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and Hacking H(app)iness – Why Your Personal Data Counts and How Tracking It Can Change the World, and previously worked as the founder of both The H(app)athon Project and Transitional Media.

Q. Explain what you think of the following principles:

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

“I love the word ‘beneficial.’ I think sometimes inherently people think that intelligence, in one sense, is always positive. Meaning, because something can be intelligent, or autonomous, and that can advance technology, that that is a ‘good thing’. Whereas the modifier ‘beneficial’ is excellent, because you have to define: What do you mean by beneficial? And then, hopefully, it gets more specific, and it’s: Who is it beneficial for? And, ultimately, what are you prioritizing? So I love the word beneficial.”

4) Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of AI.

“I love the sentiment of it, and I completely agree with it. By the way, I should say, I love how these principles provide these one-line, excellent, very pragmatic ideas. So I want to make that as a preface. But, that said, I think defining what a culture of cooperation, trust, and transparency is… what does that mean? Where the ethicists come into contact with the manufacturers, there is naturally going to be the potential for polarization, where people on the creation side of the technology feel their research or their funding may be threatened. And on the ethicists or risk or legal compliance side, they feel that the technologists may not be thinking of certain issues. However, in my experience, the ethicists – I’m being very general, just to make a point – but the ethicists, etc., or the risk and compliance folks may be tasked with a somewhat outdated sense of the word ‘safety.’ Where, for instance, I was talking to an engineer the other day who was frustrated because they were filling out an IRB-type form, and the question was asked: Could this robotic product be used for a military purpose? And when he really thought about it, he had to say yes. Because sure, can a shovel be used for a military purpose? Sure!

“I’m not being facetious; the ethicist, the person asking, was well intentioned. And what they probably meant to say – and this is where accountability, certification, and these processes come in; as much as I know people don’t love processes, it’s really important – is that you build that culture of cooperation, trust, and transparency when both sides say, as it were, ‘Here’s the information we really need to progress our work forward. How do we get to know what you need more, so that we can address that well with these questions?’ You can’t just say, ‘Let’s develop a culture of cooperation, trust, and transparency.’ How do you do it? This sentence is great, but the next sentence should be: Give me a next step to make that happen.”

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

“I couldn’t agree more, not just because I’m working with the IEEE Standards Association – full disclosure – but we have to re-invent, we have to help people re-imagine what safety standards mean. If it’s this sort of onerous thing of checklists for compliance, then of course people are going to try to cut corners, because it’s time-consuming and boring. That’s the impression; I’m not saying that’s accurate. But if by going over safety you’re now asking: What is my AI system? How will it interact with end users or stakeholders in the supply chain touching it and coming into contact with it, where there are humans involved, where it’s system-to-human vs. system-to-system? Safety is really about asking about people’s values. It’s not just physical safety, it’s also: What about their personal data, what about how they’re going to interact with this? So the reason you don’t want to cut corners is you’re also cutting innovation. You’re cutting the chance to provide a better product or service, because the word ‘safety’ in and of itself should now be expanded in the AI world to mean emotional and wellbeing safety for individuals, where then you’re going to discover all these wonderful ways to build more trust with what you’re doing when you take the time you need to go over those standards.”

ARIEL: “I like that, and I guess I haven’t thought about that. How do you convince people that safety is more than just physical safety?”

JOHN: “Sure. Think of an autonomous vehicle. Right now, understandably, the priority is: How do we make sure this thing doesn’t run into people, right? Which is a good thing to think about. But I’m going to choose one vehicle, in the future, over another, because I have been given proof that the vehicle does not harvest my physiological and facial and eye-tracking data when I get in that car. The majority of them do. Physical safety is one issue, but the safety of how my data is transferred is critical. Sometimes it could be life and death, you know – what if I have a medical condition and the car reads it incorrectly and all that? But it’s also things like: how do we want the data about where we are and about our health revealed, to which actors, and when? So, that’s an example. Where, again, I’m using a larger scope for safety, but it really is important, especially as we’re moving into a virtual realm, where safety is also about mental health safety. Meaning, if you wear, say, a Facebook Oculus Rift – a lot of people are saying social VR is the future. You’ll check Facebook while you’re in virtual reality. How you’re presented with Facebook stuff right now – not just ads, but posts – can be really depressing, right? Depending on the time and place you look at it. That’s not Facebook’s fault, by the way, but the way that those things are presented, the algorithms, etc., what they choose, can be of their design. And so in terms of mental safety, mental wellbeing, it is also a really critical issue to think about right now.”

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Yes. Although I don’t know who would say AI systems shouldn’t be safe and secure. So I would say words that further explain ‘safe and secure’ would be great. Meaning, AI systems that are physically safe, that provide increased wellbeing, whatever. And ‘secure throughout their operational lifetime’: I think what’s interesting to me is, ‘throughout their operational lifetime’ is actually the more important part of the sentence, because that’s about sustainability and longevity.

“And my favorite part of the sentence is ‘and verifiably so.’ That is critical. Because that means, even if you and I don’t agree on what ‘safe and secure’ means, if we do agree on verifiability, then you can go, ‘Well, here’s my certification, here’s my checklist.’ And I can go, ‘Great, thanks.’ I can look at it and say, ‘Oh, I see you got things 1-10, but what about 11-15?’ ‘Verifiably’ is a critical part of that sentence.”

ARIEL: “Considering that, what’s your take on ‘applicable and feasible?’”

JOHN: “I think they’re modifiers that kind of destroy the sentence. It’s like, ‘oh, I don’t feel like being applicable, that doesn’t matter here, because that’s personal data, and, you know, based on the terms and conditions.’ Or feasible, you know, ‘it’s an underwater system, it’s going to be too hard to reach in the water.’ ‘Safe and secure where applicable and feasible’ – you have those words in there, and I feel like anyone’s going to find a problem with every single thing you come up with. So I would lose those words if you want them to be more powerful.”

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

“I like the phrase ‘stakeholders in the moral implications.’ But I think you have to expand beyond the ‘designers and builders’ to say the ‘designers, builders,’ and – however you’d wordsmith this – ‘the organizations or manufacturers of advanced AI systems.’ Because a lot of times, with engineers, what we’ve been finding is that you’re systematically handed blueprints to build something, and so then you are a stakeholder, but you’re not really in control because someone said ‘build this, and you have to do it because I’m telling you to do it, and we have to make our quarterly numbers.’ So, if you have ethical reservations or moral reservations, you can either whistleblow or quit. That’s kind of where we are. So it has to be enlarged, and the responsibility has to fall on the shareholders, the manufacturers, however you want to phrase that, so that it puts responsibility on the whole organization, not just the people whose hands actually touch and build the AI systems. Does that make sense?”

ARIEL: “Yeah, actually this example makes me think of the Manhattan Project, where people didn’t even know what they were doing. I don’t know how often engineers find themselves in a situation where they are working on something and don’t actually know what they’re working on.”

JOHN: “Not being an engineer, I’m not sure. It’s a great point, but I also think then the responsibility falls back to the manufacturers. I understand secrecy, I understand IP… I think there’d have to be some kind of public, ethical board, like the Google DeepMind ethics board, which, I know, still doesn’t exist… But if there was some public way of saying, as a company, ‘This is our IP. We’re going to have these engineers who are under our employment produce something and then be completely not responsible, from a legal standpoint, for what they create…’ But that doesn’t happen, unless they’re a private contractor, and they’d still be responsible.

“So, I don’t know. It’s a tough call, but I think it’s a cop-out to say, ‘Well, the engineers didn’t know what they were building.’ That means you don’t trust them enough to tell them, you’re trying to avoid culpability and risk, and it means that if engineers do build something, it’s kind of A or B: either they don’t know what they’re building, and it turns out to be horrible in their mind, and they feel really guilty; or they do know what they’re building, and they can’t do anything about it. So, it’s a situation that needs to evolve.”

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

“Yes, it’s great. I think if you can put a comma after it, and say, ‘as many people as possible,’ something like, ‘issues of wealth, GDP notwithstanding,’ the point being, what this implies is that whatever someone can afford, it should still benefit them. But a couple of sentences maybe about the differences between developed and developing countries would be really interesting, because I certainly support the idea of it, but realistically, right now, GDP drives the manufacture of most things. And GDP is exponential growth, and that favors the companies that can afford it. Which is not necessarily evil, but by definition it means that it will not benefit as many people as possible. So this is purely aspirational, unless you add some modifiers to it.”

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

“Yes.”

ARIEL: “So, this is one I think is interesting, because, instinctively, my reaction is to say yes. But then I think of these examples of, say, planes, where we’re finding that the planes’ autonomous systems are actually better at flying than some of the pilots. And do we actually want the pilots to be choosing to make a bad decision with a plane, or do we want the plane to take power away from the pilot?”

JOHN: “Until systems can universally show that humans can be completely out of the loop and that, more often than not, it will be beneficial, I think humans need to be in the loop. However, the research I’ve seen also shows that right now is the most dangerous time, where humans are told, ‘Just sit there, the system works 99% of the time, and we’re good.’ That’s the most dangerous situation, because then, even if the humans are really well trained, they may go for six weeks or they may go for six hours before something negative happens. So I think it still has to be humans delegating first. But in the framework of the context we’ve talked about here, where the systems are probably going to be doing pretty well and humans are in the loop, it’s a good choice to make, plus we should have lots of continued training to demonstrate the system is going to stay useful.”

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than for one state or organization.

“This sentence has a lot in it. Superintelligence – and I know FLI has deep expertise in this – that’s a very tough term. Because it usually implies some kind of sentience, like artificial general intelligence is superintelligence. But without defining what superintelligence means… I would say, define that word a little bit more.

“And then, ‘only be developed in the service of widely shared ethical ideals.’ That part of the sentence is incredibly difficult, because widely-shared ethical ideals… whose ideals? What do you mean by ‘widely-shared?’ Established human rights criteria make it a lot easier to talk about these things, because then you can point to actual UN rules – it doesn’t mean you have to agree with them – but they are widely established.

“And ‘for the benefit of all humanity, rather than for one state or organization,’ yes, again, that works for me. But, what does benefit mean? And also, ‘one state or organization…’ If I’m reading this as a government, what do I do? I’m still New Jersey, I’m still the United States, I’m still Israel. Of course we’re going to have to prioritize our own needs. In general, I think the sentence is fine, it’s just that there’s so much to it to unpack. There could be a lot of modifiers to it that I think would make it stronger. Hopefully that’s helpful.”

ARIEL: “Yeah, this is exactly the kind of discussion that we want. Going back to what you were saying earlier, and even here, I’m curious what you think the next steps would be if we wanted to see these principles actually put into action; the next steps to have principles that are generally accepted as we move forward in the development of AI. Or to make these stronger and easier to follow.”

JOHN: “Well, what I’m finding with my experience at IEEE is that the more you want principles to be accepted, the bigger the challenge: to make them universal, you risk making them less specific, less pragmatic, and potentially less strong than they need to be. Which is hard. For example, the Asimov robotics law that says machines shouldn’t harm humans. Most people right away go, ‘Yeah, I agree with that.’ And then someone is like, ‘What about a medical robot that needs to operate on a person?’ And you’re like, ‘Oh yeah, medical robots…’ So it becomes hard in that regard.

“So this is why we’re doing a lot of work on our wellbeing committee to define specifics on what it means to increase positive human benefit with AI. Fortunately, there are already a lot of fantastic metrics along these lines. For instance, the OECD has the Better Life Index, which contains indicators that measure – quantitatively and qualitatively – wellbeing beyond GDP. This is part of a whole movement, a group of metrics comprising the Beyond GDP movement. So, for instance, one way you can measure whether what you’re building is for the benefit of all humanity is to say, using these metrics from the OECD or the UN Development Goals, we’re going to try and increase these ten Beyond GDP metrics. When these ten things are increased in a positive way, that is demonstrably increasing either human wellbeing or positive benefit for humanity. Then you can point to it and say, ‘That’s what we mean by increasing human benefit.’”

ARIEL: “Was there anything else in general about the principles that you wanted to comment on?”

JOHN: “Just, again, I think they’re great in the sense that this is so much more than just Asimov’s Principles, because obviously those were science fiction and very short…”

ARIEL: “And designed to be broken.”

JOHN: “Exactly, a conundrum by design. I really like how you’ve broken it up: Research, Longer-Term Issues, the three sections. And I think, especially in terms of really core things… it’s very meaty, and people can get their teeth around it. In general, I think it’s fantastic.”

Join the discussion about the Asilomar AI Principles!

Read the 23 Principles


