
Transcript: Life 3.0: Being Human in the Age of Artificial Intelligence

Published:
August 29, 2017
Author:
Tucker Davey


Ariel: Elon Musk has called it a compelling guide to the challenges and choices in our quest for a great future of life on Earth and beyond, while Stephen Hawking and Ray Kurzweil have referred to it as an introduction and guide to the most important conversation of our time. I’m of course speaking of Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence.

I’m Ariel Conn with the Future of Life Institute. I’m happy to have Max here with me today. As most of our listeners will know, Max is co-founder and president of FLI. He’s also a physics professor at MIT, where his research has ranged from cosmology to the physics of intelligence, and he’s currently focused on the interface between AI, physics, and neuroscience. His recent technical papers focus on AI, and typically build on physics-based techniques. He is the author of over 200 publications, as well as his earlier book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality.

Max, thanks for talking with me today.

Max: Thanks for talking with me.

Ariel: Obviously, I want to dive right into your book. AI has been in the news a lot lately, and other books have come out about the potential impact of AI. I want to know, what is it about your book that stands out from all these other reading materials, and what makes it an important read for anyone who wants to understand and prepare for our future?

Max: Well, there’s been lots of talk about AI disrupting the job market and also enabling new weapons, but very few scientists talk seriously about what I think is the elephant in the room. What will happen, once machines outsmart us at all tasks? What’s kind of my hallmark as a scientist is to take an idea all the way to its logical conclusion. Instead of shying away from that question about the elephant, in this book, I focus on it and all its fascinating aspects because I want to prepare the reader to join what I think is the most important conversation of our time.

There are so many fascinating questions here. Will superhuman artificial intelligence arrive in our lifetimes? Can and should it be controlled, and if so, by whom? Can humanity survive in the age of AI? And if so, how can we find meaning and purpose if super-intelligent machines provide for all our needs and make all our contributions superfluous?

Another way in which my book is different is that I’ve written it from my perspective as a physicist doing AI research here at MIT, which lets me explain AI in terms of fundamental principles without getting all caught up in the weeds with technical computer jargon. I hope it’s going to be a lot more accessible.

Ariel: What is it about AI that you think is so important to our future?

Max: We’ve traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans, but from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around. There’s no law of physics that says that we can’t build machines more intelligent than us in all ways. That makes intelligence in AI incredibly important because it suggests that we’ve only seen the tip of the intelligence iceberg, and that there’s amazing potential to unlock the full intelligence that’s latent in nature, and to use it to help humanity either flourish or flounder.

I think if we succeed in building machines that are smarter than us in all ways, it’s going to be either the best thing ever to happen to humanity or the worst thing. I’m optimistic that we can create a great future with AI, but it’s not going to happen automatically. It’s going to require that we really think things through in advance, and really have this conversation now. That’s why I’ve written this book.

Ariel: So to go back just a little bit, how did you as a physics professor get involved in this? What drew you to artificial intelligence?

Max: Ever since I was a teenager, I felt that the two greatest mysteries of science were the mystery out there, our universe, and the mystery in here, in our heads, the mind. In recent years, my nerdy technical research at MIT has shifted increasingly from the cosmos to the physics of intelligence. In my lab we study both intelligent organisms – we look at brains – and intelligent machines – we do AI research. I find this not just intellectually fascinating in its own right, but I also think that understanding at a more fundamental level what intelligence is, and how to make it and how to shape it, is one of the most important things we need to do to create a good future with AI. Because if you have something really powerful that you want to use to help you, you have to understand it so you can trust it.

Ariel: What made you start to think that we need to be addressing the safety issues that are surrounding AI? And sort of along the same lines, what prompted you and the others at FLI to establish the AI safety research grants?

Max: First of all, taking a step back through the 13.8 billion years of our cosmic history here, you know eventually life came along, and we started developing more and more powerful technology. And I’m optimistic that we can create a great future with technology as well, but to do that, we have to win this race between the growing power of the technology, and the growing wisdom with which we manage it.

In the past, the technology has always been feeble enough that we could win that wisdom race simply by learning from mistakes. We invented fire, screwed up, invented the fire extinguisher, done. We invented the car, screwed up a bunch of times, invented the seatbelt and the airbag. With more powerful tech, like nuclear weapons, synthetic biology, and now ultimately I think super-human artificial intelligence, we don’t want to learn from mistakes. That’s a terrible strategy. We want to get things right the first time because that might be the only time we have.

That’s, of course, very much the whole idea behind the Future of Life Institute. That’s why we founded our organization. And when we started it, when we had the very first brainstorming meeting right here in our house, the technology that people generally felt deserved the most attention from us was AI, because there’s been such incredible progress that AI has a real possibility of transforming our world much faster and much more dramatically than, for example, climate change. It’s the biggest technological transformation that’s coming, and it’s going to hit us the soonest.

There’s a huge upside here if we get this right because everything I love about civilization is a product of intelligence. So if we can amplify our intelligence with artificial intelligence, it opens the potential of solving all these thorny problems that plague us today. Right? But it’s just going to require hard work, and I felt that if I’m spending so much of my time anyway working on this with the Future of Life Institute, I might as well align it with my MIT research also, and do artificial intelligence both on my work time and on my so-called free time.

Ariel: There are still a lot of AI researchers who are telling us not to worry, especially about the long-term risks. I’m curious what your response to them is.

Max: First of all, I’m very fortunate that MIT gives us tenured professors a lot of leeway in choosing what to research, so for that reason I’ve been enjoying doing AI research over the last few years. I have a wonderful group of students and post-docs I’m really, really proud of. So if someone has any nitpicks about the AI research I do, I’m happy to talk with them about the geeky stuff. But I feel that’s enabled me to learn a great deal about the AI field, which really helps my Future of Life Institute work.

Second, of course there are people who say we shouldn’t worry, and there are also a lot of very, very senior AI researchers who say that we should take these things very seriously. What I do in the book, is I don’t tell people whether they should worry or not. I don’t tell people what they should think. I simply describe the controversy, and the fact of the matter is there are two very basic questions where the world’s leading AI researchers totally disagree.

One of them is timeline: when, if ever, are we going to get super-human general artificial intelligence? Some people think it’s never going to happen, or will take hundreds of years, and that it’s therefore silly to be concerned about it now. And many others think it’s going to happen in decades, which means we should take it very seriously.

The other controversy, which is equally real, is what’s going to happen if we ever get beyond human-level AI? Some people think it’s going to be pretty much guaranteed to be fine, and that we should think of advanced AI as just a natural next step in evolution. I call this group the digital utopians in my book. Some think machines are just going to be our tools, they’re never really going to be that far beyond humans and we shouldn’t worry for that reason.

And then there are a lot of very serious AI researchers, both leaders in academia and industry, who think that actually, this could be the best thing ever to happen, but it could also lead to huge problems. I think it’s really boring to sit around and quibble about whether we should worry or not. I’m not interested in that. What I’m interested in is asking what concretely can we do today that’s going to increase the chances of things going well because that’s all that actually matters.

That’s why I have, with my Future of Life Institute colleagues and my other AI colleagues, put so much energy into brainstorming and making concrete lists of questions that we need to answer, and then working hard to channel funding into research grants so people can actually tackle those questions and we get the answers by the time we need them.

It’s important to remember that even if we might only need the answers to certain questions in 30 years, it might take 30 years to get the answers because the questions are hard. Right? That’s why it’s so important that we support AI safety research already today, and don’t just start thinking about this the night before some guys on Red Bull switch something on that they don’t understand fully.

Ariel: That brings me to another question that I wanted to ask you. Within the AI safety world, I hear a lot of debate about whether people should focus on just near-term risks or just long-term risks. There seems to be this idea that we need to focus on one or the other. But in your book you cover both, and so I was hoping you could touch on why you think it’s important for us to look at both, even if one might pose a greater risk to humanity than the other.

Max: I think we should obviously focus on both. First of all, this is the most important issue of our time, as I argue in the book. It would be silly to be so stingy with resources that we only focus on some small fraction of the questions. Second, what you’re calling the short-term questions – like how, for example, do you make computers that are robust, and do what they’re supposed to do, and don’t crash and don’t get hacked, stuff like that – it’s not only something that we absolutely need to solve in the short term because it saves lives as AI gets more and more into society, but it’s also a very, very valuable stepping stone toward the tougher questions. I mean seriously, how are you ever going to have any hope of building a superintelligent machine that you’re confident is going to do what you want, if you can’t even build a laptop that does what you want instead of giving you the blue screen of death or the spinning wheel of doom? It’s ridiculous.

Clearly, if you want to go far in one direction, first you take one step in that direction. By getting a lot of researchers galvanized to start tackling these short-term questions about making AI robust, understanding how you can make it trustworthy and transparent, those are things which are also going to help as those researchers keep the momentum and keep going in the same direction with the longer term challenges.

Finally, if you’re going to take on some moonshot, long-term challenge, you’re going to need a lot of really talented researchers educated and interested in these things. How do you get those people? Well, you get them by first providing a lot of funding and so on, so that the community can develop by working on these more concrete near-term things. You can’t just start in a vacuum in 20 years, snap your fingers and expect that there’s going to be this safety research community there at your service.

Ariel: I want to move into what I think are probably some of the more fun topics in your book. Specifically you mention 12 options for what you think a future world with superintelligence will look like. Now, when I was reading these, I would read what the ideal version of each of these is and think, “Oh that sounds nice.” Then you would talk about the pitfalls of them, and it was hard to be quite as optimistic when you look at the pitfalls.

I was wondering if you could talk about what a couple of the future scenarios are that you think are important for people to consider, and then also what are you hopeful for, and what scares you?

Max: Yeah, I confess, I had a lot of fun brainstorming for these different scenarios. The reason I did this was because I feel that when we as a society envision the future, we almost inadvertently obsess about gloomy stuff. Future visions in Hollywood tend to be dystopic because fear sells more. But if I have a student that comes into my office for career planning, and I ask her, “Hey, where do you want to be in 20 years?” And she says, “Oh I think I might have been run over by a tractor, and maybe I’ll have cancer.” That’s a terrible strategy. I would like her to have a spark in her eyes and tell me, “This is my vision. This is where I want to be in 20 years.” Then we can talk about the pitfalls and how to navigate around them and make a good strategy.

I wrote this because I think we humans need to have that same conversation. Instead of just talking about how to avoid gloomy stuff and cancer and unemployment, and how to avoid wars and whatnot, we really need these positive visions, to think what kind of society would we like to have if we have enough intelligence at our disposal to eliminate poverty, disease, and so on? What are the positive things we’re trying to build?

I’m not claiming to have the answers to this, nor should I. What I want to do with the book is encourage everybody, next time they’re at a party with their friends and so on, to talk about not just the usual stuff, but to talk about this. Even when you watch the Presidential election, the kinds of things that politicians promise, which are supposed to be positive, are just so uninspiring. You know, increase this thing by 5% and blah.

If you think about Kennedy’s moon speech, that was inspirational, but that’s nothing compared to what you can do if we manage to do things right with AI. Basically, the whole history of human forecasting has been a giant underestimation of what we can do. Right? We thought things were impossible, or would take thousands of years. If it turns out that AI can help us solve these challenges within our lifetime, or within hundreds of years, what do we want to do with that?

I tried to write the whole book in this optimistic spirit to get more people thinking, but since you asked me about what I worry about, I’ll make a few confessions there too. I’m an optimist that we can create a great future, but it’s not the kind of optimism I have that the sun is going to rise tomorrow – namely, optimism that it’s going to happen automatically, no matter what we do. It’s what my friend and colleague Erik Brynjolfsson called ‘mindful optimism’: I’m optimistic that we can create a great future if we really plan and work hard for it.

So what do we need to do? Let me just give you one example: if we have very powerful AI systems, it’s absolutely crucial that their goals are aligned with our goals. Now that involves a lot of questions, which we don’t have the answer to. Like how do you make a computer learn our goals? It’s hard enough to make our kids learn our goals, right? Let alone adopt our goals. How do we make sure that AI will retain those goals if it keeps getting progressively smarter? Kids change their goals a lot as they grow older. Maybe they get less excited about Lego, and more excited about other stuff, right? We don’t want to create machines that at first are very excited about helping us, and then later get as bored with us as our kids get with their Legos. It’s like, “Next!”

Finally, what should the goals be that we want these machines to safeguard? What values? There’s obviously no consensus on Earth for that. Should it be Donald Trump’s goals? Hillary Clinton’s goals? Should it be ISIS’s goals? Whose goals should it be? How should this be decided?

I think this conversation can’t just be left to tech nerds like myself. It has to involve everybody because it’s everybody’s future that’s at stake here.

Ariel: A question that I have is: we talk about AI being able to do all these amazing things, from ending poverty to solving the world’s greatest questions. I’m sort of curious, if we actually create an AI or multiple AI systems that can do this, what do we do then?

Max: That’s one of those huge questions that I think everybody should be discussing. I wrote this book so that people can educate themselves enough about the situation to really contribute to the discussion. If you take a short-term view, suppose we get machines that can just do all our jobs and produce all our goods and services for us. The first challenge is: how do you want to distribute this wealth that’s produced? If you come up with some sort of system where everybody gets at least some share of the wealth, maybe through some taxation and the government helping them out, then everybody basically gets a free vacation for the rest of their life. That sounds a lot better than permanent unemployment.

On the other hand, if I own all the AI technology, and decide not to share any of the stuff with anybody else, causing mass starvation, that’s less fun. So this question about how we should share the bounty of AI is huge. There’s a big cultural divide there, typically between western Europe, where there’s a bit more tradition of higher taxes and trying to have a social safety net, versus the U.S., where there’s great resistance towards that.

A second question is, just because you take care of people materially doesn’t mean they’re going to be happy. Right? There are many examples in history, even of princes in the Middle Ages, who had all the money they needed and then destroyed themselves with opium. How do you create a society where people can flourish and find meaning and purpose in their lives even if they are not necessary as producers? Even if they don’t need to have jobs? Those are questions that cannot be left to tech geeks like myself, again. We need psychologists and so many other people to contribute to this discussion.

I’m optimistic that this too can be solved, because I know a very large group of people who seem perfectly happy about not having a job, namely kids. But it’s a conversation we need to have.

Ariel: Then moving much, much farther into the future, you have a whole chapter that’s dedicated to the cosmic endowment, and what happens in the next billion years and beyond. Why do you think we should care about something so far into the future?

Max: Yeah, I have to confess, I really unleashed my inner geek and let it run there on that one, because I’m a physicist and I’ve spent so much time thinking about the cosmos. I couldn’t resist the temptation to think about that. But frankly, I think it’s actually really inspirational to contemplate the enormous potential for the future of life. You know? You might have said, “Well, a billion years ago we had some boring microorganisms here on Earth doing their thing. Why don’t we just quit while we’re ahead and leave life like that forever?” That would have been a bit of a bummer. We’d have entirely missed out on humanity that way. Right? And been stuck with some bacteria.

Life could flourish so much more than it was doing a billion years ago. Yet, today also, we’re in a situation where it’s obvious that our universe is largely dead, and there’s so much more potential for life. The vast majority of the space out there, as far as we can tell with our best telescopes, is not alive. There’s not much happening there. A lot of people think from watching Sci-Fi movies that there are all these intelligent aliens everywhere having a sort of Star Trek existence, but there’s precious little hard evidence for it right now.

I think it’s a beautiful idea if our cosmos can continue to wake up more, and life can flourish here on Earth, not just for the next election cycle, but for billions of years and throughout the cosmos. We have over a billion planets in this galaxy alone, which are very nice and habitable. Then we have about a hundred billion other galaxies out there. I think it’s so pathetic when we quibble about who’s going to have a piece of sand somewhere in the Middle East or whatever on Earth, when there’s just so much more potential if we raise our eyes and think big, for life to flourish. I think if we think big together, this can be a powerful way for us to put our differences aside on Earth and unify around the bigger goal of seizing this great opportunity.

I also think we have a special responsibility as humans because we humans, as far as we know so far, are the only life form in our universe that’s gotten sophisticated enough that we’ve built telescopes and been able to see all the stuff that’s out there. If you think those galaxies out there are beautiful, they’re beautiful because someone is conscious of them and observing them. Right? If we were to just blow it by some really poor planning with our technology and go extinct, and we were to forfeit this entire future where our cosmos could be teeming with life for billions of years, wouldn’t we really have-

Ariel: It would be a lost opportunity, yeah.

Max: Yeah, failed in our responsibility. I think this place and this time that we’re in right now might be the most significant place and time in the history of our cosmos. I talk about that possibility towards the end of the book. I have no idea, if we manage to help life flourish in the future, what these future life forms are going to think about us billions of years from now. But they would certainly not think of us as insignificant, because it might be what we do here on our planet right now, in this century, that makes the difference.

Ariel: So I’m going to go in a completely different direction. You mentioned we can appreciate the beauty of the galaxies because we’re conscious of them, and you have an entire chapter dedicated to consciousness as well, which frankly leads to lots and lots of questions – even how we can tell if something is conscious. But not getting into that, I just want to know: what do you see as both the risks and the benefits of creating, either intentionally or not, an AI that has consciousness?

Max: First of all, I’m a physicist, so as far as I’m concerned, Ariel, you’re a blob of quarks. No offense. I don’t think there’s any secret sauce in your brain, beyond the quarks and other elementary particles there, that explains why you’re so good at processing information in ways that I consider intelligent, and why you have this subjective experience that I call consciousness. I think it’s something to do with the very elaborate patterns in which your quarks and other particles are moving around. So I explore in this book in great detail what it is that makes a blob of matter intelligent. What is it that makes a blob of matter able to remember, compute, learn, and even, in some cases, have experiences – the way you experience colors and sounds and emotions – which we call consciousness? Right?

I think that there is a lot of confusion in this area. If you worry about some machine doing something bad to you, consciousness is a complete red herring. It doesn’t matter if that machine or robot or whatever … If you’re chased by a heat-seeking missile for example, you don’t give a hoot whether it has a subjective experience or what it feels like to be that missile, or whether it feels like anything. You wouldn’t say to yourself, “Oh I’m not worried about this missile because it’s not conscious.” All you worry about is what the missile does, not how it feels. Right?

Ariel: Yep.

Max: Consciousness is, on the other hand, important for other things. First of all, if you’re an emergency room doctor and you have an unresponsive patient, it would be really great if someone had a device that could scan this patient and tell you whether they have locked-in syndrome – whether there’s someone home or not. Second, in the future, if we create very intelligent machines – if you have a helper robot, for example, that you can have conversations with and that says pretty interesting things – wouldn’t you want to know if it feels like something to be that helper robot, if it’s conscious, or if it’s just a zombie pretending to have these experiences? If you knew that it didn’t have any feelings or experiences at all, you wouldn’t feel the least bit guilty about switching it off. Right?

Ariel: Mm-hmm.

Max: Or even telling it to do very boring chores, but if you knew that it felt conscious much like you do, that would put it ethically in a very different situation. It could make you feel guilty, wouldn’t it?

Ariel: Yes.

Max: Yeah, and then that raises the question, if you have a helper robot, would you want it to be conscious or not? You might say, “Well I want them to just be in zombie mode,” so you don’t have to feel guilty. On the other hand, maybe it would creep you out a little bit that it keeps acting this way and making you feel that it’s conscious even though it’s just faking it. Maybe you would even like to have a button on it where you could toggle it between zombie mode and conscious mode depending on the circumstances.

And even more importantly, if in the future we start creating cyborgs, or maybe intelligent beings that we view in some sense as our descendants – we’re very proud of what they can do, they have our values, they go out and do all these great things that we couldn’t do, and we feel proud of them as our children – that whole positive outlook would get completely ruined if you also happened to know that they’re actually zombies and don’t experience anything. Because then, if we humans eventually go extinct and our legacy is continued by them but there’s nobody experiencing anything, it’s as if our whole universe has died, for all intents and purposes.

As far as I’m concerned, it’s not our universe giving meaning to us, it’s we conscious beings giving meaning to our universe. That’s where meaning comes from. If there’s nobody experiencing anything, our whole cosmos just goes back to being a giant waste of space. I think it’s going to be very important for these various reasons to understand what it is about information processing that gives rise to what we call consciousness.

I’m optimistic, as I talk about in the book, that we can figure this out. I even talk about experiments you can do to try to pin things down. I think it should be part of the list of questions that we should try to answer before any superintelligence arrives.

Ariel: So you actually then, hope that we are able to create an AI that has consciousness?

Max: What I first of all hope we can do is get answers to these big questions. I think we need to answer a lot of these questions before we make any irrevocable decisions we can’t take back.

Ariel: Okay.

Max: So I’m keeping a very open mind as to what the best path forward is. What I think we should really, really do is rally around these tough questions, work really hard on trying to answer them, and base our decisions on the answers. The decision about whether to create some form of superintelligence, and if so, what form and what goals it should have – that’s the most important decision that humanity will ever make, and we shouldn’t just bumble into it without thinking it through. This should be the most premeditated and most carefully researched decision we ever make.

Ariel: All right. I think this sort of segues into the next question that I have, and that’s, I was hoping you could talk a little bit about probability, especially as it relates to risks and hopes for the future. Why and when should we concern ourselves with outcomes that have low probabilities, for example?

Max: First of all, I don’t think, and most of my AI colleagues also don’t think, that the probability is very low that we will eventually be able to replicate human intelligence in machines. The question isn’t so much “if,” although there are certainly a few detractors out there; the bigger question is “when.” Even if it’s not going to happen for a hundred years, this is a good time to start talking about it. We’re talking plenty about climate change effects in 100 years, so why shouldn’t we talk about something much more dramatic, which might happen in 100 years?

Second, there are many leading researchers who think it’s going to happen in decades. When we did the last poll at the Asilomar meeting about this, the median guess people had was, you know, some decades from now. Yeah, some people thought hundreds of years. Some people thought sooner, but I think personally, it’s not at all implausible that it might happen within decades, so this is the perfect time to really start working hard on our homework.

If there is somebody who thinks the probability of succeeding in building human-level general AI and beyond is very small, like 1%, what I would say to them is, “Hey, do you have home insurance?” The probability of your house catching fire is less than 1%, but you still buy it. So you could still make the argument that we should buy fire insurance for our civilization, just in case.

But as I said, I don’t even think the probability is particularly small. A lot of people, I think, are dismissive about AI progress because they think that there’s somehow some sort of secret sauce involved in human intelligence. But as a physicist, I think we are our particles.

Ariel: That addresses sort of the probability of advanced AI in the future, but what about looking at, say, the different options that you consider for the different directions that humanity could move? Do you think those have an equal probability, or do you think some are more likely than others? How should we be trying to address this if we think one direction is worse than another?

Max: I would like to take on all the challenges in a principled order. Obviously, we’re going to get great job dislocations. I talk in the book about what kind of career advice you should give your kids right now. We’re obviously on the cusp of an arms race in lethal autonomous weapons, so we should try right now to prevent that from happening, which is what AI researchers overwhelmingly want to avoid. They want to use their awesome technology for good, not just create new, unstoppable arms races.

As we look to the bigger questions of human-level intelligence, if you give a society a technology that’s too powerful for their wisdom, it’s kind of like walking into a daycare center and saying, “Hey, here’s a box of hand grenades. Have fun playing with this stuff. I’m not going to give you instructions.”

We’ve always had people who, for whatever reason, had weird grudges and wanted to kill as many people as possible for their own weirdo reasons. And there will always be such people in the future as well. The difference is that in the Stone Age, one lunatic couldn’t do that much damage with a rock and a stick, whereas today, as we’re seeing for example with nuclear weapons, one lunatic can do a lot of damage. Lethal autonomous weapons take that up another notch, lowering the cost of the technology needed for mass destruction from billions of dollars to thousands or hundreds of dollars.

Finally, if we start getting closer to human-level AI, there’s an enormous Pandora’s Box, which I think we want to open very carefully and just make sure that if we build these very powerful systems, they should have enough safeguards built into them already that some disgruntled ex-boyfriend isn’t going to use that for a vendetta, or some ISIS member isn’t going to use that for their latest plot.

This isn’t just sort of pie-in-the-sky dreaming about very far-future things. These are things we can work on with safety research right now, today. Think about how preventable September 11 was, for example. We had these airplanes with autopilots and computers on board, but nobody had put enough intelligence into them to even have a rule saying, “Do not under any circumstances fly into buildings.” That was not something the manufacturers ever wanted a plane to do, but the airplane was completely clueless about human values. It’s not that hard to do something like this. Something like that would also have stopped Andreas Lubitz, who was suicidal, from flying that Germanwings jet into the Alps by setting the autopilot to 100 meters. It doesn’t take more than a few lines of code to say that under no circumstances should airplanes allow their autopilot to fly at a lower altitude than the mountains, which are right there in their map database. Right?

This kind of baby ethics, where you take sort of human values that pretty much everybody agrees on, and put that into today’s systems, is a very good starting point, I think, towards ultimately getting more sophisticated about having machines learn and adopt and retain human values.
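To make that idea concrete, here is a minimal sketch, in Python, of the kind of hard altitude constraint Max describes. Everything in it is hypothetical: the names (AutopilotCommand, terrain_elevation_m, constrain_altitude) and the 300-meter safety margin are invented purely for illustration and do not correspond to any real avionics software.

```python
# Minimal sketch of a "baby ethics" constraint: never let the autopilot
# command an altitude below the terrain ahead plus a safety margin.
# All names and numbers here are hypothetical, invented for illustration.

from dataclasses import dataclass

SAFETY_MARGIN_M = 300.0  # minimum clearance above terrain, in meters


@dataclass
class AutopilotCommand:
    latitude: float
    longitude: float
    target_altitude_m: float


def terrain_elevation_m(latitude: float, longitude: float) -> float:
    """Stand-in for a lookup in the aircraft's onboard terrain database.

    A real system would query elevation data; here we return a fixed
    value so the sketch runs on its own.
    """
    return 2500.0  # e.g. flying over the Alps


def constrain_altitude(cmd: AutopilotCommand) -> AutopilotCommand:
    """Refuse any commanded altitude below terrain plus the safety margin."""
    floor = terrain_elevation_m(cmd.latitude, cmd.longitude) + SAFETY_MARGIN_M
    if cmd.target_altitude_m < floor:
        # Override the unsafe request rather than obeying it blindly.
        return AutopilotCommand(cmd.latitude, cmd.longitude, floor)
    return cmd


# A request to descend to 100 m over high terrain gets overridden:
unsafe = AutopilotCommand(latitude=44.28, longitude=6.44, target_altitude_m=100.0)
safe = constrain_altitude(unsafe)
print(safe.target_altitude_m)  # 2800.0 (2500 m terrain + 300 m margin)
```

The point of the sketch is only that, once terrain data is available, such a floor can be expressed in a handful of lines; a real flight-control system would of course involve far more engineering and certification.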

Ariel: I want to end on an optimistic note because as you said, you’re very optimistic, and the goal of the book is to present a very optimistic future. How can the average concerned citizen get more involved in this conversation, so that we can all have a more active voice in guiding the future of humanity and life?

Max: I think everybody can contribute, and people should figure out what ways they can contribute best. If you are listening to this, and you are an AI researcher, then I would very much encourage you to find out about what some of these cool technical problems are that we need to answer to make our AI systems more trustworthy and safe and spend some time working on it.

If you are someone who has some money to donate, I would encourage you to give some of it to one of the many nonprofit organizations, including the Future of Life Institute, that’s funding AI safety research because right now, almost all the funding is going into just making AI more powerful. Almost none of it is going into developing the wisdom to guarantee it’ll be beneficial.

If you’re a politician, or you have any contact with your local politicians, encourage them also to make sure that funding for AI safety becomes just an integral part of the standard computer science funding that’s provided in your country. If you are any human at all, I would also encourage you to join this conversation and do it in an informed way. I wrote the book precisely for you then, so that you can get the scoop on what’s going on to the point where you can really contribute to this conversation.

As I said, one of the really huge questions is simply this: we’re building this more and more powerful ‘rocket engine’ that can steer humanity into some future, but where do we want to steer it? What kind of future do we want to aim for? What kind of society do you personally feel excited about envisioning 50 years down the road, and far beyond that, for future generations?

Talk to your friends about this. It’s a great party topic. It’s a great conversation really any time, and we’ve set up a website, ageofai.org, where we’re encouraging everybody to come and share their ideas for how they would like the future to be. I hope you, Ariel, can help me pull out some of the coolest ideas there into a nice synthesis, because I think we really need the wisdom of everybody to chart a future worth aiming for. And if we don’t know what kind of future we want, we’re not going to get it.

Ariel: On that note, is there anything else that you want to add that you think we didn’t cover?

Max: I would just add that I find this topic incredibly fascinating and fun, even aside from being important. That’s one of the reasons I had such a fun time writing this book.

Ariel: All right. Well thank you so much for joining us. The book is Life 3.0: Being Human in the Age of Artificial Intelligence, and we highly encourage everyone to visit ageofai.org. We’ll also have that on the website. Max, thank you so much.

Max: Thank you.

[end of recorded material]


