AI Alignment Podcast: On the Governance of AI with Jade Leung
In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.
Topics discussed in this episode include:
- The landscape of AI governance
- GovAI's research agenda and priorities
- Aligning government and companies with ideal governance and the common good
- Norms and efforts in the AI alignment community in this space
- Technical AI alignment vs. AI Governance vs. malicious use cases
- Lethal autonomous weapons
- Where we are in terms of our efforts and what further work is needed in this space
You can take a short (3 minute) survey to share your feedback about the podcast here.
Important timestamps:
0:00 Introduction and updates
2:07 What is AI governance?
11:35 Specific work that Jade and the GovAI team are working on
17:21 Windfall clause
21:20 Policy advocacy and AI alignment community norms and efforts
27:22 Moving away from short-term vs long-term framing to a stakes framing
30:44 How do we come to ideal governance?
40:22 How can we contribute to ideal governance through influencing companies and government?
48:12 US and China on AI
51:18 What more can we be doing to positively impact AI governance?
56:46 What is more worrisome, malicious use cases of AI or technical AI alignment?
01:01:19 What is more important/difficult, AI governance or technical AI alignment?
01:03:49 Lethal autonomous weapons
01:09:49 Thinking through tech companies in this space and what we should do
Two key points from Jade:
"I think one way in which we need to rebalance a little bit, as kind of an example of this is, I'm aware that a lot of the work, at least that I see in this space, is sort of focused on very aligned organizations and non-government organizations. So we're looking at private labs that are working on developing AGI. And they're more nimble. They have more familiar people in them, we think more similarly to those kinds of people. And so I think there's an attraction. There's really good rational reasons to engage with the folks because they're the ones who are developing this technology and they're plausibly the ones who are going to develop something advanced.
"But there's also, I think, somewhat biased reasons why we engage, is because they're not as messy, or they're more familiar, or we see more value aligned. And I think this early in the field, putting all our eggs in a couple of very, very limited baskets, is plausibly not that great a strategy. That being said, I'm actually not entirely sure what I'm advocating for. I'm not sure that I want people to go and engage with all of the UN conversations on this because there's a lot of noise and very little signal. So I think it's a tricky one to navigate, for sure. But I've just been reflecting on it lately, that I think we sort of need to be a bit conscious about not group thinking ourselves into thinking we're sort of covering all the basis that we need to cover."
"I think one thing I'd like for people to be thinking about... this short term v. long term bifurcation. And I think a fair number of people are. And the framing that I've tried on a little bit is more thinking about it in terms of stakes. So how high are the stakes for a particular application area, or a particular sort of manifestation of a risk or a concern.
"And I think in terms of thinking about it in the stakes sense, as opposed to the timeline sense, helps me at least try to identify things that we currently call or label near term concerns, and try to filter the ones that are worth engaging in versus the ones that maybe we just don't need to engage in at all. An example here is that basically I am trying to identify near term/existing concerns that I think could scale in stakes as AI becomes more advanced. And if those exist, then there's really good reason to engage in them for several reasons, right?...Plausibly, another one would be privacy as well, because I think privacy is currently a very salient concern. But also, privacy is an example of one of the fundamental values that we are at risk of eroding if we continue to deploy technologies for other reasons : efficiency gains, or for increasing control and centralizing of power. And privacy is this small microcosm of a maybe larger concern about how we could possibly be chipping away at these very fundamental things which we would want to preserve in the longer run, but we're at risk of not preserving because we continue to operate in this dynamic of innovation and performance for whatever cost. Those are examples of conversations where I find it plausible that there are existing conversations that we should be more engaged in just because those are actually going to matter for the things that we call long term concerns, or the things that I would call sort of high stakes concerns."
We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.
Transcript
Lucas: Hey, everyone. Welcome back to the AI Alignment Podcast. I'm Lucas Perry. And today, we will be speaking with Jade Leung from the Center for the Governance of AI, housed at the Future of Humanity Institute. Their work strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. They focus on the political challenges arising from transformative AI, and seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, and her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.
In this episode, we discuss GovAI's research agenda and priorities, the landscape of AI governance, how we might arrive at ideal governance, the dynamics and roles of both companies and states within this space, how we might be able to better align private companies with what we take to be ideal governance. We get into the relative importance of technical AI alignment and governance efforts on our path to AGI, we touch on lethal autonomous weapons, and also discuss where we are in terms of our efforts in this broad space, and what work we might like to see more of.
As a general bit of announcement, I found all the feedback coming in through the SurveyMonkey poll to be greatly helpful. I've read through all of your comments and thoughts, and am working on incorporating feedback where I can. So for the meanwhile, I'm going to leave the survey up, and you'll be able to find a link to it in the description of wherever you might find this podcast. Your feedback really helps and is appreciated. And, as always, if you find this podcast interesting or useful, consider sharing it with others who might find it valuable as well. And so, without further ado, let's jump into our conversation with Jade Leung.
So let's go ahead and start by providing a little bit of framing on what AI governance is, the conceptual landscape that surrounds it. What is AI governance, and how do you view and think about this space?
Jade: I think the way that I tend to think about AI governance is with respect to how it relates to the technical field of AI safety. In both fields, the broad goal is how humanity can best navigate our transition towards a world with advanced AI systems in it. The technical AI safety agenda and the kind of research that's being done there is primarily focused on how do we build these systems safely and well. And the way that I think about AI governance with respect to that is broadly everything else that's not that. So that includes things like the social, political, economic context that surrounds the way in which this technology is developed and built and used and employed.
And specifically, I think with AI governance, we focus on a couple of different elements of it. One big element is the governance piece. So what are the kinds of norms and institutions we want around a world with advanced AI serving the common good of humanity. And then we also focus a lot on the kind of strategic political impacts and effects and consequences of the route on the way to a world like that. So what are the kinds of risks, social, political, economic? And what are the kinds of impacts and effects that us developing it in sort of sub-optimal ways could have on the various things that we care about.
Lucas: Right. And so just to throw out some other cornerstones here, because I think there's many different ways of breaking up this field and thinking about it, and this sort of touches on some of the things that you mentioned. There's the political angle, the economic angle. There's the military. There's the governance and the ethical dimensions.
Here on the AI Alignment Podcast, before, we've at least been breaking the taxonomy down sort of into the technical AI alignment research, which is getting machine systems to be aligned with human values and desires and goals, and then the sort of AI governance, the strategy, the law stuff, and then the ethical dimension. Do you have any preferred view or way of breaking this all down? Or is it all just about good to you?
Jade: Yeah. I mean, there are a number of different ways of breaking it down. And I think people also mean different things when they say strategy and governance and whatnot. I'm not particularly excited about getting into definitional debates. But maybe one way of thinking about what this word governance means is, at least I often think of governance as the norms, and the processes, and the institutions that are going to, and already do, shape the development and deployment of AI. So I think a couple of things are worth underlining in that. The word governance isn't just specifically government and regulation; I think that's a broadening of the term which is worth pointing out, because that's a common misconception, I think, when people use the word governance.
So when I say governance, I mean government and regulation, for sure. But I also mean: what are other actors doing that aren't governments? So labs, researchers, developers, NGOs, journalists, et cetera, and also other mechanisms that aren't regulation. So it could be things like reputation, financial flows, talent flows, public perception, what's within and outside the Overton window, et cetera. So there's a number of different levers I think you can pull if you're thinking about governance.
It's probably worth also pointing out, I think, when people say governance, a lot of the time people are talking about the normative side of things: so what should it look like, and how could it be if it were good? A lot of governance research, at least in this space now, is very much descriptive. So it's kind of like what's actually happening, and trying to understand the landscape of risk, the landscape of existing norms that we have to work with, what's a tractable way forward with existing actors? How do you model existing actors in the first place? So a fair amount of the research is very descriptive, and I would qualify that as AI governance research, for sure.
Another way of breaking it down is according to the research agenda that we put out. So that kind of breaks it down into firstly understanding the technological trajectory, so that's understanding where this technology is likely to go, what are the technical inputs and constraints, and particularly the ones that have implications for governance outcomes. This looks like things like modeling AI progress and mapping capabilities, and involves a fair amount of technical work.
And then you've got the politics cluster, which is probably where a fair amount of the work is at the moment. This is looking at political dynamics between powerful actors. So, for example, my work is focusing on big firms and government and how they relate to each other, but also includes how AI transforms and impacts political systems, both domestically and internationally. This includes the cluster around international security and the race dynamics that fall into that. And then also international trade, which is a thing that we don't talk about a huge amount, but politics also includes this big dimension of economics in it.
And then the last cluster is this governance cluster, which is probably the most normative end of what we would want to be working on in this space. This is looking at things like what are the ideal institutions, infrastructure, norms, mechanisms that we can put in place now/in the future that we should be aiming towards that can steer us in robustly good directions. And this also includes understanding what shapes the way that these governance systems are developed. So, for example, what roles does the public have to play in this? What role do researchers have to play in this? And what can we learn from the way that we've governed previous technologies in similar domains, or with similar challenges, and how have we done on the governance front on those bits as well. So that's another way of breaking it down, but I've heard more than a couple of ways of breaking this space down.
Lucas: Yeah, yeah. And all of them are sort of valid in their own ways, and so we don't have to spend too much time on this here. Now, a lot of these things that you've mentioned are quite macroscopic effects in the society and the world, like norms and values and developing a concept of ideal governance and understanding actors and incentives and corporations and institutions and governments. Largely, I find myself having trouble developing strong intuitions about how to think about how to impact these things because it's so big it's almost like the question of, "Okay, let's figure out how to model all of human civilization." At least all of the things that matter a lot for the development and deployment of technology.
And then let's also think about ideal governance, like what is also the best of all possible worlds, based off of our current values, that we would like to use our model of human civilization to bring us closer towards? So being in this field, and exploring all of these research threads, how do you view making progress here?
Jade: I can hear the confusion in your voice, and I very much resonate with it. We're sort of consistently confused, I think, in this space. And it is both a very big set of questions, and a big space to kind of wrap one's head around. I want to emphasize that this space is very new, and people working in this space are very few, at least with respect to AI safety, for example, which is still a very small field that feels as though it's growing, which is a good thing. We are at least a couple of years behind, both in terms of size, but also in terms of sophistication of thought and sophistication of understanding what are more concrete/sort of decision relevant ways in which we can progress this research. So we're working hard, but it's a fair ways off.
One way in which I think about it is to think about it in terms of what actors are making decisions now/in the near to medium future, that are the decisions that you want to influence. And then you sort of work backwards from that. I think at least, for me, when I think about how we do our research at the Center for the Governance of AI, for example, when I think about what is valuable for us to research and what's valuable to invest in, I want to be able to tell a story of how I expect this research to influence a decision, or a set of decisions, or a decision maker's priorities or strategies or whatever.
Ways of breaking that down a little bit further would be to say, you know, who are the actors that we actually care about? One relatively crude bifurcation is focusing on those who are in charge of developing and deploying these technologies, firms, labs, researchers, et cetera, and then those who are in charge of sort of shaping the environment in which this technology is deployed, and used, and is incentivized to progress. So that's folks who shape the legislative environment, folks who shape the market environment, folks who shape the research culture environment, and expectations and whatnot.
And with those two sets of decision makers, you can then boil it down into what are the particular decisions they are in charge of making that you can decide you want to influence, or try to influence, by providing them with research insights or doing research that will in some downstream way affect the way they think about how these decisions should be made. And a very, very concrete example would be to pick, say, a particular firm. And they have a set of priorities, or a set of things that they care about achieving within the lifespan of that firm. And they have a set of strategies and tactics that they intend to use to execute on that set of priorities. So you can either focus on trying to shift their priorities towards better directions if you think they're off, or you can try to point out ways in which their strategies could be done slightly better, e.g. they should be coordinating more with other actors, or they should be thinking harder about openness in their research norms. Et cetera, et cetera.
Well, you can kind of boil it down to the actor level and the decision specific level, and get some sense of what it actually means for progress to happen, and for you to have some kind of impact with this research. One caveat with this is that I think if one takes this lens on what research is worth doing, you'll end up missing a lot of valuable research being done. So a lot of the work that we do currently, as I said before, is very much understanding what's going on in the first place. What are the actual inputs into the AI production function that matter and are constrained and are bottle-necked? Where are they currently controlled? There are a number of other things which are mostly just descriptive; I can't tell you which decision I'm going to influence by understanding them. But having a better baseline will inform better work across a number of different areas. I'd say that this particular lens is one way of thinking about progress. There's a number of other things that it wouldn't measure, that are still worth doing in this space.
Lucas: So it does seem like we gain a fair amount of tractability by just thinking, at least short term, about who the key actors are, and how we might be able to guide them in a direction which seems better. I think here it would also be helpful if you could let us know what the actual research is that you and, say, Allan Dafoe engage in on a day to day basis. So there's analyzing historical cases. I know that you guys have done work on specifying your research agenda. You have done surveys of American attitudes and trends in opinions on AI. Jeffrey Ding has also released a paper on deciphering China's AI dream, which tries to understand China's AI strategy. You've also released a report on the malicious use of artificial intelligence. So, I mean, what is it like being Jade on a day-to-day basis, trying to conquer this problem?
Jade: The specific work that I've spent most of my research time on to date sort of falls into the politics/governance cluster. And basically, the work that I do is centered on the assumption that there are things that we can learn from a history of trying to govern strategic general purpose technologies well. And if you look at AI, and you believe that it has certain properties that make it strategic, strategic here in the sense that it's important for things like national security and economic leadership of nations and whatnot. And it's also a general purpose technology, in that it has the potential to do what general purpose technologies (GPTs) do, which is to sort of change the nature of economic production, push forward a number of different frontiers simultaneously, enable consistent cumulative progress, and change core organizational functions like transportation, communication, et cetera.
So if you think that AI looks like a strategic general purpose technology, then the claim is something like: in history we've seen a set of technologies that plausibly have the same traits. So the ones that I focus on are biotechnology, cryptography, and aerospace technology. And the question that sort of kicked off this research is, how have we dealt with the very fraught competition that we currently see in the space of AI when we've competed across these technologies in the past? And the reason why there's a focus on competition here is because, I think, one important thing that characterizes a lot of the reasons why we've got a fair number of risks in the AI space is that we are competing over it. "We" here being very powerful nations, very powerful firms. And the reason why competition is an important thing to highlight is that it both exacerbates and causes a number of risks.
So when you're in a competitive environment, actors are normally incentivized to take larger risks than they otherwise would rationally do. They are largely incentivized to not engage in the kind of thinking that is required to think about public goods governance and serving the common benefit of humanity. And they're more likely to engage in thinking that is more about serving parochial, sort of private, interests.
Competition is bad for a number of reasons. Or it could be bad for a number of reasons. And so the question I'm asking is, how have we competed in the past? And what have been the outcomes of those competitions? Long story short, the research that I do basically dissects these cases of technology development, specifically in the US. And I analyze the kinds of conflicts, and the kinds of cooperation, that have existed between the US government and the firms that were leading technology development, and also the researcher communities that were driving these technologies forward.
Other pieces of research that are going on: we have a fair number of our researchers working on understanding what are the important inputs into AI that are actually progressing us forward. How important is compute relative to algorithmic structures, for example? How important is talent, with respect to other inputs? And then the reason why that's important to analyze and useful to think about is understanding who controls these inputs, and how they're likely to progress in terms of future trends. So that's an example of the technology forecasting work.
In the politics work, we have a pretty big chunk on looking at the relationship between governments and firms. So this is a big piece of work that I've been doing, along with a fair number of others: understanding, for example, if the US government wanted to control AI R&D, what are the various levers that they have available that they could use to do things like seize patents, or control research publications, or exercise things like export controls, or investment constraints, or whatnot. And the reason why we focus on that is because my hypothesis is that ultimately you're going to start to see states get much more involved. At the moment, we're in this period of time that a lot of people describe as very private sector driven, with governments behind. I think, and history would also suggest, that the state is going to be involved much more significantly very soon. So understanding what they could do, and what their motivations are, is important.
And then, lastly, on the governance piece, a big chunk of our work here is specifically on public opinion. So you've mentioned this before. But basically, a big, substantial chunk of our work, consistently, is just understanding what the public thinks about various issues to do with AI. So recently, we published a report on the recent set of surveys that we did of the American public. And we asked them a variety of different questions and got some very interesting answers.
So we asked them questions like: What risks do you think are most important? Which institution do you trust the most to do things with respect to AI governance and development? How important do you think certain types of governance challenges are for American people? Et cetera. And the reason why this is important for the governance piece is because governance ultimately needs to have sort of public legitimacy. And so the idea was that understanding how the American public thinks about certain issues can at least help to shape some of the conversation around where we should be headed in governance work.
Lucas: So there's also been work here, for example, on capabilities forecasting. And I think Allan and Nick Bostrom also come at these from slightly different angles sometimes. And I'd just like to explore all of these so we can get all of the sort of flavors of the different ways that researchers come at this problem. Was it Ben Garfinkel who did the offense-defense analysis?
Jade: Yeah.
Lucas: So, for example, there's work on that. That work was specifically on trying to understand how the offense-defense balance scales as capabilities change. This could have been done with nuclear weapons, for example.
Jade: Yeah, exactly. That was an awesome piece of work by Allan and Ben Garfinkel, looking at this concept of the offense-defense balance, which exists for weapon systems broadly. And they were sort of analyzing and modeling. It's a relatively theoretical piece of work, trying to model how the offense-defense balance changes with investments. And then there was a bit of an investigation there specifically on how we could expect AI to affect the offense-defense balance in different types of contexts. The other cluster of work, which I failed to mention as well, is a lot of our work on policy specifically. So this is where projects like the windfall clause would fall in.
Lucas: Could you explain what the windfall clause is, in a sentence or two?
Jade: The windfall clause is an example of a policy lever, which we think could be a good idea to talk about in public and potentially think about implementing. And the windfall clause is an ex-ante voluntary commitment by AI developers to distribute profits from the development of advanced AI for the common benefit of humanity. What I mean by ex-ante is that they commit to it now. So an AI developer, say a given AI firm, will commit to, or sign, the windfall clause prior to knowing whether they will get to anything like advanced AI. And what they commit to is saying that if I hit a certain threshold of profits, so what we call windfall profit, and the threshold is very, very, very high. So the idea is that this should only really kick in if a firm really hits the jackpot and develops something that is so advanced, or so transformative in the economic sense, that they get a huge amount of profit from it at some sort of very unprecedented scale.
So if they hit that threshold of profit, this clause will kick in, and that will commit them to distributing their profits according to some kind of pre-committed distribution mechanism. And the idea with the distribution mechanism is that it will redistribute these profits along the lines of ensuring that sort of everyone in the world can benefit from this kind of bounty. There's a lot of different ways in which you could do the distribution. And we're about to put out the report which outlines some of our thinking on it. And there are many more ways in which it could be done besides what we talk about.
But effectively, what you want in a distribution mechanism is you want it to be able to do things like rectify inequalities that could have been caused in the process of developing advanced AI. You want it to be able to provide a financial buffer to those who've been technologically unemployed by the development of advanced AI. And then you also want it to do some positive things too. So it could be, for example, that you distribute it according to meeting the sustainable development goals. Or it could be redistributed according to a scheme that looks something like a UBI, and that transitions us into a different type of economic structure. So there are various ways in which you could play around with it.
Effectively, the windfall clause is starting a conversation about how we should be thinking about the responsibilities that AI developers have to ensure that if they do luck out, or if they do develop something that is as advanced as some of what we speculate we could get to, there is a responsibility there. And there also should be a committed mechanism there to ensure that that is balanced out in a way that reflects the way that we want this value to be distributed across the world.
And that's an example of a policy lever that is sort of uniquely concrete, in that we don't actually do a lot of concrete research. We don't do much policy advocacy work at all. But to the extent that we want to do some policy advocacy work, it's mostly with the motivation that we want to be starting important conversations about robustly good policies that we could be advocating for now, that can help steer us in better directions.
Lucas: And fitting this into the research threads that we're talking about here, this goes back to, I believe, Nick Bostrom's Superintelligence. And so it's sort of predicated on more foundational principles, which can be traced to before the Asilomar Conference, but also to the Asilomar principles, which were developed in 2017: that the benefits of AI should be spread widely, and there should be abundance. And so then there are these sort of specific policy implementations or mechanisms by which we are going to realize these principles, which form the foundation of our ideal governance.
So Nick has sort of done a lot of this work on forecasting. The forecasting in Superintelligence was less about concrete timelines, and more about the logical conclusions of the kinds of capabilities that AI will have, fitting that into our timeline of AI governance thinking, with ideal governance at the end of that. And then behind us, we have history, which we can, as you're doing yourself, try to glean more information about how what you call general purpose technologies affect incentives and institutions and policy and law and the reaction of government to these new powerful things. Before we brought up the windfall clause, you were discussing policy at FHI.
Jade: Yeah, and one of the reasons why it's hard is because if we put on the frame that we mostly make progress by influencing decisions, we want to be pretty certain about what kinds of directions we want these decisions to go, and what we would want these decisions to be, before we engage in any sort of substantial policy advocacy work to try to make that actually a thing in the real world. I am very, very hesitant about our ability to do that well, at least at the moment. I think we need to be very humble about thinking about making concrete recommendations because this work is hard. And I also think there is this dynamic, at least, in setting norms, and particularly legislation or regulation, but also just setting up institutions, in that it's pretty slow work, but it's very path dependent work. So if you establish things, they'll be sort of here to stay. And we see a lot of legacy institutions and legacy norms that are maybe a bit outdated with respect to how the world has progressed in general. But we still struggle with them because it's very hard to get rid of them. And so the kind of emphasis on humility, I think, is a big one. And it's a big reason why basically policy advocacy work is quite slim on the ground, at least in the moment, because we're not confident enough in our views on things.
Lucas: Yeah, but there's also this tension here. The technology's coming anyway. And so we're sort of on this timeline to get the right policy stuff figured out. And here, when I look at, let's just take the Democrats and the Republicans in the United States, and how they interact. Generally, in terms of specific policy implementation and recommendation, it just seems like different people have various dispositions and foundational principles which are at odds with one another, and that policy recommendations are often not substantially tested, or the result of empirical scientific investigation. They're sort of a culmination and aggregate of one's very broad, squishy intuitions and modeling of the world, and different intuitions one has. Which is sort of why, at least at the policy level, seemingly in the United States government, it seems like a lot of the conversation is just endless arguing that gets nowhere. How do we avoid that here?
Jade: I mean, this is not just specifically an AI governance problem. I think we just struggle with this in general as we try to do governance and politics work in a good way. It's a frustrating dynamic. But I think one thing that you said definitely resonates and that, a bit contra to what I just said. Whether we like it or not, governance is going to happen, particularly if you take the view that basically anything that shapes the way this is going to go, you could call governance. Something is going to fill the gap because that's what humans do. You either have the absence of good governance, or you have somewhat better governance if you try to engage a little bit. There's definitely that tension.
One thing that I've recently been reflecting on, in terms of things that we under-prioritize in this community, because it's sort of a bit of a double-edged sword of being very conscientious about being epistemically humble and being very cautious about things, and trying to be better calibrated and all of that, which are very strong traits of people who work in this space at the moment. But I think almost because of those traits, too, we undervalue, or we don't invest enough time or resources in, just trying to engage in existing policy discussions and existing governance institutions. And I think there's also an aversion to engaging in things that feel frustrating and slow, and that's plausibly a mistake, at least in terms of how much attention we pay to it, because in the absence of our engagement, these things are still going to happen anyway.
Lucas: I must admit that as someone interested in philosophy, I've resisted, for a long time now, the idea of governance in AI, at least casually, in favor of nice, calm, cool, rational conversations at tables that you might have with friends about values, and ideal governance, and what kinds of futures you'd like. But as you're saying, and as Allan says, that's not the way that the world works. So here we are.
Jade: So here we are. And I think one way in which we need to rebalance a little bit, as kind of an example of this is, I'm aware that a lot of the work, at least that I see in this space, is sort of focused on very aligned organizations and non-government organizations. So we're looking at private labs that are working on developing AGI. And they're more nimble. They have more familiar people in them, we think more similarly to those kinds of people. And so I think there's an attraction. There's really good rational reasons to engage with the folks because they're the ones who are developing this technology and they're plausibly the ones who are going to develop something advanced.
But there's also, I think, somewhat biased reasons why we engage, is because they're not as messy, or they're more familiar, or we feel more value aligned. And I think this early in the field, putting all our eggs in a couple of very, very limited baskets, is plausibly not that great a strategy. That being said, I'm actually not entirely sure what I'm advocating for. I'm not sure that I want people to go and engage with all of the UN conversations on this because there's a lot of noise and very little signal. So I think it's a tricky one to navigate, for sure. But I've just been reflecting on it lately, that I think we sort of need to be a bit conscious about not group thinking ourselves into thinking we're sort of covering all the bases that we need to cover.
Lucas: Yeah. My view on this, and this may be wrong, is just looking at the EA community, and the alignment community, and all that they've done to try to help with AI alignment. It seems like a lot of talent feeding into tech companies. And there's minimal efforts right now to engage in actual policy and decision making at the government level, even for short term issues like disemployment and privacy and other things. The AI alignment is happening now, it seems.
Jade: On the noise to signal point, I think one thing I'd like for people to be thinking about, I'm pretty annoyed at this short term v. long term bifurcation. And I think a fair number of people are. And the framing that I've tried on a little bit is more thinking about it in terms of stakes. So how high are the stakes for a particular application area, or a particular sort of manifestation of a risk or a concern.
And I think thinking about it in the stakes sense, as opposed to the timeline sense, helps me at least try to identify things that we currently call or label near term concerns, and try to filter the ones that are worth engaging in versus the ones that maybe we just don't need to engage in at all. An example here is that basically I am trying to identify near term/existing concerns that I think could scale in stakes as AI becomes more advanced. And if those exist, then there's really good reason to engage in them for several reasons, right? One is this path dependency that I talked about before, so norms that you're developing around, for example, privacy or surveillance. Those norms are going to stick, and the ways in which we decide we want to govern that, even with narrow technologies now, those are the ones we're going to inherit, grandfather in, as we start to advance this technology space. And then I think you can also just get a fair amount of information about how we should be governing the more advanced versions of these risks or concerns if you engage earlier.
I think there are actually, probably, even just off the top of my head, a couple I can think of which seem to have scalable stakes. So, for example, a very existing conversation in the policy space is about this labor displacement problem and automation. And that's the thing that people are freaking out about now, to the extent that you have legislation and bills and whatnot being passed, or being talked about at least. And you've got a number of people running on political platforms on the basis of that kind of issue. And that is an existing concern, given automation to date. But it's also plausibly a huge concern as this stuff is more advanced, to the point of economic singularity, if you wanted to use that term, where you've got vast changes in the structure of the labor market and the employment market, and you can have substantial transformative impacts on the ways in which humans engage and create economic value and production.
And so existing automation concerns can scale into large scale labor displacement concerns, can scale into pretty confusing philosophical questions about what it means to conduct oneself as a human in a world where you're no longer needed in terms of employment. And so that's an example of a conversation which I wish more people were engaged in right now.
Plausibly, another one would be privacy as well, because I think privacy is currently a very salient concern. But also, privacy is an example of one of the fundamental values that we are at risk of eroding if we continue to deploy technologies for other reasons: efficiency gains, or for increasing control and centralizing power. And privacy is this small microcosm of a maybe larger concern about how we could possibly be chipping away at these very fundamental things which we would want to preserve in the longer run, but we're at risk of not preserving because we continue to operate in this dynamic of innovation and performance at whatever cost. Those are examples of conversations where I find it plausible that there are existing conversations that we should be more engaged in, just because those are actually going to matter for the things that we call long term concerns, or the things that I would call sort of high stakes concerns.
Lucas: That makes sense. I think that trying on the stakes framing is helpful, and I can see why. It's just a question about what are the things today, and within the next few years, that are likely to have a large effect on a larger end that we arrive at with transformative AI. So we've got this space of all these four cornerstones that you guys are exploring. Again, this has to do with the interplay and interdependency of technical AI safety, politics, policy, ideal governance, economics, the military balance and struggle, and race dynamics, all here with AI, on our path to AGI. So starting here with ideal governance, and we can see how we can move through these cornerstones: what is the process by which ideal governance is arrived at? How might this evolve over time as we get closer to superintelligence?
Jade: I have maybe a couple of thoughts, mostly about what I think a desirable process is that we should follow, or what kind of desired traits we want to have in the way that we get to ideal governance, and what ideal governance could plausibly look like. I think that's the extent to which I maybe have thoughts about it. And they're quite obvious ones, I think. The governance literature has said a lot about what constitutes morally sound, politically sound, socially sound governance processes, or design of governance processes.
So those are things like legitimacy and accountability and transparency. I think there are some interesting debates about how important certain goals are, either as end goals or as instrumental goals. So, for example, I'm not clear where my thinking is on how important inclusion and diversity are as we're aiming for ideal governance, so I think that's an open question, at least in my mind.
There are also things to think through around what's unique about trying to aim for ideal governance for a transformative general purpose technology. We don't have a very good track record of governing general purpose technologies at all. I think we have general purpose technologies that have integrated into society and have served a lot of value. But that's not from having had good governance of them. I think we've been some combination of lucky and somewhat thoughtful sometimes, but not consistently so. If we're staking the claim that AI could be a uniquely transformative technology, then we need to ensure that we're thinking hard about the specific challenges that it poses. It's a very fast-moving emerging technology, and governments historically have always been relatively slow at catching up. But you also have certain capabilities that you can realize by developing, for example, AGI or superintelligence, which governance frameworks or institutions have never had to deal with before. So thinking hard about what's unique about this particular governance challenge, I think, is important.
Lucas: It seems like often, ideal governance is arrived at through massive suffering under previous political systems, like how the form of governance that the founding fathers of the United States came up with was sort of an expression of the suffering they experienced at the hands of the British. And so I guess if you track historically how we've shifted from feudalism and monarchy to democracy and capitalism and all these other things, it seems like governance is a large, slowly reactive process born of revolution. Whereas here, what we're actually trying to do is have foresight and wisdom about what the world should look like, rather than trying to learn from some mistake or some un-ideal governance we generate through AI.
Jade: Yeah, and I think that's also another big piece of it. Another way of thinking about how to get to ideal governance is to aim for a period of time, or a state of the world, in which we can actually do the thinking well without a number of other distractions/concerns in the way. So, for example, conditions that we want to drive towards would mean getting rid of things like the current competitive environment that we have, which, for many reasons, some of which I mentioned earlier, is a bad thing, and is particularly counterproductive to giving us the kind of space and cooperative spirit and whatnot that we need to come to ideal governance. Because if you're caught in this strategic competitive environment, then that makes a bunch of things just much harder to do in terms of aiming for coordination and cooperation and whatnot.
You also probably want better, more accurate information out there, hence being able to think harder by looking at better information. And so a lot of work can be done to encourage more accurate information to hold more weight in public discussions, and then also to encourage an environment of genuine, epistemically healthy deliberation about that kind of information. All of what I'm saying is also not particularly unique, maybe, to ideal governance for AI. I think in general, you can sometimes broaden this discussion to what it looks like to govern a global world relatively well. And AI is one of the particular challenges that is maybe forcing us to have some of these conversations. But in some ways, when you end up talking about governance, it ends up being relatively abstract in a way that, I think, goes beyond the particular technology. At least in some ways there are also particular challenges, I think, if you're thinking particularly about superintelligence scenarios. But if you're just talking about governance challenges in general, things like accurate information, more patience, a lack of competition and rivalrous dynamics and whatnot, that generally is kind of just helpful.
Lucas: So, I mean, arriving at ideal governance here, I'm just trying to model and think about it, and understand if there should be anything here that should be practiced differently, or if I'm just sort of slightly confused here. Generally, when I think about ideal governance, I see that it's born of very basic values and principles. And I view these values and principles as coming from nature, like the genetics, evolution instantiating certain biases and principles and people that tend to lead to cooperation, conditioning of a culture, how we're nurtured in our homes, and how our environment conditions us. And also, people update their values and principles as they live in the world and communicate with other people and engage in public discourse, even more foundational, meta-ethical reasoning, or normative reasoning about what is valuable.
And historically, these sort of conversations haven't mattered, or they don't seem to matter, or they seem to just be things that people assume, and they don't get that abstract or meta about their values and their views of value, and their views of ethics. It's been said that, in some sense, on our path to superintelligence, we're doing philosophy on a deadline, and that there are sort of deep and difficult questions about the nature of value, and how best to express value, and how to idealize ourselves as individuals and as a civilization.
So I guess I'm just throwing this all out there. Maybe we don't necessarily have any concrete answers, but I'm just trying to think more about the kinds of practices and reasoning that should and can be expected to inform ideal governance. Should meta-ethics matter here, where it doesn't seem to matter in public discourse? I still struggle with the ultimate value expression that might be happening through superintelligence, and the tension between that and how our public discourse functions. I don't know if you have any thoughts here.
Jade: No particular thoughts, aside from generally agreeing that meta-ethics is important. It is also confusing to me why public discourse doesn't seem to track the things that seem important. This is probably something that we've struggled with and tried to address in various ways before, so I guess I'm always cognizant of trying to learn from ways in which we've tried to improve public discourse and tried to create spaces for this kind of conversation.
It's a tricky one for sure, and thinking about better practices is probably the main way, at least, in which I engage with thinking about ideal governance. It's often the case that people, when they look at the cluster of ideal governance work, think, "Oh, this is the thing that's going to tell us what the answer is," like what's the constitution that we have to put in place, or whatever it is.
At least for me, the main chunk of thinking is mostly centered around process, and it's mostly centered around what constitutes a productive, optimal process, and some ways of answering this pretty hard question. And how do you create the conditions in which you can engage with that process without being distracted or concerned about things like competition? Those are kind of the main ways in which it seems obvious that we can fix the current environment so that we're better placed to answer what is a very hard question.
Lucas: Coming to mind here is also this feature that you pointed out, I believe: that ideal governance is not figuring everything out in terms of our values, but rather creating the kind of civilization and space in which we can take the time to figure out ideal governance. So maybe ideal governance is not solving ideal governance, but creating a space to solve ideal governance.
Usually, ideal governance has to do with modeling human psychology, and how best to get human beings to produce value and live together harmoniously. But when we introduce AI, and human beings become potentially obsolete, then ideal governance potentially becomes something else. And I wonder if the role of, say, experimental cities with different laws, policies, and governing institutions might be helpful here.
Jade: Yeah, that's an interesting thought. Another thought that came to mind as well, actually, is just kind of reflecting on how ill-equipped I feel thinking about this question. One funny trait of this field is that you have a slim number of philosophers, but especially in the AI strategy and safety space, it's political scientists, international relations people, economists, engineers, and computer scientists thinking about questions that other spaces have tried to answer in different ways before.
So when you mention psychology, that's an example. Obviously, philosophy has something to say about this. But there's also a whole space of people who have thought about how we govern things well across a number of different domains, and how we do a bunch of coordination and cooperation better, and stuff like that. And so it makes me reflect on the fact that there could be things that we've already learned that we should be reflecting a little bit more on, which we currently just don't have access to because we don't necessarily have the right people or the right domains of knowledge in this space.
Lucas: Like AI alignment has been attracting a certain crowd of researchers, and so we miss out on some of the insights that, say, psychologists might have about ideal governance.
Jade: Exactly, yeah.
Lucas: So moving along here from ideal governance: assuming we can agree on what ideal governance is, or if we can come to a place where civilization is stable and out of existential risk territory, and where we can sit down and actually talk about ideal governance, how do we begin to think about how to contribute to AI governance through working with or in private companies and/or government?
Jade: This is a good, and quite large, question. I think there are a couple of main ways in which I think about productive actions that either companies or governments can take, or productive things we can do with both of these actors to make them more inclined to do good things. On the point of companies, the primary thing I think is important to work on, at least concretely in the near term, is to do something like establish the norm and expectation that, as developers of this important technology that will plausibly have a large impact on the world, they have a very large responsibility, proportional to their ability to impact the development of this technology. By making the responsibility something that is tied to their ability to shape this technology, I think that, as a foundational premise or a foundational axiom to hold about why private companies are important, that can get us a lot of relatively concrete things that we should be thinking about doing.
The simple way of saying it is something like: if you are developing the thing, you're responsible for thinking about how that thing is going to affect the world. And establishing that, I think, is a somewhat obvious thing. But it's definitely not how the private sector operates at the moment, in that there is an assumed limited responsibility irrespective of how your stuff is deployed in the world. What that actually means can be relatively concrete: just looking at what these labs, or what these firms, have the ability to influence, and trying to understand how you want to change it.
So, for example, internal company policy on things like what kind of research is done and invested in, how you allocate resources across, for example, safety and capabilities research, what particular publishing norms you have, and considerations around risks or benefits. Those are very concrete internal company policies that can be adjusted and shifted based on one's idea of what they're responsible for. The broad thing, I think, is to try to steer them in this direction of embracing, acknowledging, and then living up to this greater responsibility, as entities that are responsible for developing the thing.
Lucas: How would we concretely change the incentive structure of a company that's interested in maximizing profit towards this increased responsibility, say, in the domains that you just enumerated?
Jade: This is definitely probably one of the hardest things about this claim being translated into practice. I mean, it's not the first time we've been somewhat upset at companies for doing things that society doesn't agree with. We don't have a great track record of changing the way that industries or companies work. That being said, I think if you're outside of the company, there are particular levers that one can pull that can influence the way that a company is incentivized. And then I think we've also got examples of us being able to use these levers well.
Companies are constrained by the environment that a government creates, and governments also have the threat of things like regulation, or of being able to pass certain laws or whatnot. Actually, the mere threat, historically, has done a fair amount in terms of incentivizing companies to just step up their game, because they don't want regulation to kick in, which isn't conducive to what they want to do, for example.
Users of the technology are a pretty classic one. It's a pretty inefficient one, I think, because you've got to coordinate many, many different types of users, and actors, and consumers and whatnot, to have an impact on what companies are incentivized to do. But you have seen environmental practices in other types of industries that have been put in place as standards or expectations that companies should abide by, because consumers across a long period of time have been able to say, "I disagree with this particular practice." That's an example of a trend that has succeeded.
Lucas: That would be like boycotting or divestment.
Jade: Yeah, exactly. And maybe a slightly more efficient one is focusing on things like researchers and employees. That is, if you are a researcher, if you're an employee, you have levers over the employer that you work for. They need you, and you need them, and there's that kind of dependency in that relationship. This is all a long way of saying that I think, yes, I agree it's hard to change incentive structures of any industry, and maybe specifically so in this case because they're very large. But I don't think it's impossible. And I think we need to think harder about how to use those well. I think the other thing that's working in our favor in this particular case is that we have a unique set of founders or leaders of these labs or companies that have expressed pretty genuine sounding commitments to safety and to cooperativeness, and to serving the common good. It's not a very robust strategy to rely on certain founders just being good people. But I think in this case, it's kind of working in our favor.
Lucas: For now, yeah. There are probably already other interest groups who are less careful, who are actually just making policy recommendations right now, and we're broadly not in on the conversation due to the way that we think about the issue. So in terms of government, what should we be doing? Yeah, it seems like there's just not much happening.
Jade: Yeah. So I agree there isn't much happening, or at least relative to how much work we're putting into trying to understand and engage with private labs, there isn't much happening with government. So I think there needs to be more thought put into how we do that piece of engagement. Good things that we could be trying to encourage more governments to do include, for one, investing in productive relationships with the technical community, with the researcher community, and with companies as well. At least in the US, it's pretty adversarial between Silicon Valley firms and DC.
And that isn't good for a number of reasons. And one very obvious reason is that there isn't common information or a common understanding of what's going on, what the risks are, what the capabilities are, et cetera. One of the main critiques of governments is that they're ill-equipped, in terms of access to knowledge and access to expertise, to be able to appropriately design things like bills, or pieces of legislation or whatnot. And I think that's also something that governments should take responsibility for addressing.
So those are kind of low-hanging fruit. There's a really tricky balance that I think governments will need to strike, which is the balance between avoiding over-hasty, ill-informed regulation and still staying engaged. A lot of my work looking at history shows that the main way in which we've achieved substantial regulation is as a result of big public, largely negative events to do with the technology screwing something up, or the technology causing a lot of fear, for whatever reasons. And so there's a very sharp spike in public fear or public concern, and then the government kicks into gear. And I think that's not a good dynamic in terms of forming nuanced, well-considered regulation and governance norms. Avoiding that outcome is important, but it's also important that governments do engage and track how this is going, and particularly track where things like company policy and industry-wide efforts are not going to be sufficient. So when do you start translating some of the more soft law, if you will, into actual hard law?
That will be a very tricky timing question, I think, for governments to grapple with. But ultimately, it's not sufficient to have companies governing themselves. You'll need to be able to concretize it into government-backed efforts and initiatives and legislation and bills. My strong intuition is that it's not quite the right time to roll out object-level policies. And so the main task for governments will be just to position themselves to do that well when the time is right.
Lucas: So what's coming to my mind here is I'm thinking about YouTube compilations of congressional members of the United States and senators asking horrible questions to Mark Zuckerberg and the CEO of, say, Google. They just don't understand the issues. The United States is currently not really thinking that much about AI, and especially transformative AI. Whereas China, it seems, has taken a step in this direction and is doing massive governmental investments. So what can we say about this assumed difference? And the question is, what are governments to do in this space? Different governments are paying attention at different levels.
Jade: Some governments are more technologically savvy than others, for one. So I'd push back on the US not ... They're paying attention to different things. So, for example, the Department of Commerce put out a notice to the public indicating that they're exploring putting in place export controls on a cluster of emerging technologies, including a fair number of AI-relevant technologies. The point of export controls is to do something like ensure that adversaries don't get access to critical technologies that, if they did, could undermine national security and/or the domestic industrial base. The reason export controls are concerning is that they're a relatively outdated tool, for one. They used to work relatively well when you were targeting specific kinds of weapons technologies, or basically things that you could touch and see, and the US restricting them from the market meant that a fair amount of the technology couldn't be accessed by other folks around the world. But you've seen export controls be increasingly less effective the more we've tried to apply them to things like cryptography, where it's largely software based. And so trying to use export controls, which are applied at the national border, is a very tricky thing to make effective.
So you have the US paying attention to the fact that they think AI is a national security concern, at least in this respect, enough to indicate that they're interested in exploring export controls. I think it's unlikely that export controls are going to be effective at achieving the goals that the US wants to pursue. But I think export controls are also indicative of a world that we don't want to slide into, which is a world where you have rivalrous economic blocs, where you're sort of protecting your own base, and you're not contributing to the kind of global commons of progressing this technology.
Maybe it goes back to what we were saying before, in that if you're not engaged in the governance, the governance is going to happen anyway. This is an example of activity that is going to happen anyway. I think people assume now, probably rightfully so, that the US government is not going to be very effective because they are not technically literate. In general, they are relatively slow moving. They've got a bunch of other problems that they need to think about, et cetera. But I don't think it's going to take very, very long for the US government to start to seriously engage. I think the thing that is worth trying to influence is what they do when they start to engage.
If I had a policy in mind that I thought was robustly good that the US government should pass, then that would be the more proactive approach. It seems possible that if we think about this hard enough, there could be robustly good things that the US government could do, that could be good to be proactive about.
Lucas: Okay, so there's this sort of general sense that we're pretty heavy on academic papers because we're really trying to understand the problem, and the problem is so difficult, and we're trying to be careful and sure about how we progress. And it seems like it's not clear if there is much room, currently, for direct action, given our uncertainty about specific policy implementations. There are some shorter term issues. And sorry to say shorter term issues. But, by that, I mean automation and maybe lethal autonomous weapons and privacy. These things we have a more clear sense of, at least about potential things that we can start doing. So I'm just trying to get a sense here from you: on top of these efforts to try to understand the issues more, and on top of the efforts that, for example, 80,000 Hours has contributed by working to place aligned people in various private organizations, what else can we be doing? What would you like to see more being done on here?
Jade: I think this is on top of just more research. That would be the first thing that comes to mind: people thinking hard about this seems like a thing that I want a lot more of, in general. But on top of that, what you mentioned about placing people maybe fits into this broader category of things that seem good to do, which is investing in building our capacity to influence the future. That's quite a general statement. But it takes a fair amount of time to build up influence, particularly in certain institutions, like governments, like international institutions, et cetera. And so investing in that early seems good. And doing things like trying to encourage value-aligned, sensible people to climb the ladders that they need to climb in order to get to positions of influence, that generally seems like a good and useful thing.
The other thing that comes to mind as well is putting out more accurate information. One specific version of things that we could do here is, there is currently a fair number of inaccurate, or not well justified memes that are floating around, that are informing the way that people think. For example, the US and China are in a race. Or a more nuanced one is something like, inevitably, you're going to have a safety performance trade off. And those are not great memes, in the sense that they don't seem to be conclusively true. But they're also not great in that they put you in a position of concluding something like, "Oh, well, if I'm going to invest in safety, I've got to be an altruist, or I'm going to trade off my competitive advantage."
And so identifying what those bad ones are, and countering those, is one thing to do. Better memes could be something like: those who are developing this technology are responsible for thinking through its consequences. Or something even as simple as: governance doesn't mean government, and it doesn't mean regulation. Because I think you've got a lot of firms who are terrified of regulation, and so they won't engage in this governance conversation because of it. So there could be some really simple things we could do, just to make the public discourse both more accurate and more conducive to things being done that are good in the future.
Lucas: Yeah, here I'm also just seeing the tension between the kinds of memes that inspire, I guess, a lot of the thinking within the AI alignment community and the x-risk community, versus what is actually useful or spreadable for the general public, adding in here ways in which accurate information can be info-hazardy. I think broadly our community holds the common good principle, and building an awesome future for all sentient creatures, and I am curious to know how spreadable those memes are.
Jade: Yeah, the spreadability of memes is a thing that I want someone to investigate more. The things that make memes not spreadable, for example, are things that are, at a very simple level, quite complicated to explain, or are somewhat counterintuitive, so you can't pump the intuition very easily. Particularly things that require you to weigh one set of values that you care about against another, competing set. Anything that pits nationalism against cosmopolitanism, I think, is a tricky one, because you have some subset of people, the ones that you and I talk to the most, who are very cosmopolitan. But you also have a fair amount of people who care about the common good principle, in some sense, but also care about their nation in a fairly large sense as well.
So there are things that make certain memes less good or less spreadable. And one key thing will be to figure out which ones are actually good in the true sense, and good in the pragmatic to spread sense.
Lucas: Maybe there's a sort of research program here, where psychologists and researchers can explore focus groups on the best spreadable memes, which reflect a lot of the core and most important values that we see within AI alignment, and EA, and x-risk.
Jade: Yeah, that could be an interesting project. I think also in AI safety, or in the AI alignment space, people are framing safety in quite different ways. One framing is that thinking about safety is part of what it means to be a good AI person. That's an example of one that I've seen take off a little bit more lately, because it's an explicit attempt at mainstreaming the thing. That's a meme, or an example of a framing, or whatever you want to call it. And you know there are pros and cons of that. The pro would be, plausibly, that it's just more mainstream. And I think you've seen evidence of that being the case, because more people are inclined to say, "Yeah, I agree. I don't want to build a thing that kills me if I want it to get coffee." But you're not going to have a lot of conversations about the magnitude of risks that you actually care about. So that's maybe a con.
There's maybe a bunch of stuff to do in this general space of thinking about how to better frame the kind of public facing narratives of some of these issues. Realistically, memes are going to fill the space. People are going to talk about it in certain ways. You might as well try to make it better, if it's going to happen.
Lucas: Yeah, I really like that. That's a very good point. So let's talk here a little bit about technical AI alignment. In technical AI alignment, the primary concerns are around the difficulty of specifying what humans actually care about. So this is like capturing human values and aligning with our preferences and goals, and what idealized versions of us might want. So much of AI governance is thus about ensuring that this AI alignment process we engage in doesn't cut too many corners. The purpose of AI governance is to decrease risks, to increase coordination, and to do all of these other things to ensure that, say, the benefits of AI are spread widely and robustly, that we don't get locked into any negative governance systems or value systems, and that this process of bringing AIs into alignment with the good doesn't have researchers, or companies, or governments cutting too many corners on safety. In this context, and this interplay between governance and AI alignment, how much of a concern are malicious use cases relative to the AI alignment concerns within the context of AI governance?
Jade: That's a hard one to answer, both because there is a fair amount of uncertainty around how you discuss the scale of the thing. But also because I think there are some interesting interactions between these two problems. For example, if you're talking about how AI alignment interacts with this AI governance problem. You mentioned before AI alignment research is, in some ways, contingent on other things going well. I generally agree with that.
For example, it depends on AI safety taking place in the research cultures of important labs. It requires institutional buy-in and coordination between institutions. It requires mitigation of race dynamics so that you can actually allocate resources towards AI alignment research. All those things. And so in some ways, that particular problem being solved is contingent on us doing AI governance well. But then, also to the point of how big malicious use risk is relative to AI alignment, I think in some ways that's hard to answer. But in some ideal world, you could sequence the problems that you solve. If you solve the AI alignment problem first, then AI governance research basically becomes a much narrower space: addressing how an aligned AI could still cause problems if we're not thinking about the concentration of power and the concentration of economic gains. And so you need to think about things like the windfall clause to distribute that, or whatever it is. And you also need to think about the transition to creating an aligned AI, and what could be messy in that transition, and how you avoid public backlash so that you can actually see the fruits of having solved this AI alignment problem.
So that becomes more the kind of nature of the thing that AI governance research becomes, if you assume that you've solved the AI alignment problem. But if we assume that, in some world, it's not that easy to solve, and both problems are hard, then I think there's this interaction between the two. In some ways, it becomes harder. In some ways, they're dependent. In some ways, it becomes easier if you solve bits of one problem.
Lucas: I generally model the risks of malicious use cases as being less than the AI alignment stuff.
Jade: I mean, I'm not sure I agree with that. But two things I could say to that. One intuition is something like: you have to be a pretty awful person to really want to use a very powerful system to cause terrible ends. And it seems more plausible that people will just do it by accident, or unintentionally, or inadvertently.
Lucas: Or because the incentive structures aren't aligned, and then we race.
Jade: Yeah. And then the other way to sort of support this claim is, if you look at biotechnology and bio-weapons specifically, bio-security and bio-terrorism issues, so the malicious use equivalent, those have been far less frequent compared to just bio-safety issues, which are the equivalent of accident risks. So people causing unintentional harm because we aren't treating biotechnology safely, that's caused a lot more problems, at least in terms of frequency, compared to people actually just trying to use it for terrible means.
Lucas: Right, but don't we have to be careful here with the strategic properties and capabilities of the technology, especially in the context in which it exists? Because there's nuclear weapons, which are sort of the larger more absolute power imbuing technology. There has been less of a need for people to take bio-weapons to that level. You know? And also there's going to be limits, like with nuclear weapons, on the ability of a rogue actor to manufacture really effective bio-weapons without a large production facility or team of research scientists.
Jade: For sure, yeah. And there's a number of those considerations, I think, to bear in mind. So it definitely isn't the case that you haven't seen malicious use in bio strictly because people haven't wanted to do it. There's a bunch of things like accessibility problems, and tacit knowledge that's required, and those kinds of things.
Lucas: Then let's go ahead and abstract away malicious use cases, and just think about technical AI alignment, and then AI/AGI governance. How do you see the relative importance of AI and AGI governance, and the process of AI alignment that we're undertaking? Is solving AI governance potentially a bigger problem than AI alignment research, since AI alignment research will require the appropriate political context to succeed? On our path to AGI, we'll need to mitigate a lot of the race conditions and increase coordination. And then even after we reach AGI, the AI governance problem will continue, as we sort of explored earlier that we need to be able to maintain a space in which humanity, AIs, and all earth originating sentient creatures are able to idealize harmoniously and in unity.
Jade: I don't think it's possible to actually assess them at this point, in terms of how much we understand this problem. I have a bias towards saying that AI governance is the harder problem because I'm embedded in it and see it a lot more. And maybe ways to support that claim are things we've talked about. So AI alignment going well, or happening at all, is contingent on a number of other factors that AI governance is trying to solve, so the social, political, and economic context needs to be right in order for that to actually happen, and then in order for that to have an impact.
There are some interesting things that are made maybe easier by AI alignment being solved, or somewhat solved, if you are thinking about the AI governance problem. In fact, it's just like a general cluster of AI being safer and more robust and more transparent, or whatever, makes certain AI governance challenges just easier. The really obvious example here that comes to mind is the verification problem. The inability to verify what certain systems are designed to do and will do causes a bunch of governance problems. Like, arms control agreements are very hard. Establishing trust between parties to cooperate and coordinate is very hard.
If you happen to be able to solve some of those problems in the process of trying to tackle this AI alignment problem, that makes AI governance a little bit easier. I'm not sure which direction it cashes out, in terms of which problem is more important. I'm certain that there are interactions between the two, and I'm pretty certain that one depends on the other, to some extent. So it becomes really hard to govern the thing if you can't align the thing. But it also is probably the case that by solving some of the problems in one domain, you can help make the other problem a little bit more tractable and easier.
Lucas: So now I'd like to get into lethal autonomous weapons. And we can go ahead and add whatever caveats are appropriate here. So in terms of lethal autonomous weapons, some people think that there are major stakes here. Lethal autonomous weapons are a major AI enabled technology that's likely to come on the stage soon, as we make some moderate improvements to already existing technology, and then package it all together into the form of a lethal autonomous weapon. Some take the view that this is a crucial moment, or that there are high stakes here to get such weapons banned. The thinking here might be that by demarcating unacceptable uses of AI technology, such as for autonomously killing people, and by showing that we are capable of coordinating on this large and initial AI issue, that we might be taking the first steps in AI alignment, and the first steps in demonstrating our ability to take the technology and its consequences seriously.
And so we mentioned earlier how there's been a lot of thinking, but not much action. This seems to be an initial place where we can take action. We don't need to keep delaying our direct action and real world participation. So if we can't get a ban on autonomous weapons, maybe it would seem that we have less hope for coordinating on more difficult issues. And lethal autonomous weapons may exacerbate global conflict by increasing skirmishing at borders, decreasing the cost of war, dehumanizing killing, taking the human element out of death, et cetera.
And other people disagree with this. Other people might argue that banning lethal autonomous weapons isn't necessary in the long game. It's not, as we're framing it, a high stakes thing, just because this sort of developmental step in the technology is not really crucial for coordination, or for political and military stability. Or that coordination later would be borne of other things, and that this would just be some other new military technology without much impact. So I'm curious to gather what views you, or FHI, or the Center for the Governance of AI might have on autonomous weapons. Should there be a ban? Should the AI alignment community be doing more about this? And if not, why?
Jade: In terms of caveats, I've got a lot of them. So I think the first one is that I've not read up on this issue at all. I've followed it very loosely, but not nearly closely enough that I feel like I have a confident, well-informed opinion.
Lucas: Can I ask why?
Jade: Mostly because of bandwidth issues. It's not because I have categorized it as something not worth engaging in. I'm actually pretty uncertain about that. The second caveat is, I definitely don't claim to speak on behalf of anyone but myself in this case. The Center for the Governance of AI doesn't have a particular position on this, nor does FHI.
Lucas: Would you say that, for the Center for the Governance of AI, this is also a bandwidth issue? Or would it be because it's been de-prioritized?
Jade: The main thing is bandwidth. Also, I think the main reason why it's probably been de-prioritized, at least subconsciously, has been the framing of sort of focusing on things that are neglected by folks around the world. It seems like there are people, at least with somewhat good intentions, tentatively engaged in the LAWS (lethal autonomous weapons) discussion. And so within that frame, I think it's been de-prioritized because it's not obviously neglected compared to other things that aren't getting any focus at all.
With those things in mind, I could see a pretty decent case for investing more effort in engaging in this discussion, at least compared to what we currently have. I guess it's hard to tell, compared to alternatives of how we could be spending those resources, given it's such a resource-constrained space, in terms of people working in AI alignment, or just bandwidth in this community in general. So briefly, I think we've talked about this idea that there's a fair amount of path dependency in the way that institutions and norms are built up. And if this is one of the first spaces, with respect to AI capabilities, where we're going to be driving towards some attempt at international norms, or establishing international institutions that could govern this space, then that's going to be relevant in a general sense. And specifically, it's going to be relevant for defense and security related concerns in the AI space.
And so I think you both want to engage because there's an opportunity to seed desirable norms and practices and process and information. But you also possibly want to engage because there could be a risk that bad norms are established. And so it's important to engage, to prevent it going down something which is not a good path in terms of this path dependency.
Another reason I think is maybe worth thinking through, in terms of making a case for engaging more, is that applications of AI in the military and defense spaces are possibly among the most likely to cause substantial disruption in the near-ish future, and could be an example of what I earlier called high-stakes concerns. And you can talk about AI and its impact on various aspects of the military domain, where it could have substantial risks. So, for example, in cyber escalation, or destabilizing nuclear security. Those would be examples where military and AI come together, and you can have bad outcomes that we do actually really care about. And so for the same reason, engaging specifically in any discussion that touches on military and AI concerns could be important.
And then the last one that comes to mind is the one that you mentioned. This is an opportunity to basically practice doing this coordination thing. And there are various things that are worth practicing or attempting. For one, I think even just observing how these discussions pan out is going to tell you a fair amount about how important actors think about the trade-offs of using AI versus going towards safer outcomes or governance processes. And then our ability to corral interest around good values or appropriate norms, or whatnot, that's a good test of our ability to generally coordinate when we have some of those trade-offs around, for example, military advantage versus safety. It gives you some insight into how we could be dealing with similarly shaped issues.
Lucas: All right. So let's go ahead and bring it back here to concrete actionable real world things today, and understanding what's actually going on outside of the abstract thinking. So I'm curious to know here more about private companies. At least, to me, they largely seem to be agents of capitalism, like we said. They have a bottom line that they're trying to meet. And they're not ultimately aligned with pro-social outcomes. They're not necessarily committed to ideal governance, but perhaps forms of governance which best serve them. And as we sort of feed aligned people into tech companies, how should we be thinking about their goals, modulating their incentives? What does DeepMind really want? Or what can we realistically expect from key players? And what mechanisms, in addition to the windfall clause, can we use to sort of curb the worst aspects of profit-driven private companies?
Jade: If I knew what DeepMind actually wanted, or what Google actually thought, we'd be in a pretty different place. So a fair amount of what we've chatted through, I would echo again. So I think there's both the importance of realizing that they're not completely divorced from other people influencing them, or other actors influencing them. And so just thinking hard about which levers are in place already that actually constrain the action of companies, is a pretty good place to start, in terms of thinking about how you can have an impact on their activities.
There's this common way of talking about big tech companies, which is that they can do whatever they want, and they run the world, and we've got no way of controlling them. The reality is that they are consistently constrained by a fair number of things, because they are agents of capitalism, as you described, and because they have to respond to various things within that system. So we've mentioned things before, like governments have levers, consumers have levers, employees have levers. And so I think focusing on what those are is a good place to start. Another thing that comes to mind is that there's something here around taking a very optimistic view of how companies could behave. Or at least this is the way that I prefer to think about it: you need to be excited, and motivated, and think that companies can change, and create the conditions in which they can. But one also then needs to have a kind of hidden cynic, in some ways.
On both of these, I think the first one, I really want the public discourse to turn more towards the direction of, if we assume that companies want to have the option of demonstrating pro-social incentives, then we should do things like ensure that the market rewards them for acting in pro-social ways, instead of penalizing their attempts at doing so, instead of critiquing every action that they take. So, for example, I think we should be making bigger deals, basically, of when companies are trying to do things that at least will look like them moving in the right direction, as opposed to immediately critiquing them as ethics washing, or sort of just paying lip service to the thing. I want there to be more of an environment where, if you are a company, or you're a head of a company, if you're genuinely well-intentioned, you feel like your efforts will be rewarded, because that's how incentive structures work, right?
And then on the second point, in terms of being realistic about the fact that you can't just wish companies into being good, that's where I think things like public institutions and civil society groups become important. So ensuring that there are consistent forms of pressure, making sure companies feel like their actions are being rewarded if pro-social, but also that there are ways of spotting when they're speaking as if they're pro-social but acting differently.
So I think everyone's kind of basically got a responsibility here, to ensure that this goes forward in some kind of productive direction. I think it's hard. And we said before, you know, some industries have changed in the past successfully. But that's always been hard, and long, and messy, and whatnot. But yeah, I do think it's probably more tractable than the average person would think, in terms of influencing these companies to move in directions that are generally just a little bit more socially beneficial.
Lucas: Yeah. I mean, also the companies are generally made up of fairly reasonable, well-intentioned people. I'm not totally pessimistic. There are just a lot of people who sit at desks and have their structures. So yeah, thank you so much for coming on, Jade. It's really been a pleasure. And, yeah.
Jade: Likewise.
Lucas: If you enjoyed this podcast, please subscribe, give it a like, or share it on your preferred social media platform. We'll be back again soon with another episode in the AI Alignment series.