CONDOLEEZZA RICE:

Unless we can improve the prospects for citizens, you’re going to continue to have fearful populations that are susceptible to the siren song of populism, nativism, isolationism, and protectionism. I call them the four horsemen of the apocalypse. And if you’re going to deal with the four horsemen of the apocalypse, somebody’s got to deal with the lack of opportunity. And so whether it’s in healthcare or in education, I would say keep pushing that dimension of it.

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:

We’re speaking with visionaries in many fields, from art to geopolitics and from healthcare to education.

REID:

These conversations showcase another kind of guest. Whether it’s Inflection’s Pi or OpenAI’s GPT-4, each episode we use AI to enhance and advance our discussion.

ARIA:

In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:

This is Possible.

ARIA:

Listener note: This conversation with Dr. Rice was recorded before Hamas’ October 7 terrorist attack and subsequent conflicts in Israel, Gaza, and the Middle East, so we will not be discussing them today.

REID:

So far, the Possible podcast has focused largely on domestic public policy issues, from criminal justice to education — but today we’re zooming out to the international stage.

ARIA:

We live in an increasingly globalized world where developments such as climate change or pandemics aren’t limited to national borders. Issues that might have previously seemed local are actually global. And there’s already a lot in motion on the international stage — whether that’s straining relations with China, or sanctions and military aid in the Russia-Ukraine conflict.

REID:

That’s right. And at the time of this recording, we’re seeing news about North Korea bolstering its attack capabilities with Kim Jong Un visiting Russia and launching ballistic missiles towards its eastern seas. These issues are complicated. And then, we have this rapidly developing technology in AI that is capable of bridging or deepening divides.

ARIA:

And just as in other conversations on AI, there will be people who see AI as either bringing catastrophe or helping usher in harmony. But it’s important to avoid setting up camp around either extreme and to remember that outcomes in international affairs depend on humans and how we handle this technology.

REID:

Exactly. That’s why we want to know what an experienced player on the international stage thinks, and why we’re talking to Condoleezza Rice: a renowned diplomat, scholar, and educator. She became the first African American woman to serve as Secretary of State in 2005. Before that, she served as National Security Advisor to President George W. Bush from 2001 to 2005, guiding him through the aftermath of the 9/11 terrorist attacks. Now she’s the director of the Hoover Institution at Stanford University, a public policy think tank.

ARIA:

What I loved about sitting down with Condoleezza was really talking about AI in this global context. She has decades and decades of experience, whether that’s the nuclear age or figuring out how to bring countries together into agreements. So she really understands how you play the game and how we can avoid nuclear war. And as we think about AI, a lot of people are concerned about the same sort of arms race. I honestly think there’s no one smarter on international cooperation than Condoleezza Rice.

REID:

Yes. Among the many amazing things in the Condi conversation were the global perspective and the question of how we navigate this in a way that strengthens and deepens our alliances. But also, you know, how we keep the kind of world order we want to have: stability, peace, collaboration. And then, what are the incentive designs, and what are the realpolitik designs, required to make that happen? That balance of clear-eyed and clear-hearted thinking, together with a deeply considered view of technology and of what AI is bringing, versus, you know, hysterics in one direction or the other.

ARIA:

I thought what was really interesting is you can tell that Condi is deeply a humanist. You know, when we were talking about how you solve for global tensions or for populist tensions in the United States, you might expect her to immediately go to some big solution, or more weapons, or more intelligence. Instead she went back to: how do we get our population educated? How do we bring opportunity to every American? How do we use technology to level the playing field? So I thought it was so interesting that she can be thinking on this global stage, but also thinking in a really nuanced way about people’s everyday lives and how to improve them.

REID:

Looking into the future is always a little bit like looking through a glass darkly, and she tends to say, look, these are actual guideposts that are really important. Whether it’s a question of how the ball is moving, what our relationships with our allies are, or what the nature of competition and conflict between nation-states and the different actors is, but also how to align those in really good ways. And that kind of glimpse into the future is a really good thing.

ARIA:

Absolutely. And here’s our conversation with Condoleezza Rice.

ARIA:

So for those in our audience who may not know, can you say a bit about your work at the Hoover Institution at Stanford, as well as the roles that have brought you closer to the conversation about AI, such as serving on the board of C3 AI?

CONDOLEEZZA RICE:

Yes. Well, for me the great thing about being a part of Stanford University — and the Hoover Institution is a part of Stanford University — is that Stanford has, as an institution, really from its founding, been young, and it’s been at the leading edge of so much. You know, you think about Silicon Valley, and you think about the early inventions that created Silicon Valley. And so the way that I think about what we’re doing at Hoover is that, as a policy institute trying to think about the world’s biggest, most gnarly, most difficult problems, we’ve really tried to up our game in terms of technology. So I am co-chairing with the Dean of Engineering, Jennifer Widom, a new project called the Stanford Emerging Technology Review. What we’re doing is taking the top ten, as we see them, transformative technologies.

CONDOLEEZZA RICE:

So AI, nano, quantum, materials science, synthetic biology. And we’re saying to the people who are in the labs — and at Stanford that means I can walk 15 minutes to talk to them — first of all, explain to the layman what this is, and secondly, where is it going? And then, as a policy institute, we try to help policymakers think about: What are the implications for democracy? What are the implications for the economy? What are the implications for sustainability? What are the implications for national security? And the reason we want to do that is that everybody in Washington can now spell AI. Alright? But I’m not sure they fully understand what it means and what its implications are. The technology is running ahead at a very rapid pace, and our institutions are struggling to keep up. And if we don’t better marry the understanding of policymakers — who are responsible for the protection of those institutions — with the people who are at the leading edge, then we’re either going to see policymakers get fearful and start to impede progress, or we’re going to wake up one day and think, “oh my goodness, how did we get here?”

REID:

Not surprising, because Condi, you and I have known each other for years, and I’ve always found you to be very foresighted, clearly watching the trends of the future. So it does not surprise me that you’re already well into this. What have you noticed so far about AI and its intersection with foreign affairs? Obviously we hear all of these calls that “governments have to collaborate to regulate” — which would work if they could coordinate well, but I’m always a little skeptical of plans that presume the equivalent of the UN being a highly efficient, effective organization. What do you see as the road ahead here? What are the key things we should be paying attention to in order to achieve all the benefits and avoid, or minimize, the challenges?

CONDOLEEZZA RICE:

The first thing is people have to understand it better. And that understanding has to include what the technologies actually can do and what they can’t, and what they might be able to do in the future and what they can’t. And you know, the people who do quantum, for instance, just to take that example, will tell you that our thinking about its applications is way ahead of where the technology actually is. With AI, I think we’re way behind in thinking about what the applications might be. So trying to sync those time frames, if you will, for policymakers and the creators, I think is very important. Secondly, we’ve got a different set of circumstances with AI than we’ve had before. The comparison I hear people make all the time is, well, we managed with nuclear weapons and the splitting of the atom.

CONDOLEEZZA RICE:

You know, the movie Oppenheimer is interesting because, just a couple decades after thermonuclear weapons came into being, people were talking about having 25 nuclear powers. Well, we have someplace between eight and nine, depending on how you count them. And so people say, “Okay, so we controlled that technology, as dangerous as it was. So maybe we should think about some control regimes for AI in the same way.” The problem is, this is actually a domain that is not owned by the government. This is a domain in which private sector actors are way out ahead of what the government is producing or can do. And so this private-public sector conversation is even more important and more difficult. I know that there’ve been some meetings with the leading AI producers. That’s great if it builds understanding — but don’t expect it anytime soon to produce a regime for what we do and do not want to do with AI.

CONDOLEEZZA RICE:

And that’s within the United States. The British are having a kind of AI governance conference, and it’s mostly with kind of like-minded countries. That’s great, but don’t expect a regime out of that. And then if you say, what then is the role of the Chinese in that governance, now you’re way outside the area where you could expect to have some kind of governance structure grow up. And so I think this is a fundamentally different set of issues. And to your point, there’s so much that could be good that what we have to do is make sure people understand it well enough that they don’t simply get fearful. So you hear all the time, what about deep fakes? Alright, so yes, that’s a problem. But what can the technologists tell us about what might be possible to do about deep fakes so that we don’t try to constrain innovation in the fear of that one thing?

CONDOLEEZZA RICE:

And I don’t have much hope, Reid, for the “international” community doing this. I’ll let you in on a little secret. There is no such thing as the international community. What there is is a bunch of member states, and you try to get countries together around common interests. Well, one common interest might be — are there things we don’t want to do with AI? Do we want to think about ways of preventing mass casualties, for instance? So there’s some points here that we could begin to discuss, but we don’t, right now, have the means to do that.

ARIA:

It’s so interesting. To your point, so many people have been drawing the connection to the nuclear powers and what we did there for containment. And now people are asking whether containment is possible, but the upside here is also truly enormous. So, as you said, if we spend too much time containing and stifling innovation, we’re not going to get the positives. Specifically in the US, what do you think the role of the US government is in supporting the development of AI and ensuring that we get the innovation we want in that space?

CONDOLEEZZA RICE:

Well, I’m a small-government type, and whenever I think about the role of the US government in technology and development, the hair goes up on the back of my neck, because I think our great strength has been distributed innovation. The US government did not try to, in a sense, direct or largely constrain the way innovation would take place. I’ll tell you an interesting story in this regard. One of the smartest human beings I’ve ever known is Bill Perry. Bill Perry was the Under Secretary of Defense for Research and Engineering in the Carter administration. And he tells the story of how, around 1978, he testified before Congress, and somebody asked him about personal computing, and he said, “I see absolutely no reason anybody would need a personal computer.” And Bill tells that story because here he is, a technologist, you know, a PhD in mathematics, and he doesn’t see it. He always used it as an example of why the US government shouldn’t pick winners and losers. When I think about the role of the US government, I think about it instead as: what support for the infrastructure of innovation can the US government bring? And I would say three elements. One is, don’t get in the way of talent. Right now, our immigration policies are such that we could get in the way of talent. The sad thing is we don’t produce enough engineers in the United States to do what we have done, which is to lead the world in innovation.

CONDOLEEZZA RICE:

But we are a place where, if you’re a really bright software engineer, you might want to come and be American. So let’s make sure that those people can. The second element is funding for fundamental research. One of the great strengths the United States has had is funding through the NSF, the National Science Foundation, through the Defense Department, and not to mention the NIH. That’s a piece I think the federal government could really help with. And then third, there is the whole infrastructure question, for when we find a discontinuity or dysfunction. Like we’ve learned now that we are way behind on the high end of semiconductor production. I mean, there’s the R&D, at which we’re still reasonably good, but the government should try to do something about gaps like that. So I was a big supporter of the CHIPS Act, and I think that’s a good thing to do.

CONDOLEEZZA RICE:

But that infrastructure — going back to AI, the computing power that is required to do generative AI, for instance — right now really doesn’t exist outside of industry. And I think we need to ask ourselves as a country, do we want that to be the case? I have enormous respect for, you know, Satya at Microsoft, and for the folks at Google, and I think they’re good folks. But do we really want the GPUs to exist only in the private sector, or do we want to do something about the national infrastructure? We did something about the cloud through a national cloud, but do we want to do something about this issue? And I think an undervalued asset is the National Labs. You know, they have some of the best people. But we still tend to think of them as related only to energy — which is why you had the fusion breakthrough in a National Lab. Is there more that could be done with the National Labs in this research-infrastructure piece?

ARIA:

I love that. We actually had Dr. Kim Budil on the show early last season. And to your point, it was phenomenal.

REID:

It’d be interesting whether we could do it with National Labs or something else — you know, my own thinking has been, are there other ways we could experiment with public-private partnerships? Because part of the challenge is this is moving forward at a clip. Could the government even get itself to putting thousands of GPUs together, when the computers being built right now, the data centers, are hundreds of thousands? It’s not just the R&D of that — but also the infrastructure expertise, also the software expertise. These are teams of hundreds working on this, and most labs, outside of CERN and the like, don’t tend to have hundreds working together. So getting the government to just do it would be great. But given the speed at which this is moving, I wonder whether it’s actually different conceptions of public-private partnership that we need, because we do want this to be useful to society beyond the very principled and high-minded contributions of companies like Microsoft, Google, OpenAI, and others. We want it to have a broader constituency, which is part of the reason the question is, how should we be playing? It strikes me that the principal way to do that is by working with the companies. One example I’ve been thinking about: suppose we say, “well, would we like to have a return of our manufacturing industry across a large number of things?” Part of that plan has to be AI. AI is being developed by these companies. Maybe these companies can help with that, as part of a public-private partnership for creating the technological base. I would love it if the National Labs could actually do this.

CONDOLEEZZA RICE:

Well, I think you’re right. I don’t think they can do it without partners. I don’t think they can do it in whole. But I do think they’re underexploited as a part of the possible solution. The question is, what would the private sector be willing to do, right? They’re running very, very fast, and are they running very, very fast towards proprietary? Yeah, probably. And we understand that. But what might be a piece of that that actually does serve, as you said, the societal, the social good? Because when you think about the possibilities of AI in healthcare, in education, in defense — we want the innovation to keep pushing forward. We want to try to understand what it is, where it’s going, and what its implications are for these various areas. And then we want to make sure, and this is a point I make all the time, that the United States keeps this lead. And I worry a little bit that we don’t have as much going on in Europe as we might, though I know there is a lot going on in the UK. And the reason I want that kind of democratic bloc, if you will, is this: just do a thought experiment in which the nuclear age is won either by the Nazis or by the Soviets instead of by the United States. Authoritarian regimes are a problem, and an authoritarian regime will use these technological breakthroughs in completely different ways: toward more social control, toward suppression of minorities, who knows, toward the creation of tailored DNA-led pandemics, right? It’s not that democracies always get this right. But they have enough alternative voices, and enough ways to identify if something’s going off kilter, that I trust democracies more than I trust authoritarians.

ARIA:

I think people are really uncomfortable when it comes to AI and war, AI and defense. We’re both sort of nervous about what that means — the changing battlefield. But a lot of people don’t even want to talk about the positive uses that could save lives or change the topography of the battlefield, because that, you know, feels unseemly, or doesn’t feel right. But the DOD has said artificial intelligence is expected to transform all sectors of society, including war. So how do you see AI transforming the battlefield? And, you know, it could be both for good and for bad.

CONDOLEEZZA RICE:

Well, let me talk first about efficiency, because I think that’s one place AI will make a difference. I will just say, you remember, Aria, at that dinner we were at, somebody asked me, “will this become a weapon of war?” And I said, “every technology becomes a weapon of war.” We know it’s going to happen, so let’s figure out how we’re going to deal with it. You know, I always feel sorry for the people who’re constantly talking about not letting space become militarized. Space is militarized. So let’s talk about how we might deal with the fact that this will become a weapon of war. First of all, on the side of the American military and allied militaries, there’s a question of efficiency. To my mind, there’s no doubt that the potential to take, you know, thousands of orders and not have somebody going through them by hand, or even by spreadsheet in Excel, is going to make it much more efficient, both for determining what you need to train and how you train.

CONDOLEEZZA RICE:

So I think efficiency will be one of the big areas where there will be beneficiaries in the military. I think there will also be beneficiaries in the use of AI for predictive maintenance. For instance, one of the things we do a lot of at C3 AI is telling you where you can expect to have breakdowns. And by the way, that’s not just in the military; that’ll be in industry more broadly. But I think you’re going to see a lot of that. So training, efficiency, maintenance: across the board, you’re going to get a lot of impact from AI in defense. I think everybody would welcome that, because hopefully it brings down costs and so forth. Where people get nervous is when you start thinking about how AI might affect the actual battlefield. There might be cases where AI can help you distinguish, because if you’re a soldier in a counterterrorism situation, or somebody from the Special Forces, everything around you is threat, and you are going to react to that threat.

CONDOLEEZZA RICE:

If you can start to help distinguish what is threatening and what isn’t through machine learning — looking at billions of cases that help you identify what’s threat and what isn’t — that might actually help. Because one thing we know is that the counterterrorism response, in which everything is threat, is not good: you might shoot somebody in the village who actually could have been an ally. The alternative we call counterinsurgency: you go in and you make league with the villagers, who also don’t like the terrorists. Is there some way we get better distinguishing, better alignment, better differentiation, so that we can train people better? There, you might want AI on the battlefield. Where people get, again, really nervous is: do I really want AI in my nuclear launch codes?

CONDOLEEZZA RICE:

Well, when I think about some of the cases I studied where you got a false alarm, and everybody got geared up over that false alarm, maybe I want some kind of AI assistant to the decision maker within that decision window. It’s still going to be 20 minutes; in some cases, if it’s a submarine, it might be 10 minutes. But maybe I could use a little bit of assistance in saying, “no, actually that’s not a real threat. That was a bird flying into the radar,” which is actually a real case, right? And so I think this is more positive than most people in my line of work think. But I also think we have to think carefully about what we want it to do and what we do not want it to do. Now, there is one piece from the nuclear age that I think might help.

CONDOLEEZZA RICE:

We decided early on—really after the Cuban Missile Crisis with the Soviet Union—that we did not want to have an accidental nuclear war. And so we got a whole set of prescriptions and protocols and transparency. On the day of 9/11, once I got to the bunker, the first thing that occurred to me is “somebody needs to get in touch with Vladimir Putin.” Because we’re going on alert. We don’t want them to go into a spiral of alerts. We alert, they alert, we alert, they alert. Pretty soon you’re at DEFCON 1, which is war. And so Putin was actually trying to reach President Bush. So I got on the phone with him. I said, “The President’s trying to get to a safe location, Mr. President, our forces are going up on alert.” He said, “Ours are coming down. We’re canceling exercises.”

CONDOLEEZZA RICE:

That’s the way we managed not to get into an accidental war. Now, could you do that internationally, say with the big powers? Alright, we have this conflict. We disagree about the South China Sea, we disagree about Taiwan — but the last thing we want to do is stumble into war. And AI might either help us not stumble into war, or it might accelerate our stumble into war. So are there some rules of the road that we might want to write about things that we do want to do and things that we don’t want to do? I think there are lots of possibilities here, but we don’t really have governance structures that help us have those conversations.

REID:

This kind of increased transparency, this kind of increased communication, is one of the things that I’ve been noodling on: what the lessons are from nuclear, because within the cyber realm we have some new challenges that are distinct from nuclear but have some parallels we need to navigate. I wonder if there are lessons from the past, or things you’ve been thinking about, for establishing this governance. Is it a G20 effort? Is it a G7 effort? And what kinds of things should be happening to try to make sure we don’t, by lack of care, create accidental conflagrations?

CONDOLEEZZA RICE:

Yeah, I think you’re not going to like my answer, which is that the real truth about the nuclear age is that we came awfully close before we decided to do something about the potential for accident. And people said, okay, this is no way to live. We’re going to fix this. It’s hard to get people’s attention until something happens. Right now in cyber, I think we’re in a mutually-assured-destruction world. “You do it to me, I’ll do it to you.” And I do think that has prevented some of the worst excesses in cyber that everybody was worried about. You know, would you try to take down somebody’s grid? Would you try to freeze their financial system? But it’s kind of living at the edge to rely on mutually assured destruction in these areas, because the potential for miscalculation is very, very high. My fear is that until something happens that reaches a place none of us want to be, we might continue to whistle past the graveyard.

CONDOLEEZZA RICE:

But I would certainly hope that these conversations might start with like-minded countries. You mentioned the G7. I know NATO has had some discussions. You know, something interesting is going on: there has been less cyber activity around the Ukraine war than a lot of people expected. Certainly cyber attacks against NATO countries have not reached the levels we thought they would. I was Secretary of State in 2007 when the Russians basically shut down the Estonians with a massive cyber attack. They just shut them down. We really haven’t seen anything like that. So is there something in the calculus that’s saying, “okay, that might actually be an Article 5: an attack upon one is an attack upon all”? We’re in that very delicate world where people seem to be self-regulating a bit. I would love to see us take that self-regulation, that anticipation of “if I do it to you, you’ll do it to me,” and try to formalize it into something that looks more like what we did with the Soviet Union on nuclear weapons.

ARIA:

So obviously mutually assured destruction is one way to try to avoid a nuclear or cataclysmic war. Probably not our preferred method. A lot of people are saying that diplomacy and diplomatic methods are stuck in the past and that we need a new way to do diplomacy in this new era. Are there ways AI could help with diplomacy? Could each country have a hagglebot that is involved in diplomacy between countries?

CONDOLEEZZA RICE:

Well, I may be old fashioned on this one, and maybe it’s because I don’t want Secretaries of State to go out of existence. But I actually think that in terms of preparation, AI could help a lot. I think about all the time we put in trying to game out various scenarios: what might get the Russians to do this, or the North Koreans to do that. And maybe, using historical data and training the model on the vast experience with the Russians, or with the Chinese, or with the North Koreans, you could enable more efficient, more effective negotiations. I think that’s entirely possible. The one place I don’t think it really works is this: to get to what I called interest overlap, I had to go in the room and listen. I just had to listen. All the previous experience, all the previous training of a model on what they’d said in the past, would not have allowed me to walk into the room with Sergey Lavrov, listen, and hear, “oh, if I did this, they might be willing to do that.”

CONDOLEEZZA RICE:

And that’s the piece where I think the human listener just has an advantage. I could also read the emotion in the room around certain things. But in preparation for a diplomatic engagement, I could see where it could be extremely helpful. By the way, I’d also love for ChatGPT, or whatever comes after it, to just write the cables, so that my officers, instead of writing long cables about what they’ve learned about British politics, could let the machine do that and then go out and actually spend time with people.

ARIA:

I mean, I think you’re touching on what a lot of us have been saying: AI makes the human need for EQ even greater. It calls on us to be more human, with the copilot helping us — to your point, even when we’re talking about international diplomacy and war — to be a better listener, a better human, a better partner. So how can we have AI do that? It’s not about the fancy technology. It’s about us becoming more human, which I think is really interesting. Even in these high-stakes games, that’s so critical.

CONDOLEEZZA RICE:

Absolutely. And sometimes it could help with speed. When you’re in a negotiation and you need to run numbers pretty quickly, or to ask, you know, how would this work? Would this work in this circumstance? Instead of saying, “I’ll have to get back to you,” it might be useful to have the model telling you, “oh, here would be the implications of that.” I could see this assistance being very, very useful, and allowing the minister or the secretary to spend the time, as you said, on the human piece of it, the EQ piece of it.

REID:

I do think the general dystopia people imagine is the transference of all authority and decision-making to the machine, and what could go wrong there. And I do think we want to keep humans at these unexpected or critical junctures. That’s part of the reason, you know, I wrote the book and I’m doing these podcasts. I think it’s about elevating humanity in these things.

CONDOLEEZZA RICE:

That’s why I focus on this notion of instinct. It’s where the training of the model won’t tell you what the reaction ought to be. So during the Georgia War in 2008…

ARIA:

Let me jump in for a moment. Pi, could you explain?

Pi:

So the Russo-Georgian war was a relatively short war lasting only five days, but it had significant political and economic consequences for both countries. The war began when Russia invaded Georgia, claiming that it was protecting the breakaway regions of South Ossetia and Abkhazia. The Georgian government, however, accused Russia of aggression and imperialism, and the conflict escalated quickly. The US was actively involved in the conflict, both diplomatically and militarily. The US strongly condemned Russia’s invasion of Georgia and sent military advisors to assist the Georgian government. Diplomatically, the US worked with other nations to pressure Russia to end the war and withdraw its troops from Georgia. The war ended with a ceasefire agreement brokered by French President Nicolas Sarkozy, which required Russia to withdraw its troops from Georgia and created a buffer zone between the two countries.

CONDOLEEZZA RICE:

Sergey Lavrov, the Russian foreign minister, called me, and he says, “We have three conditions for ending the war.” I said, “okay.” He said, “The first is the Georgians need to go back to their barracks.” I said, “Fine, done.” He said, “The second is they need to sign a no-first-use pledge.” “Fine.” He said, “The third is, and this one will just be between us,” he said, “Mikheil Saakashvili, the Georgian president, has to go.” Now, this is totally unexpected. And I say to him, “The American Secretary of State and the Russian Foreign Minister don’t have a secret conversation about overthrowing a democratically-elected president of Georgia. And what I’m going to do, Sergey, when I get off this phone, is I’m going to call everybody that I can get a hold of—the Germans, the French, the British, I’m going to the UN with my people—and I’m going to tell them that the Russian condition for ending the war is the overthrow of the Georgian President.”

CONDOLEEZZA RICE:

And he says, “you can’t do that. This is secret.” I said, “I don’t have this secret conversation.” I’m not sure there’s an assist for that. I didn’t think that through. There was just something in me at that moment that says, “tell him that you are not going to have a private conversation about this, and you’re going to go out and tell everybody that’s what he suggested.” That’s instinct. That’s not briefing. That’s not a lot of thought about it. And so, it’s just kind of an interesting place where the human, aided, could be, I think, even more effective.

ARIA:

I mean, that is Condi being a badass is what that is. [Laugh] That was an amazing story.

REID:

Indeed.

ARIA:

Obviously, I’m so excited about all the potential positives of AI, but one thing that has at least convinced me to say that the US should never slow down, and that we need to keep going, and that, you know, a pause doesn’t make sense, is sort of the US-China dynamic. How do you think about China vis-à-vis the US in this new age of AI?

CONDOLEEZZA RICE:

I think we’ve never had an, I’m going to call it an adversary, like China—because I think the relationship is largely adversarial. There are some areas of cooperation, but it’s largely adversarial. And China is challenging us technologically. You know, we are decoupling from China technologically because their concept of the internet and our concept of the internet are irreconcilable. Rightly, people are worried that investment in certain Chinese companies will simply feed the PLA and come back to haunt us in Taiwan or the South China Sea. I think China’s a particular kind of challenge. The piece that I’m most worried about—and maybe this is because I’m a university person—is I really don’t want to get to the place where we’re decoupled from the Chinese people who might study in our universities. One of the great things about knowledge is it really is kind of borderless. We probably should try to keep the Chinese from getting certain high-end chips.

CONDOLEEZZA RICE:

And I’m told that they’re having trouble doing generative AI because we are denying them the highest-end chips. I’m perfectly happy to do that. But I’d like to see the international scientific community continue to operate, so that it doesn’t become proprietary in another sense: if you are a Chinese student at Stanford University or at MIT, I’m going to be suspicious of the fact that you’re Chinese, and I’m going to say, “You can’t be in that lab.” That would be a terrible cost, I think. Let’s try to keep as much openness as we possibly can. And as I mentioned earlier, if a Chinese graduate student finishes here and wants to stay here, I’m not quite in the “let’s staple a green card to their diploma” camp, but I’m someplace close to that. Talent is so important here.

REID:

What would be some of the innovations you would hope creators would be making that would help the geopolitical landscape? Right, so, it’s kind of like, you know, look here, look here, go in this direction.

CONDOLEEZZA RICE:

I’m not sure I would think about it globally. I would think about it more in terms of what it does for populations. So think of why globalization is under such attack these days, to the point that people talk about nearshoring or reshoring supply chains, or cutting people out of labs, and so forth—the kind of populism that is so dominant around the world now. I contrast that with the way we dealt with 9/11, which was everybody’s problem: we’re going to keep terrorists from doing these things, and there’s going to be cooperation in terms of intelligence, and we’re going to stop terrorist financing and stop suspicious cargo. I mean, real cooperation. We’re going to unify the way airports look, so that whether you’re in Dubai or Mexico City or New York, you pretty much know what to do. And then I look at Covid, and it’s my PPE and my vaccines, and it’s kind of the revenge of the sovereign state.

CONDOLEEZZA RICE:

And I think you’re seeing the revenge of the sovereign state. And I think that’s largely because there are large parts of populations that aren’t benefiting from globalization. And so I would go back to: what can you do for education, right? What can you do out in the, you know, boonies of some state, where a kid is not going to get well-educated, to make that possible? What are you going to do about tutoring for the kid who’s not going to get to go to a very good school? How are you going to help the teacher? I know it may sound like small ball, but unless we can improve the prospects for citizens, you’re going to continue to have fearful populations that are susceptible to the siren song of populism, nativism, isolationism, and protectionism. I call them the four horsemen of the apocalypse. And if you’re going to deal with the four horsemen of the apocalypse, somebody’s got to deal with the lack of opportunity. And so whether it’s in healthcare or in education, I would say keep pushing that dimension of it. And then you will have a basis in these societies for leaders to want to realize the benefits of cooperation and globalization—which, 30 years ago, people didn’t question.

REID:

I completely agree. Alright, we have a couple of these rapid fire questions that we ask everybody. So I will open. Is there a movie, song, or book that fills you with optimism for the future?

CONDOLEEZZA RICE:

I just saw Chevalier, and the story of, you know, the son of a French slave owner and an African maid who turns out to be one of the great musicians of all time—a violinist and a great fencer. And even though it didn’t turn out so well for him after the French Revolution took place, I just found it—it’s a true story, by the way—I just found it kind of inspiring to think about where you find talent. That you don’t always find talent at MIT and Stanford, with all due respect to my institution. But man, there’s talent everywhere out there. And so it kind of inspired me to think about that. Where else can we find talent?

ARIA:

Absolutely. They always say talent is distributed evenly and opportunity is not. So how can we search it out? The next question—and this can be serious or not—what is a question that you wish people would ask you more often?

CONDOLEEZZA RICE:

I wish people would ask me more often, how do you define being human? How do you know what it is to be human? And particularly given our subject matter. You know, I’m deeply religious, and so part of my answer is about creation and God and being human. There’s something pretty special about being human. And I would like that question asked more often.

ARIA:

Could you leave us with a final thought, because we love your optimism, on what you think is possible to achieve in the next, you know, 15 years if everything breaks humanity’s way? And what’s the first step to get there?

CONDOLEEZZA RICE:

Well, I think about 15 years ago and how we wouldn’t have been having this conversation. And so I think the possibilities are pretty limitless. But if I could write the script, it would be one in which some of the persistent problems of society are actually making progress forward. That we don’t have the sense that we’re kind of stuck asking the same questions about inequality, the same questions about access to high-quality education, the same questions about access to healthcare, the same questions about do people really want to participate in their democracy and how can they. And that technology has been supportive of, and an assistant to, better answers to those questions than we have now. Because even though, if you look at, as is said, the long arc of history, human beings have made a lot of progress—and nothing makes me angrier than people who say, “Well, we haven’t made any progress.” Oh, come on. We’ve made a lot of progress. But it is true that we do seem to be stuck asking the same questions over and over and over again. And so maybe this time around, on some of those perennial questions, we can actually make real progress, and technology can help us. That’s my wishlist.

ARIA:

Fantastic. What a wonderful wishlist to end on.

REID:

Possible is produced by Wonder Media Network, hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Edie Allard, Sara Schleede, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles. Special thanks to Eryn Witcher Tillman and Little Monster Media Company.