This transcript is generated with the help of AI and is lightly edited for clarity.

//

ARIA:
Reid, it is lovely to see you today. First question: I want to talk about a fascinating academic paper out of King’s College London that came out last week, looking at how frontier AI models behave in simulated nuclear crises. We often compare the AI age to the nuclear age, although there are many differences. In this case, researchers ran 21 war game scenarios involving territorial disputes, regime survival threats, and Cold War-style standoffs, and they had models like GPT-5, Claude Sonnet 4, and Gemini 3 play against each other as decision makers. What they saw, across over 300 turns of play and nearly 800,000 words of reasoning, was that the models unfortunately chose nuclear escalation almost every time. Tactical nuclear weapons were deployed in 95% of scenarios, and full strategic nuclear war happened 76% of the time.

ARIA:
And what’s maybe more surprising is that the models weren’t behaving randomly. They generated long chains of reasoning explaining their choices, invoking classic deterrence logic like the “rationality of irrationality.” Perhaps most important, none of these models ever chose accommodation or surrender in any scenario. And recently in the news we’ve seen the interactions between OpenAI, Anthropic, and the DoD, so clearly using AI in times of war and on battlefields is top of mind. What do you think this says about how AI might impact nation-level conflict?

REID:
It’s a great question. The depth of it actually illustrates a bunch of different issues, and I will touch lightly, as opposed to deeply, on each one of them. The first is, to some degree, when we do all this kind of academic or game-theoretic work, part of the point is to highlight where we may need to adjust our human responses. And that’s what generates all of the analysis (texts, political science, et cetera) which the AI agents essentially train on. This was, to some degree, well played out in a very old movie which I recommend to people, WarGames.

ARIA:
It’s so fantastic, even I’ve seen it.

REID:
Yes. So it’s actually trained on the kind of pure rationality of a whole bunch of human text. But the problem, of course, is that those human texts are trying to balance out against a set of human biases, and those biases actually have some very important roles to play in getting to a compassionate, humanist future. And the classic punchline (spoiler for those who haven’t seen it) is that the human character gets the AI to play a bunch of no-win games so it realizes that there are no-win scenarios, and that the best play is not nuclear escalation in this particular case. Part of that is: where is the value of the human in the loop in these circumstances?

REID:
And what a lot of people don’t realize is we actually had a number of different circumstances where we were very close to nuclear war, and a human made the decision of saying, actually, I don’t believe what I’m seeing in my sensors, because I don’t think that people would be that crazy and stupid. There was a Soviet lieutenant colonel who did that. There’s a stack of things it’s important to realize in these circumstances: what’s the actual context of the circumstance? What’s actually happening? What might be the irrational but smart thing to do, which is to de-escalate and downgrade. Actually, I was just at a kind of gaming scenario event where I was listening to Condoleezza Rice talk about what happened on 9/11.

REID:
When 9/11 was happening, one of the things she drew on was an earlier war game scenario that had gotten prioritized on her calendar, one she had initially thought she didn’t need to do. When we were going to higher alert because of 9/11, one of the things she said was, we’ve got to call the Russians and tell them we’re going to higher alert, and it’s not because of you; we don’t need to have this escalating thing. And the Russians said, yes, we understand, because we’ve done the same thing too. We understand you’re going to a higher alert, and we are going to a lower alert right now, because we understand exactly what’s happening here, and we don’t want to create World War III, right?

REID:
Just out of the mechanics of higher alert begetting higher alert. So you get to, well, the AI people would say, hey, we can train the models not to do this. And it’s like, well, you can and you can’t in this context. You can in some ways. But it’s this kind of contextual awareness of the circumstance. And then, of course, you might say that winning the game in this case is de-escalating, because even if we’re surrendering or doing something else, the question is: what’s the game that we’re playing? And the game that we’re playing is actually, in fact, survival of the human race, and navigating that is really important. So anyway, that’s my first short answer to your first point.

REID:
The other thing, I think, is whether people are paying attention to the details of how Anthropic was actually responding to the Department of War. It was not saying, we will never help in any of these circumstances. It was making a judgment about where the capabilities of these models are right now, and we are experts in the development of the capabilities of these models. There were two different points. One was mass surveillance of U.S. citizens, which is: we as a private company are not going to provide capabilities for this, full stop. We’re a country of freedoms and liberties, and we’re saying that’s a freedom and liberty we have. And the second one was, on making autonomous lethal decisions, we don’t think the technology is there yet.

REID:
We think that provisioning it is actually, in fact, bad, and we are the experts on this stuff. Now, to some degree, Hegseth responded in a way that validates Anthropic’s detailed point. Anthropic wasn’t saying, we’re not patriotic. They weren’t saying, we’re not supporting our country in times of war. They were saying, we, as the experts, are making this judgment. And Hegseth is going, no, I’m the big dog here, you do what I tell you to do. And it’s like, no, you’re failing the Rorschach test of what the conversation is, which is: is that technology actually, in fact, ready yet? Hegseth is being the stupid war game computer: no, I’m going to escalate, I’m going to nuke them first.

REID:
And you’re like, oh my God, that’s super scary. Because the Anthropic people here are actually being highly reasonable in what they’re doing, and what Hegseth et al. are trying to do is position them as unpatriotic, as claiming better decisioning than the Department of War. Actually, I think if you go to the rank and file within the Department of War, you’ll find them going, no, we understand what’s being said here, which is that the technology is not ready. Right. And that’s actually, in fact, the point of what’s going on. It makes Hegseth himself the crazy AI computer that escalates into bad scenarios, whereas Anthropic is making the decisioning framework within the Department of War much smarter.

REID:
And so I think it’s important for those of us who are smart people to say, no, this is not a politicization of red versus blue, of patriotism versus not. Because if you look at the details, what Anthropic is actually saying is: we’re talking about when the technology is ready; we are, in fact, believers in democracy and in elected office (Hegseth wasn’t elected, in terms of this); and we’re being smart about what this means for something that can end human lives. We think the technology is not ready for that, and we say so from an independent point of view, as experts in this technology. So I thought that was an important part of the discussion that was badly covered in the general discourse.

ARIA:
We’re all used to founders and CEOs boasting about their technology and saying it can do more than it actually can, saying, oh, no, it’s ready, it’s ready. So when a founder CEO says, my technology is not ready, we should probably trust them; they probably know best. If they’re the ones saying it’s not ready yet, we should pause. You also mentioned Condoleezza Rice. For those of you who haven’t listened, she was on Possible last year, a phenomenal episode. She understands geopolitics and war perhaps better than most. And one of the–

REID:
Better than the vast majority. (laughs) Not most.

ARIA:
(laughs) The vast majority, everyone but five. Or maybe she’s the best.

REID:
Yes, she’s in the set of the best.

ARIA:
And so one of the things she said, similar to what you were saying, is that the personal relationships mattered. When she got on the phone with someone, it’s because she had a relationship with them. That sort of soft diplomacy mattered. It wasn’t always puffing up your chest and saying, I have nuclear weapons. It was, okay, let’s talk this through. So, thinking about these AI models that continue to escalate or lead to nuclear war: do you think that’s a problem in the models themselves? Or are they just reflecting back that perhaps the nuclear deterrence of the last 30 years was actually much more aggressive and bellicose than we might have thought?

REID:
There are two different problems in the current models themselves. One is that the models are reflecting only the set of things we have generated as textual analysis and reasoning, material we use to modify our own behavior, but they’re not including us and our behavior in that. That material says things like: sometimes you need to not be blinded by a human fear response or an anger response. Because the body, the corpus, of the material is meant to be training human beings. And it’s one of these places where you go, well, actually, in fact, these models are not human beings.

REID:
The fact that they’re embodying how we train human beings suggests that this is exactly the kind of area where you’d want the humans to still very much be in the loop, and it’s a very precise point. Now, the question is, could you train AIs in ways that are not only human-in-the-loop? And the answer is maybe, but the nature of the technology is something you’d have to look at. This is one of the things I’ve learned from being in a number of conversations at the Vatican about what AI is, since I briefed Pope Francis on AI.

REID:
And one of the reasons it’s very useful for us as technologists to be in those conversations (we were talking about parole and the use of the justice system) is that there’s a role for mercy in human systems. Are you training your models to have a good sense of mercy in them? We normally think of efficiency, we think of truth, we think of weighing all the evidence, which are extremely important parts of this. But you go, well, would you want to have, call it, criminal judgment without some notion of mercy in it? You’d go, well, not really. And can we train it? That’s an interesting question, et cetera.

REID:
And so I think that’s another part that this highlights, and it’s really important for this stuff. As Condoleezza Rice puts it, it’s not that I believe there’s never a place for war or violence or conflict; it’s not complete pacifism. But the notion is that you want to be driven by a North Star of trying to reduce human suffering. We’re not trying to prove that I am the most alpha or the biggest man, et cetera. Paying attention to reducing human suffering across the board is an important thing, and reducing human violence across the board is a really good thing.

REID:
And you make decisions that way, not by being unwilling to step up in the right circumstances, because sometimes you face people like, I think, the Ayatollah Khomeini, who has been a source of violence and terrorism in a very bad way across the world, including in the Middle East. But you also need to pay attention to the fact that whenever you engage in this, you’re causing a lot of people to die and suffer. What’s the way that you minimize the amount of deaths, period? Not, hey, it’s only “all of those people who we don’t care about” dying.

REID:
How you navigate that is, I think, very important, and it plays into how you’re building AI models: what the notion of compassion is, what the notion of mercy is, what the notion of minimizing deaths and suffering overall is. That’s where you’d want all of this dialogue to be, as opposed to, you know, I am the big man here, you do what I tell you to do.

ARIA:
Yeah. Whether or not we are patriotic Americans who want the best for our country, to your point, we certainly want this mercy and compassion in geopolitics and international relations. And as we are training this AI, that has to be included. We can’t just build smart machines; we must build something that takes those things into consideration as well. So let’s head to a slightly lighter note than nuclear war. Last week, a company posted a job listing for something called an Agentic AI Developer Advocate. And this quote-unquote employee isn’t necessarily a person. The company is looking for someone to build an AI agent, trained on certain workflows, that can create content, run growth experiments, and provide product feedback.

ARIA:
The company will pay $10,000 a month to the developer who maintains that agent. In other words, instead of hiring a person for a role, they’re effectively hiring a piece of software that someone built and is maintaining. And there are humans in the loop; they want humans to check all of the content that’s going out the door. So my question for you is: is this just a stunt, or are we actually seeing the early version of what a new labor market will look like? Because we’re all trying to predict that labor market. Is it this, where people build agents that work for them and are hired out to companies?

REID:
Well, it’s definitely a clever stunt, so credit to the stuntiness. But it’s only partially a stunt, not just because it plays on a bunch of current human fears and concerns and therefore gets a whole bunch of attention, but also because it’s clearly a version of, call it, some new labor market. The notion of having AI agents doing work is already present.

REID:
Part of the reason the question’s good is that this is a lens into where certain parts of work and labor are moving.

REID:
Now, part of it is, I think that we as a society, we as industries, we as companies, and we as technologists need to be saying, hey, look, this is a natural part of building productivity. Building productivity is part of how we create progress in society. But we do, of course, need to pay a lot of good attention to how we get human amplification. It’s part of the reason why I wrote Superagency. Superagency wasn’t just saying, hey, be a little bit more positively minded (bloomer versus gloomer, bloomer versus doomer). It was also saying to the zoomer technologists, hey, pay some attention to being bloomers as well.

REID:
And be thinking about how you build this stuff in relevant ways. It’s not to say, make your technology less useful, less economically impactful, or never do replacement. Of course, sometimes you do replacements; we replaced a bunch of grooms, people who were doing horse work, when we started building cars. But also pay attention to a lot of amplification in terms of what we can do. Now, that also gets to questions around society and industry, because you say, well, even if we’re building mass amounts of productivity amplification, what are the ways we can help human beings economically, but also in finding meaning in the things that they’re doing?

REID:
Now, part of the difference I have with some technologists who say, oh my God, this is going to be here in a couple of years, is that I actually think there are a whole bunch of human organizational things that pace this.

REID:
One, that timeline is not necessarily the case, and two, it’s not like AI robotic factories are going to be building other robots any time really soon. There’s a lot of human-in-the-loop and human combination, and we should be building AI agents to help human beings in those circumstances and help them navigate. It’s also one of the reasons why, for (gosh, I’ve now lost track of the number of years) I’ve been trying to advocate for the creation of key human-assistance agents, so that even as we have job transformations coming, everyone can see some of the benefit of AI.

REID:
If you have a medical assistant, an educational assistant, and a legal assistant all there, then all of a sudden you’re like, hey, I’m a working-class person, but I get benefit, and my children get benefit, from the medical assistant, from the educational assistant, from the legal assistant. So even though jobs are changing (and by the way, a lot of white-collar jobs are going to be changing), you’ve been seeing benefit. Most people do not like to see job transitions. The people who were doing all the horse work did not like to see cars coming. The people who were doing all of the hand looms did not like to see the power loom.

REID:
But the embracing of those transitions is part of what makes an economically prosperous society, part of what makes sure that your children and your grandchildren are part of an economically prosperous society. Those job transitions are hard, not easy. And you say, well, let’s at least slow them down. Well, slowing them down means that other people get ahead. This is part of the reason why, in Superagency, I said, look, England did not invent the industrial revolution, they just embraced it more thoroughly, which is why they had a centuries-long empire, whereas France, which had roughly four times the population, and China, which had ten times the population, did not.

REID:
And it’s one of the reasons why it’s important to say, hey, we as a society want to embrace the economic prosperity that comes from having AI agents doing work, because it’s actually, in fact, really important to the future prosperity of our society. And people say, well, but there are all these wealth-distribution issues: how does it divide between us and those people, between these companies and those companies, and not just big tech companies? The short answer is you want to have the companies that are generating that wealth, because then you can address the distribution issues. If you mess up the fact that you’re leading in it, then you don’t have anything to distribute. Right.

REID:
And it’s important to say, hey, be foot-forward on the economic things and then, of course, also address the distributional issues.

ARIA:
Well, when you’re talking about this job transition and thinking about units of work, you already have people saying, wage work is for suckers, salary is for suckers; the thing that we want to do is put our capital to work for us. And in the future, is the economic shift going to be from a unit of a person’s time to a person’s software? Already, we have someone who is working at CVS or Starbucks: they put in an hour of work to get their paycheck, and there’s nothing they can do to get more money except that hour. But we have other people who are augmenting themselves, who have AI helping them out.

ARIA:
And so, in the future, is that gap just going to get even greater, where the nanny who’s coming to watch the kid is being paid hourly, but the person who’s creating software has AI agents with a multiplication effect that the other folks don’t have?

REID:
Yes, we are on a trend in that direction; we’ve already been on that trend. It’s actually part of what led me to the path that I was on. Both my parents are lawyers, both highly paid. And I was like, well, actually, you want to choose career paths and work that are not by the hour. You could put this in various colorful ways: how do you make money while you’re sleeping? Because you have capital, you own assets, et cetera. There is a question about whether there’s a limit to how much you can charge per hour as you grow.

REID:
It’s part of the reason why, in my first book, The Startup of You, being entrepreneurial means thinking beyond just being hourly. Now, I think there will still be a lot of people whose jobs are hourly, and that’s what happens. But I think it’s moving much more in that direction. And of course, what we want is for more people to have this kind of benefit of amplification beyond just the hourly benefit. Now, in a bunch of different middle-class and upper-middle-class jobs, you get a little bit of this from the derivative of your 401(k) being invested in the stock market and all the rest.

REID:
It’s one of the things that I love about the Silicon Valley building ethos, which is: include everybody on the equity table in terms of what you’re building. Assistants and everybody else actually, in fact, get some equity too. And everybody-gets-some-equity is, I think, an important lens by which Silicon Valley says, hey, this is the way we operate, this is the way we create. It’s actually one of the important general lessons from Silicon Valley: not just technology, not just great ambition, but that kind of shared equity really matters. And we want to move more work toward that model. Now, it may be that, hey, if I’m doing nanny work, I don’t know if there’s an equity arrangement; maybe there is, training AI, et cetera.

REID:
But generally speaking, that’s the direction we want to move in.

ARIA:
One of the things we always talk about with this job transition is manufacturing. Every politician is paying lip service to, how do we reshore jobs to the United States? My personal opinion is that none of the policies we’ve enacted over the last year were actually pointing in that direction; we’re actually losing manufacturing jobs. But you have long said that the only way this is going to be successful is if we’re using AI and if these are highly skilled jobs. So, the question: we saw last week it was announced that Bob McGrew, who used to be OpenAI’s chief research officer, has launched a startup called Arda that’s trying to automate manufacturing using AI. The company is reportedly raising about $70 million at a $700 million valuation, and it’s backed by firms like Founders Fund and Accel.

ARIA:
And the idea is to connect frontier AI systems to the physical world. A lot of people are talking about this and trying to do it. In particular, this platform analyzes video from factory floors and uses that data to train robots and software systems so they can coordinate the entire production process, from design to finished goods coming off the line. And one of the reasons they want to do this is geopolitical: they want to make manufacturing dramatically more automated so that it’s economically viable to move production back to the US and Europe, instead of relying on China or Vietnam or other countries that can produce goods more cheaply. So people reacted to this the same way they reacted to the developer advocate role we just talked about.

ARIA:
They were anxious, because this sounds like the beginning of full job replacement. Instead of creating more manufacturing jobs for Americans, we’re saying, oh no, we’re going to onshore, but we’re going to onshore with robots. What do you think about that take? And is this actually possible, or are we still years and years out from these AI robots taking our jobs?

REID:
First, I acknowledge the worries. The classic case is the American auto worker. When companies were trying to figure out how to increase their profit margins, offshoring jobs to other kinds of places was among the things that had an effect. Now, part of it is, I think the auto workers and the companies weren’t aligned as well as the Germans are, where they say: we should be collaborating on this together. It shouldn’t be oppositional behavior between company and union; we should be working the problem together to increase our productive capability everywhere, and both share some of the pain in doing it, as opposed to, no, you take the pain; no, no, you take the pain.

REID:
And part of what happens with offshoring is that capital can always fly; capital can offshore. That’s actually, in fact, difficult to change. So I understand the anxiety, especially given that there is a bunch of capital that says, well, we don’t care about you, we’re not going to share anything, we’re just going to do it all ourselves. So it’s not just “trust us.” I do think it should be aligned this way, but it needs to be aligned together, because the industries that will succeed in global competition are the ones that are AI-amplified.

REID:
If you go, oh, I’m going to slow down or break your AI thing (which presumably you can do in at least a number of circumstances), you may delay things. Look at Hollywood as a little parallel to this: you’re not allowed to use AI in the writing room. And of course, what happens is probably 90% of everybody goes home, uses AI at home, and brings the thing in themselves, but the industry is making itself a lot less efficient relative to other approaches, versus saying: how do we use AI in the writing room, how do we make ourselves work well in this circumstance, and what are the ways we navigate in order to do this?

REID:
And so I think this is a similar kind of thing here, which is to say, hey, look, we need to do this; let’s do it maximally collaboratively, right? And that doesn’t mean no job changes. Some jobs will change, some jobs will go away, some jobs will be created, but you need to do that. And frankly, we won’t have a successful manufacturing industry in the US without AI. This is precisely what I was saying when I was going around asking people, would you like the manufacturing industry to return? And they go, yes. And you go, great, then you should be trying to figure out how to use these AI companies, and how to shape these AI companies, to help make that happen, because this is frankly the only way it’s actually going to happen.

REID:
And then if you say, well, I want to be working in the manufacturing industry, it’s like, great, then people need to work out together how to collaborate to make that happen. So I look at Bob’s company as a potentially very patriotic company, given what’s happening here. I think we need more of that. We need the tech companies helping the other sectors and American jobs do this. And by the way, a little bit like the developer advocate, it may be that what we’re really doing is helping manage, train, and evolve the AI in various ways. People say, well, but there are too few jobs doing that in manufacturing. It’s like, well, then let’s get into a lot more different kinds of manufacturing.

REID:
It’s a little like when people say, well, I’m going to be a lot more productive, so the company is going to be a lot smaller. Okay, then let’s have a lot more companies, right? There’s no necessary restriction on the number of companies, and there are many different kinds of jobs where there’s essentially infinite demand. Now, there is a bit of a problem you have to think about: one of the benefits we had post-World War II was that we had the only manufacturing base that hadn’t been bombed out by the war. So you could have a high school degree and own a house and two boats and send your kids to college.

REID:
Well, the bar is higher now, because there’s manufacturing everywhere in the world, including in many cheaper places. China has done a great job of creating an enormously efficient manufacturing industry. It’s like saying, hey, we’re playing in the Olympics: we’ve got to compete on equal terms with everybody else, or at least understand that a bunch of the different games are played on equal terms. We can’t say that everybody else has lead weights on their feet and now we’re running the race. So you have to play into that, and play to: how do we increase our performance in terms of how we’re operating?

ARIA:
Makes so much sense. Reid, really appreciate it. Thanks so much.

REID:
Always fun.

REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.

ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles.