This transcript is generated with the help of AI and is lightly edited for clarity.
///
REID
As Americans, we should be focused on: how do you make artificial intelligence into American intelligence? Ten years from now, we will still have doctors and doctors deploying expertise. Now, the expertise will no longer be, “Well, I spent years memorizing everything and I am the walking oracle.” It’s the, “No, no, I am the expert thinker and navigator of this set of tools.” One of the more subtle ways to think about what AI is today is that it’s an intelligent group of experts’ consensus opinion — it’s what it broadly does when it has enough data to do that. That’s when AI is broadly at its best right now.
///
ARIA
Reid, great to be here today. Excited to hear your reactions and riffs on what’s going on in the world. And last week, NVIDIA’s market cap crossed $5 trillion, which is greater than the GDP of India, Japan, and the UK according to the IMF. Jensen Huang says NVIDIA went from 95% market share in China to 0%. And he says, and I quote, “I can’t imagine any policymaker thinking that’s a good idea.” And the reason for that was the combined impact of U.S. export restrictions and Chinese countermeasures. And so, obviously, as the CEO of a company, you’re not going to be excited about your market share going from 95% to 0%. The U.S. did this because of national security issues. They want to make sure that the U.S. is strong in AI. Would love to hear what you think about this.
ARIA
Like, is this an appropriate course of action, and should we be reversing this decision?
REID
Well, if the entire game of winning or the most important part of the game were, like, sale of chips, then obviously it’s a bad idea. But actually, in fact, that’s only a small part of the game. I mean, it’s an important but small part of the game. And what’s really important is the development of the AI software tech, of how these models do everything from accelerate science — you know, in the case of Sid and my Manas AI — help cure cancer, you know, et cetera. And it’s those outputs. And that, I think, is the thing where the chips are the driving determinant for being able to do that.
REID
And when you want to say, “Hey, we want to maintain the competitive edge for our risk investment or multi-decade investment in our companies and doing that,” actually, in fact, you know, kind of keeping the compute to us and our allies and whatnot is actually, in fact, a very good way to do that. And that’s the important part of the game. So, you know, I think, not surprisingly, given that you know what NVIDIA and Jensen’s business is — we want to sell as many chips at as high a price as possible to as many people as possible — that his answer is, “I should be able to sell chips to everyone.” And that’s not actually, in fact, the whole game. Now, that being said, I do think that these questions around, like, will you have a Chinese internet
REID
Or actually, you have a Chinese internet and English internet and everything else internet. You also have Chinese AI, US AI, and hopefully, like, European AI and all the rest as a kind of way of playing this out. And I think that’s — actually, I don’t think that’s a problem. It’s a natural thing that’s going to happen anyway, and I don’t think that’s, per se, a problem. Now, I do think that the really key thing that, as Americans, we should be focused on — how do you make artificial intelligence into American intelligence — is, “How do we have this AI revolution most help, like, the entire swath of American industry?” “How do we get it deployed? How do we get productivity amplifications? How do we get entrepreneurial success with it?”
REID
And all of that is, I think, what is actually, in fact, most important from an American policy perspective. And, you know, frankly, when you line up where China has strengths and where we have strengths, the compute infrastructure is one of the few areas where we actually have kind of a decisive edge. The Chinese have much looser data privacy — they can throw all kinds of things in: they don’t care about Hollywood IP, they don’t care about, you know, private data — like, all that stuff can be thrown into the training. They have an amazingly large population that acts with the vigor of immigrants, and they produce more, you know, STEM graduates than the rest of the world combined.
REID
And given that, you know, we, as the U.S., seem determined to persuade all of the smart Indians to no longer come here, we’re even more disadvantaged in the STEM arena. So, like, when you list out the advantages, the compute advantage is one of our very few advantages that we need to hold on to in the competition for creating AI, remaking industry, et cetera. And, you know, the other thing that is, unfortunately, by default a Chinese advantage — one that I think we want to try to make up ground on as much as possible — is adoption and deployment. I think, actually, in fact, one of the really key things — it’s not just scale compute, scale data, scale teams, but also scale deployment.
REID
And it’s one of the things that we need to have happen. Now, some of it happens by natural action — there will be new blitzscaling companies that start small and get very big using it. But we also, of course, want as much of traditional, experienced, established American industry, you know, adopting this curve really well and effectively. And I think that’s where the other kinds of things we need to be doing come in, in order to get U.S. industry ahead. And that’s part of the reason why we want the compute most oriented toward helping U.S. and Western democracy industries.
ARIA
Well, some people might say that this is sort of the same thing as a lot of people have been talking about — how U.S. soybean exports, for instance, to China have recently gone to zero. So, people are sort of feeling bad for the soybean farmers because they have this huge market that they can’t export to anymore. Jensen’s sort of saying the same thing: “I had this huge market that I can’t export to anymore.” Do you think those are the same? And, sort of, what does our government owe these U.S. companies when they’re saying, “We’re actually going to sort of disappear one of your main markets”?
REID
Well, like I said, you know, a five-trillion-dollar company that has no problem selling its entire book at very high prices — that’s different than the soybean farmers. So, I think the general establishment of tariff warfare will, in fact, harm a variety of American businesses. And the problem is, well, who else is going to buy the soybeans? And, you know, what you have is, well, we established tariff warfare, the soybeans are not being sold overseas, then you have to have government bailouts — which means, you know, your taxpayer, the person who is working two jobs, making ends meet, is paying for the soybeans that otherwise would have been sold overseas, bringing in, you know, kind of commerce money to make things happen. So, the soybeans — they’re not at all the same case.
REID
But I do think that, you know, if, for example, it was the, “Hey, we’re establishing this,” and there wasn’t already, frankly, infinite demand for NVIDIA chips, then that would be a different thing. But there isn’t infinite demand for soybeans. And it’s one of the reasons why, you know, I think that a lot of our business success comes from our international trade relations — like, basically, prosperous countries have good export regimes. And it’s true of our tech industry — one of the things that’s underappreciated, generally speaking, about our industries — that we get massively more revenue from the rest of the world than we do here. That’s part of what creates prosperity for us. And, by the way, NVIDIA can still sell chips to many other places other than China, and so can still do that.
REID
But it’s also, like, there’s, you know, for the foreseeable future, there is just massive demand for NVIDIA chips. So, not selling them to China actually, in fact, doesn’t create any real dent in any U.S. business interest, including NVIDIA’s.
ARIA
I was about to say, every tech startup we talk to seems to want to get their hands on more NVIDIA chips. So, they have a willing market, that’s for sure. If you look sort of long term, does this decoupling of the U.S. and China lead to sort of two incompatible AI internets? Like, are there long-term harms that could come from this — sort of us separating out from one of our trading partners?
REID
Playing the world industry competitive game by restricting chip flow to China — does that mean they build their own native chip industry, and then suddenly we no longer have the advantageous lead with chips that we have? Now, China already has that as a top mission anyway, and has had it on a number of different fronts for at least 10 years — since, I think, 2014 or something. I do think we will have a bifurcated technology industry. And I think that bifurcated technology industry has all kinds of complications to it — mostly geopolitical, and in economic might, in geopolitical influence, and all the rest.
REID
And it’s one of the reasons why I think it’s important for every country, including ours, to say, hey, it’s a good thing for us to actually have our tech industry have a lot of global reach, global customers, global platform establishment. And it’s one of the things that we actually benefit enormously as a country — all of our industries — from that. And I think it’s one of the things that everyone should have that is a kind of intrinsic strategy. It’s like, how do we succeed as a, you know, kind of a global industrial and economic power as part of doing it.
REID
Now, that being said, there’s all kinds of natural geopolitical things that tend to go to, you know, kind of technology sovereignty, digital sovereignty, et cetera — to breaking up the internet, to saying, “Hey, I don’t want to use your AI, I want to use my own AI.” And I think that’s part of the wrestle now. The thing I worry about in, you know, kind of the anti-globalist and isolationist tendencies in the U.S. is that we are much better off the more our partners and friends — you know, the world — use U.S. tech. It gives us an enormous amount of economic power, enormous soft power — it’s a good thing. And if you’re driving them to say, “Okay, don’t use us, use the Chinese, use others,” you’re losing a competitive race. And so, I think the issue is not so much—
REID
—I think we’re naturally bifurcated in various ways. You know, just go ask DeepSeek about Tiananmen Square — you know, there’s a highlight point. The issue is to have people trust our ecosystem more than the other ecosystems available to them. Because then you both — you know, we as the world, but also we as broad American industry — benefit from that a lot.
ARIA
Not to be flippant, but every year since ChatGPT was released, a group of scientists and technologists have gotten together to put out a pause letter, saying that AI has gotten so good that we need to pause now and look at safety concerns. I know you are someone who cares about safety, but you also care about AI development. A few weeks ago, we had over 800 public figures, including AI godfathers like Geoffrey Hinton and Yoshua Bengio, also the Apple co-founder Steve Wozniak, and other business, political, and cultural figures. They put out this letter that says that, again, we should pause before we have super-intelligent AI, because we have to think about the safety concerns. Is this just a retread of previous letters, or is something different this time? What do you think about this recent open letter?
REID
Well, on the earlier pause letter, as you know, I thought that one of the things that was a mistake about it is you say, hey, we issue a pause letter — then the people who care about the values, let’s say human alignment, risk, et cetera, pause and slow down. The people who don’t, do not. And therefore you’ve just increased the level of risk as the AI systems are developing. And you have to account for that possible outcome in calling for a pause. And then they say, “Well, but the other people understand that they should do that too.” And it’s like, no, actually, if I understand human beings at all, we divide into groups and we compete with each other. And so, other groups say, “Hey, great, you pause, and I won’t.”
REID
And, as a matter of fact, there were original signatories of the pause letter who were like, “Yeah, everyone else should pause — well, I won’t,” in order to, you know, build Grok, you know, kind of as an example. And I think that the — so the important thing is to say, like, the pace of development will continue afoot, and we are not going to get to a collective agreement. I mean, just take, for example, climate change, where you can see impact on weather, impact on temperature, and other kinds of things — you can’t even get to a global agreement on that. And then the stakes of, like, who develops, you know, AI — call it super capabilities — early, are huge. That will continue afoot. So, I think it’s a wrong strategy for this.
REID
It’s like, let’s wait until we have a global committee decide that it’s okay. I don’t think people are going to bind into that.
GEMINI AD
This podcast is sponsored by Google. Hey folks, I’m Amar, product and design lead at Google DeepMind. We just launched a revamped vibe coding experience in AI Studio that lets you mix and match AI capabilities to turn your ideas into reality faster than ever. Just describe your app and Gemini will automatically wire up the right models and APIs for you. And if you need a spark, hit “I’m feeling lucky,” and we’ll help you get started. Head to ai.studio/build to create your first app.
REID
The question is contributing: what are the risks that we should try to mitigate, what are ideas for mitigating those risks, and how do we make those ingrained in what we’re doing? You know, I think, actually, in fact, one of the things that’s frequently mistaken in this dialogue is that tool sets for increased safety and increased, you know, management are also possible in the future. Like, one of the things that we — when you just study the evolution of GPT-3, GPT-4, GPT-5 — they actually much more naturally align to training, to being aligned with our societal and human interests. It’s one of the things that comes out of the scale, so you get an increased tool set for it. And then, you know, the other thing is, like, you know, the classic thing is, well, what do you mean by super-intelligent, right?
REID
GPT-4 was already super-intelligent in ways that no individual or group of individuals could match — not the least of which was speed, but also breadth of knowledge and so forth. And so they said, “Well, but, you know, it’s when it’s, like, you know, kind of, you know, deity intelligence.” And you’re like, “Well, okay.” By the way, one of the things I think is useful — it’s an exponential curve. So, it could show up next year and we wouldn’t know, because exponential curves lead to magic. And it’s like, okay, you know, I understand the concerns there. I’ve helped build exponential curves. I, you know, at least understand that much math. Now, usually it’s an exponential curve to what? So, for example, if it’s an exponential curve to increasing super-intelligence capabilities that are like savants — which is what we’re seeing happening now —
REID
That’s actually not necessarily, you know, that deeply alarming. I do worry much more about AI in the hands of humans than AI by itself. I tend to feel that the worry and the focus should be not on, you know, “the robots are coming,” but, actually, in fact, on how we navigate, you know, artificial intelligence being amplification intelligence across a whole bunch of human beings. Like, what happens with, you know, rogue states, terrorists, criminals, et cetera, as ways of doing that. And so, like, focusing on what this means for human, you know, deployment and use, I think — including, by the way, stupid human use. It’s like, for example, if someone said, “Hey, we should take the nuclear defense grid, and we should make it all controlled by the AI!”
REID
And I was like, there have been movies about this all the way from Dr. Strangelove to The Terminator and all the rest. (laughs) It’s like, no, no, no — let’s not do that. Right?
ARIA
(laughs) It didn’t work out too well.
REID
That being said, obviously, a number of these folks are pointing at something real. You know, Yoshua Bengio and I have talked about this a number of times — enormous respect for him, deep thinker — and, you know, he is correct in that kind of metaphor of people holding different parts of the elephant — you know, the trunk and the tusks and legs and the tail and the ears — like, hey, there’s an issue here. He is correct. There’s an issue here. And then what tends to happen is, well, does a precautionary principle mean that you should do nothing until the issue is there? It’s like, actually, in fact, you should do the most intelligent things to navigate the issue.
REID
And the question is whether or not the most intelligent things are to try to wrap ourselves in knots by pausing for, you know, massive committee groupthink, or to try to be as smart as possible in navigating the path and making sure that intelligence gets into the things that are building it. And, you know, like, people say, well, the Chinese are only racing because of our own race conditions. And I was like, well, no — many years ago, you know, Xi Jinping said by 2030 he wants China to be the dominant AI country in the world. This was before all of that. That’s not a reaction to us — that’s just how these things play out. So, I think the issue is to say, hey, let’s pay attention to what are major risks.
REID
Let’s try to generate ideas for how we navigate them. The ideas may not be perfect, but navigate them within the time envelope that we are kind of building them to. Which, obviously, you know, if you could wave a wand and just make everyone move two-thirds more slowly, you know, that might be the right outcome. By the way, it also means that many more people will die because the AI-driven medicines won’t be there, the AI, you know, doctors won’t be there, et cetera. But, you know, maybe that’s better. But that’s the kind of thought experiment that says, hey, what if I waved a wand? And, well, there is no wand.
ARIA
It’s a fun thought experiment, but it’s not going to happen. I think one of the areas where we can all agree that sort of magical super-intelligence would be really positive is healthcare, medicine, doctors. And you were recently invited to speak — actually, to debate — at a conference, and the debate was whether, in the future, doctors will even exist because of the rapid improvement and transformation of AI. The conference was under Chatham House rules, so we can’t play video, but I would love to hear your point of view. Do you think that, you know, doctors are going to exist in, you know, three, five, ten years with the rapid advancement of AI?
REID
Well, for people who read Superagency, they probably wouldn’t be surprised that I was arguing on the “doctors will continue to exist” side. And, what’s more, doctors will continue to exist — but not in the way people assume. It’s like, oh, they’re the emotional hand-holders, so, sure, the AI does all the thinking, and the doctor now goes, “So, Aria, how are you feeling? Does that feel right for you?” No, no — actually as thinkers, as diagnosticians, as experts. But doctors’ jobs will have to change a lot. We are already, in many domains, in a place where the majority of doctors, if they have an instinct that’s different than the AI diagnosis, are likely to be wrong — because they don’t have the compute capability of, “I looked at a trillion and a half words, I did an intense amount of synthesis on them.
REID
And I’m doing that retrieval, looking at databases and cross-checking things as I’m doing it.” However, that doesn’t mean that the AI is perfect. And, actually, in fact, I think part of what doctors as experts and diagnosticians end up becoming is the dance partners, collaborators, directors, tool users of AI to help get to the right outcome — both for translating what the patient actually, in fact, needs, helping the patient discover what they really need and how to articulate that, parsing the information that comes back. Like, one of the examples we used in the debate is to say, you’ve got somebody who says, “Well, you have this cancer, you should start chemo tomorrow.” And it’s like, well, actually, in fact, my life’s goal is to walk my child down the aisle in the wedding that’s happening in two weeks.
REID
I was like, okay, great, we will start it the day after or two days after the wedding, depending, and we will try to do everything possible to set up before then. And that’s the right way to understand this. Even though, from a medical-outcomes point of view, like, the right thing is to start chemo tomorrow as an instance. And, you know, part of it — you know, I’ve been advocating for years now that governments should try to get a medical assistant that’s available 24/7 on smartphones to all of your citizens. It’s an enormous elevation of human quality of life and, you know, is doable now in very straightforward ways — that you should use ChatGPT or your favorite frontier model, you know, Copilot, Gemini, Claude, etc., as a second opinion to cross-check things.
REID
I actually know of people whose lives have been saved by doing that because, again, it’s this diagnostician expert. But the thesis that it will completely replace doctors is — I think — incorrect on a number of things. One, yes, if you had to choose human or AI alone on a radiology film, you choose AI. But actually, by the way, it’s still better to have AI plus human, right? So, AI plus human is better. Then you say, “Well, but the economics won’t support that.” It’s like, actually, in fact, there’s always a role for AI plus human for something as urgent as medical care. So — now, by the way, in many areas, like impoverished areas, there may be no AI plus human — or very little plus human — because of the cost. But that means we’re massively improving those areas by having AI be there.
ARIA
Right.
REID
Look, how is AI trained? AI is trained through a lot of human expertise in terms of doing that. It’s one of the things that doctors bring in. It’s like, okay, you know, one of the more subtle ways to think about what AI is today — this is a little bit counter to some of the super-intelligence thinking — is that it’s an intelligent group of experts’ consensus opinion. It’s what it broadly does when it has enough data to do that. That’s when AI is broadly at its best right now. Now, that’s good for medical things, because that’s what you want your very first medical judgment to be. But, by the way, the consensus opinion misses things. And one of the things, you know, in talking to doctors, is, like, actually, in fact — it reads the whole chart, it reads 30 other parallel cases.
REID
It brings all — and it brings in questions and connections that a doctor doesn’t have the individual compute for. It’ll make it happen. And sometimes it’s that outlier case of recognizing that that’s actually, in fact, kind of really important. And that’s part of the reason why I think 10 years from now we will still have doctors and doctors deploying expertise. Now, the expertise will no longer be, “Well, I spent years memorizing everything and I am the walking oracle.” It’s the, “No, no, I am the expert thinker and navigator of this set of tools to getting great health and healing outcomes for people — working with them and working with the tools in order to make it happen.”
ARIA
Well, that’s what I was going to say. I think a generation of med students just breathed a sigh of relief. But if you are in medical school right now, what advice would you give them for what to focus on? Because their job is going to change.
REID
So, one, start using it. Right? There’s a general thing for everybody — medical students, journalists, you know, scientists, accountants, managers, software engineers, lawyers, educators — start using it. And, more or less, I think that if you’re articulating an opinion on it and you actually haven’t been using it — and I mean using it with some seriousness, not like, “Oh, I did a query once,” right? — you don’t have intellectual integrity in terms of how you’re thinking about it, however smart you are and everything else. Because it’s like a person who’s never driven a car saying, “Well, I have a certain point of view on how one should drive a car.” And you’re like, no, you’ve got to have some of the experience — some. So, I think that’s one. Two is the medical institutions themselves, which tend not to want to change.
REID
Like, for example, you begin to learn, like, oh, actually, in fact, when you’re doing coding, the combination of GPT-5 for reasoning and Claude Code for the detailed coding — that combination is the current Pareto optimum for all coding stuff. And you don’t really know that unless you’re actually, in fact, engaging with it in some kind of systematic way, talking to each other, and so forth. You know, where, you know, Claude is better at creative writing than, you know, ChatGPT. Gemini does certain kinds of science queries better than any other model. You know, Copilot is kind of better integrated with the set of things you’re doing in the enterprise. You know, all of these things kind of matter for understanding what the tool set is.
REID
And, you know, like, I think today, as a doctor — like, for example, I was saying as a patient you should use GPT or your favorite frontier model as a second opinion — doctors should do that too. Like, I actually think the question you should be asking is, if I’m not doing that — frequently because of this thing — do I have, like, 100% confidence in my diagnosis? Do I have 100%? It’s like, okay, well, what’s the cost of using, you know, kind of, Copilot, Gemini, et cetera, you know, as my second opinion? And the answer is, not high.
ARIA
Close to zero. Reid, thank you so much. Appreciate it.
REID
Always a pleasure.
REID
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.
ARIA
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.
GEMINI AD
This podcast is supported by Google. Hey folks, Steven Johnson here, co-founder of NotebookLM. As an author, I’ve always been obsessed with how software could help organize ideas and make connections. So, we built NotebookLM as an AI-first tool for anyone trying to make sense of complex information. Upload your documents, and NotebookLM instantly becomes your personal expert, uncovering insights and helping you brainstorm.
*Editor’s note: The original audio incorrectly says “China” alongside India and the UK. It should instead say “Japan,” so we’ve updated this transcript to reflect that.

