This transcript is generated with the help of AI and is lightly edited for clarity.
ARIA:
Reid, delighted to be here with you today. Let’s jump right into some AI questions. So Jensen Huang recently said that AI is a five-layer cake. The idea is that people talk about AI like it’s just a chatbot or a model when it’s really a full stack: you have energy, chips, infrastructure, models, and applications. And his argument is that every flashy AI application at the top pulls on everything beneath it.
All the way down to the power plant. And so it’s not surprising that Jensen, of all people, would be saying this, but his comments suggest that the AI race may not ultimately be won just by whoever has the best app or even the best foundation model. It may be won by whoever controls the deepest layers of the stack.
Compute power, data centers, and the [industrial base] required to support all of it. And so in other words, what looks like a software boom may really be an infrastructure and maybe even geopolitical buildout in disguise. And so I would love to hear from you. Do you agree with this framing? And when you hear AI described as a five-layer cake, does that change how you think about where the real power in the industry sits?
REID:
Well, obviously Jensen and NVIDIA have been doing amazing work. And, you know, one of the things I think Jensen is very good at is arguing his position very strongly. So it’s like, no, no, what’s most relevant is the people who are producing the chips, right? Let me tell you why. And, you know, by the way, the chips are super important.
So I agree with the five-layer cake. There’s actually even some additional complexity around data and all the rest. And I think the fact is that when you think about geopolitical power, compute capabilities and compute infrastructure [are] probably now highly relevant to geopolitical power. And, you know, people think, oh, do I have the supercomputer to train a model?
And digital sovereignty is one part of that. And I think that’s potentially navigable. Like, it doesn’t have to be that each significant country has its own, you know, $100 billion computer for training its own model. But you will still need compute for inference. And you’ll need some kind of digital sovereignty, in the sense of not having models that could potentially be rug-pulled from your nation’s industries, national security apparatus, etc. That’s, you know, kind of part of where the work is.
So I think it’s absolutely right that this is geopolitical. Now, as for the thought that that’s where the real economics are: it’s a place where real economics are. But it’s a little bit like arguing, well, the internet’s going to be a geopolitical thing, and so the ISPs are the real point of control. And then you look around and go, no, not really, not in terms of how this operates.
And I’m not suggesting these are exactly the same, because among other things, the ISPs are much more, you know, commodities, and they kind of hook up a whole bunch of things. But that’s the reason why the most foundational part of the stack is not necessarily where the most value or power accrues. I do think that all five layers he describes are actually, in fact, pretty important.
And I think that historically, when you get to where the most economic value ends up accruing, the area where there is the most economic power, these things tend to be closer to the top of the stack. So, for example, Google, right, makes its money from AdWords.
And yes, it powers the whole thing by having a very deep computational stack. It’s kind of the most complete version of the five-layer cake, you know, from the search realm, and it’s trying to do that in the AI realm. But it’s not necessarily because, oh, we have all these lower-level chips that this particularly plays out.
So I think there’s real power at every level, much more so than with the ISPs. And I think it’s an important thing for countries and industries to think about all of these, plus additional things like data. Data is one of the areas where there are a lot of vague claims, and it’s actually going to play out in particularly interesting ways.
ARIA:
Well, obviously, this is not an investment advice podcast, but if you had to choose only one layer of the cake to invest in, which would you choose?
REID:
Well, it’s a complicated answer, in part because I’m a software guy, so it’s like, you know, what do I do? Yes, for me, that’s the one part. But also, by the way, it tends to be, well, what kind of investing game are you playing? Like, if you’re investing in startups, then startups that have to do with compute, with power, with data centers are all, you know, hazardous investments from the viewpoint of capital intensity, ease of failure, etc.
So it can be done. There are good things that happen there, but also tons and tons of very [high-cost] failures. One of the benefits of software is that it tends to be more capital efficient. Right? It’s a little tricky when you get to, you know, AI model construction, which is closer to the high-capital ways of doing it.
But that’s the reason why I tend to be in applications and models and, you know, other places where there’s high capital efficiency in how you play them.
ARIA:
Well, so let’s move from the economic to the more philosophical. Many of our listeners probably saw that The New York Times published a blind quiz asking readers to compare human writing versus AI writing. And the result definitely hit a nerve online. The quiz was taken by more than 86,000 people, and readers slightly preferred the AI passages overall.
And what was interesting is the reaction went in two directions. One camp said this proves that most writing people encounter every day is already generic enough that AI can beat it. The other camp said the quiz misses the point: short, decontextualized passages are exactly where AI performs best, and the real value of human writing is its voice, its investigative reporting, its structure, its taste over long arcs, long novels, writing style.
And so, what do you think people are actually reading into these human-versus-AI writing debates? Why does this strike such a nerve?
REID:
Well, I’ll answer your direct question first, and then I’ll go to the rest. Look, it strikes a nerve because people are fearful of replacement in multiple ways: fearful of replacement for economic jobs and safety and security, and fearful of replacement in, you know, the notion of purpose, in terms of what my significance is, what makes me as a writer, or me as a human being, unique.
And so all of a sudden you go, oh, wait a minute, you know, what’s going on here? That whole range of things is why people have such intensity in this discussion. And by the way, it’s one of the reasons it’s important, because, as you know, I kind of think of myself as a techno-humanist.
So I think that what is human is super important. But I also think of us as homo techne, evolving through our technology. It’s everything from fire and, you know, agriculture, which allowed us to aggregate in cities, all the way up to machinery and power and books and all the rest.
And that’s how we evolve, and it changes the way we think of ourselves, the way we think of the world, the way we do epistemology. You know, the whole world changes through the lens of a microscope, as a kind of instance. And AI is, I think, another one, which of course poses the greatest challenge so far, because we think of ourselves as the only things that have agency as humans, with some asterisks around animals and maybe some blindness around corporations, you know, collections of humans.
But the reason it’s superagency is because it’s like, oh my God, this new thing challenges our agency in a way that’s more fundamental than the other technologies we’ve had this discussion about, because maybe it’s somewhat autonomous, and maybe it does things that we previously said were only human, like writing, you know, as an instance of this.
And that’s why the navigation is really important. And that’s part of the reason why we do this podcast, and why I wrote Superagency and [Superagency matters]: to say, here is how we navigate to stronger agency through this path. And it doesn’t mean things don’t transform; it doesn’t mean that previous things you really liked aren’t now different in the age of AI.
And I think The New York Times did a clever thing with this blind quiz, because I think both camps are right. The camp that says, look, a lot of writing is already pretty generic, and by the way, AI is already good enough to do it: that’s pretty straightforward.
And by the way, everyone who uses these, you know, AI agents to, for example, produce a custom equivalent of a Wikipedia page, an answer or a report or something else, goes, this is perfectly good for the kinds of things that I’m looking for in my search. And by the way, short form makes it even easier.
I mean, this is one of the things I missed when Twitter was created. I was like, this is dumb. And it’s like, no, no, no, actually, people want short form, because in short form no one can be particularly smart. It’s very hard to be pretty smart, and everyone looks sufficiently banal. And so it goes across the whole thing.
And that was part of the reason why it was something in addition to blogging, where the deal was, no, you had to write something, you know, substantive. So I think there is a lot of short form, and AI does that. And by the way, I think AI does some long form perfectly well too, for different contexts.
But that doesn’t mean there isn’t... like, I can still tell there’s a whole bunch of writing tasks where, yes, it’s more expensive, it’s more challenging to do, but humans do a much better job. Now, the challenge will be which areas will be economically viable for the humans doing the much better job versus the automation of the AI.
And that’s kind of the brass tacks of it. It’s like, well, you know, I used to hire human writers to write the manual for the product, and it’s like, well, you’re probably not going to have to do that anymore. And it doesn’t mean there won’t be a human who is, you know, iterating with the AI to get the writing of the manual done the right way, or done in new or better ways, because it’s now this more efficient way.
But it’s like, I now no longer have to pay a writer, you know, at whatever the going market rate for writers is, in order to do that. And that’s part of the last underlying thing in this issue.
Now, as for Superagency, my hope and expectation, and part of what we’re trying to shape too, is that there’s actually still a lot of a role for writers, not just because there’s a bunch of stuff where AI falls flat today. Just try to get it to write good dialogue, as one example.
But there’s a lot of other things, including all the questions of, you know, reporting, lived experience, believability. Do I want to be hearing from a human on this particular topic versus, you know, the canned synthesis of AI, etc.?
But it’s also the question of which of these areas are going to be ones where we go, this economic model works for the production and the consumption.
I think people will begin to realize that it’s actually, in fact, useful and even good in some ways. Like, here’s the thing: the canary that I’ve been tracking for where job replacement will really happen is customer service, as you know. Yep.
And I think we’re still in the early days of that here in ’26. From what I see, companies are engaging companies like Sierra, Parloa, and others in customer care and getting good results. And I think they’re expanding their footprint with it. They’re still working out a lot of different things, and I think the businesses are doing quite well as part of it.
Now, what you really want to get to is where the customer says, please put the AI on, right? And they’ll do that through experience, because they’ll go, well, wait a minute. Actually, in fact, the AI that directly interfaces with all the stuff is so much better than a human who doesn’t really understand all of it, who is kind of stumbling over themselves trying to follow a database script, probably outsourced to the Philippines or India because it’s much cheaper, and who is even a little bit more out of the context.
It’s like, well, that’s better, right? And once people begin to get to, oh, AI is better here, then they have areas where they prefer it. And this is part of the thing where I’m trying to get people to own their own agency. Like, obviously, if you have access to a doctor, you should always talk to your doctor for these things.
But by the way, if you’re talking to your doctor and the two of you aren’t using frontier models to second-opinion what you’re doing, like, on the spot, it’s bad for both of you. You should be wanting the AI to do that, because it helps you in really critical ways.
And by the way, if you don’t have access to a doctor, if it’s like, oh, the doctor is at a clinic that’s a four-hour drive away, or I don’t have one whatsoever: well, start with the AI to know whether or not you should get in the car and bring your kid to the clinic.
ARIA:
All right. So we talked about economics, then sort of humanity and philosophy. Now we’re going to end with, still AI, but politics. So Alex Karp recently warned the tech industry that it may be headed toward nationalization.
So his argument was basically that tech companies are simultaneously saying AI is going to wipe out huge numbers of white-collar jobs, while also refusing to align with U.S. national security interests. And if that’s true, they shouldn’t be surprised if the government ends up moving toward some kind of nationalization of this technology, because it’s sort of a direct threat to our way of life.
When Alex Karp said this, he was obviously poking a bit at Anthropic. We all know sort of their fight with the State Department recently and the Department of War. But people took the idea further than that because AI is starting to look less like a normal consumer product and more like critical infrastructure.
It has implications for war, intelligence, labor markets, industrial policy, sort of all of those things at the same time. And so with this technology becoming foundational enough, maybe it makes sense for governments not to treat it, you know, just like a normal private market company.
So when someone like Alex Karp talks about the possible nationalization of technology, how much do you think this is just rhetoric? He wants clicks, he wants headlines. And how much do you think we should actually pay attention to what he’s saying?
REID:
In this kind of age of disruption, with everything going on, people might do a lot of stupid, foolish things. So you should always pay attention, especially when people with some, you know, technological knowledge and position are also making what I think are incorrect and clumsy arguments.
Now, part of it is you get to this question of, you know, there’s this old parable about the wise people and the elephant. You know, this one finds a trunk and that one the tail, these two feel different legs, that one a tusk, and so forth. And part of what you have to do is kind of look at AI as the whole elephant.
And you go, look, there’s a national security thing; you know, one of the legs really, really matters, of course. And so it’s like, well, if you guys don’t converge to what’s best for the national security thing, then, you know, you could just get nationalized. A kind of simple argument.
The question is, okay, how do you look at the whole elephant? Because you don’t get the national security without all of the economic power and everything else, and going and nationalizing an industry is a sure way to say: stop innovating, right? Don’t build anything more here, etc.
And when the thing that matters most is speed, innovation, and compounding into the future, that’s the definite way to kill the golden goose. Or, you know, it might be more like dumping radioactive acid on the golden goose as we’re doing it. So I think it’s an unwise statement, thought, etc.
Now, that being said, I didn’t say that it isn’t important for national security, and for countries to think about, like, okay, we’ve got this fundamental technology; how does this play into our national security interests? And companies need to attend to that. They’re located in countries, and those countries provide their national security.
You know, fortunately, so far the AI leads have come from countries that have an interest in global stability, although in the last year and a half, with the current administration, you know, the U.S. seems to be less interested in global stability than former American administrations.
But so far it’s, you know, the U.S., China, and bits of Europe that have the interest in global stability. It’s not Russia. It’s folks who are, you know, kind of playing for, we should have global stability and everything else. And I think that’s an important thing.
And I think tech companies, like every tech company I talk to, are actually bought into being American companies, participating in global stability and so forth. And one of the things that was way under-commented in the Anthropic discussion, from what I can tell, is that Anthropic’s point was, you know, no mass surveillance of American citizens, which is, by the way, just saying: be legal.
So we can almost ignore that, because it’s just saying, we’re operating legally. You can write “obey American law” into any contract in America, and it’s like, okay, that’s fine; that sentence doesn’t even need to be there, right?
And then the other one is autonomous lethal weapons, where the point is: our technology is not ready for it yet. Right?
And so you’re like, okay, the provider is saying, our technology is not ready for that yet, so we don’t want to be doing it. And by the way, in a country that’s free, we can say, hey, we can provide services or not provide services. And if you want to go take services from someone else who says their technology is ready for, you know, AI autonomous weapons, you can do that.
We think that’s super dangerous, but we’re not standing in your way of doing it. We’re not saying you can’t do that.
And then, of course, this all becomes, you know, a kind of reframing exercise, which is, by the way, value destructive. It’s anti-American; it works against serving the American interest in terms of what this is.
And so it’s like, look, the whole threatening posture of this is, I think, frankly, against America’s interests, against what we should be doing. We should be going, great: for example, Anthropic with Claude Code has the best coding agent, which we’re all using.
That’s awesome for us as Americans, so how do we navigate this? Because, by the way, American companies do not have to do whatever the Department of Defense tells them to do, right? Especially when we’re not in a time of war, because Congress hasn’t declared war, right?
The issues around being in a time of war arise when Congress declares war, and Congress can then tell companies, you’re now behaving as if you’re in a time of war. Right? That’s part of why we have Congress and the declaration of war.
And so it’s like, okay. Now, the underlying thing that’s really important here is that the technology is important geopolitically, for power, for national security. And I think it’s really important that we address that.
But by the way, even in this recent, you know, brouhaha, the Anthropic people think that too. They’re like, how do we help America in this position? So it’s only the framing of “you have to do exactly what I tell you to do,” which is not land of the free, and, I suspect, is not even home of the brave in this.
And so I think it’s important for the tech industry to say, hey, look, yes, we are providing, we are partnering. I mean, I think, you know, Microsoft has always done a really good job of: we collaborate with the U.S. and Western democracies for a well-ordered global society. That’s what we do.
ARIA:
Well said, Reid. Appreciate it. Thanks so much.
REID:
My pleasure.
REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.

