This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I am Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way.
ARIA:
Typically, we ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
Reid, great to be with you today—actually, after we were just in person yesterday. I feel like everyone is talking about this new MIT study. It said that 95% of these AI pilot programs at companies don’t actually lead to the scaled impact we’re looking for. And they argue that it wasn’t necessarily that the technology wasn’t there, but that there are organizational hurdles, people aren’t adapting, there are internal policies. And so the uptake—especially for Fortune 500 companies, or for big companies who are used to doing things a certain way—is going to be much slower. So my question for you is: what do you make of this? Does this high failure rate mean that AI adoption is going to slow down across the board, especially because some organizations are still on the fence about AI adoption?
REID:
Well, let’s start with a couple of very broad themes—surprising to everyone, I know, that this is the way I would open up my answers. The first is that a key part of the AI revolution, the current one, is that it’s scale compute with scale learning systems, on scale data, done by scale teams. And then it’s scale adoption. The societies, the industries, the companies that adopt at that scale—and by the way, blitzscaling was obviously going to play in here as well, in time—will be the massive beneficiaries. And they’ll be the beneficiaries the same way that Britain wasn’t the country that invented the industrial revolution, but embraced it early. And that’s part of the reason why this relatively small island had a global empire for centuries.
REID:
And I think that’s part of the adoption thing that is really key. Now, it doesn’t surprise me that most traditional enterprise companies say, “Well, the way we adopt technology is we assign a group of people—call it three to five or something—and we buy a pilot program and we test something.” And then we go, “Oh, look, that’s…” And that’s not really working as a format, because just as it’s a transformation of how individuals work, it’s a transformation of how companies work as well. And when I glanced at the MIT study, I was like, well, the next study that should be done is a compare and contrast with startups. Because my guess is that 95% of startups are finding great acceleration and are integrating it, and all the rest, because they’re building their work processes from the ground up.
REID:
And it is one of the things that I love about the work I do as an entrepreneur, a technology inventor, and an investor—in Silicon Valley, at Greylock, and all the rest of this—because of this exact thing. And—as you know—one of the things that I also say to every individual is: if you are not fundamentally discovering something by which AI helps you do your work better today, you’re not trying hard enough. It’s part of the reason why we love all the stuff that Ethan Mollick and a number of other folks are doing. And why we wrote Superagency, to try to get this message out there. And why we do this podcast! All of that is: engage and find the ways it’s helping you. It doesn’t mean it’ll help you in everything. It doesn’t mean it’ll take over your entire job. Actually, as of today, AI can take over very, very few jobs entirely. But for every human being who is using language in how they work—if you don’t speak at all, then maybe AI is not ready for you, but it still might be—if you’re using language at all, AI is helpful.
ARIA:
Right? I mean, you often say that for any problem we have, technology is somewhere between 30 and 80% of the solution. And I wouldn’t say this is a problem—this is: how can we enhance productivity and, in this instance, do our work lives better? Do you think more of the improvement in AI use and productivity over the next, let’s call it, year or two is going to come from, A, advancements in AI—like actually the technology—or, B, advancements in how humans and organizations use it and integrate it?
REID:
Well, actually, you’re critically going to need both. But one of the things that exists today is an underuse of the capabilities we have. So part of the answer to your “A or B” question is: well, we’ve got a bunch of capabilities we’re not actually using. And if we don’t get the deployment, then the capabilities won’t make a big difference. Now, that being said, these play out on different timescales—and the timescale of this one will likely be much faster than earlier technology adoptions. Because even with the personal computer or the mobile phone, the tech gets built, and then eventually it gets deployed into things, and there’s always a lag of deployment. Sometimes that lag of deployment is very, very short, and sometimes long. I think part of the reason why people get surprised about the AI one is because there’s so much drum-rolling about how transformative it is.
REID:
And so they think you just turn it on and it starts working, and actually, there’s a whole bunch of things [that need] to happen. I mean, this is among the reasons why there’s a whole range of great technology investments in AI—whether it’s Greylock or others—in terms of doing this, and why that building of applications is worth it. And I’m not of the belief that it will just be, “Well, once we have the one model, that one model will just be doing everything.” I think there’s a lot of fabric that goes into this. And what’s more, the best model will undoubtedly be extremely expensive to run. And a bunch of this gets to the inference compute side: how does it become the case that intelligence gets added to everything with the same fluidity that electricity gets added to everything?
REID:
And it’s just the electricity of intelligence, powering and upgrading everything that you’re doing. And I think that’s one of the things that is so fundamental to what this revolution needs to be. So adoption is a central part of it. And actually—as you know from various conversations you and I have had—it’s one of my worries about where democracies can screw themselves up, because a democracy could create an impedance to the adoption of these technologies. I think the leading democracies with the leading impedances will probably be the Europeans, because of their AI Act and all the rest. And it’s one of the reasons why I try to help them in various ways, to say, “No, no, don’t fumble the cognitive industrial revolution, because the adoption will actually be very important.” And so that adoption really matters.
ARIA:
So, perfect segue, because I wanted to talk, actually, about the global scene again. Recently, Donald Trump reversed policy and said that NVIDIA could sell their chips to China, potentially in exchange for 15% of the resulting Chinese revenue. But then the Chinese government told their companies—like ByteDance, Alibaba, Tencent—that they had to suspend their purchases of NVIDIA chips because of data security concerns. And so, according to The Information, DeepSeek—which we know is one of China’s leading AI developers—has begun training some of their next-generation models on Huawei’s Ascend chips, which is a shift away from NVIDIA. And while DeepSeek still uses NVIDIA for its largest models, the partnership with Huawei signals a strategic turning point, where China is trying to build a self-reliant AI ecosystem that can compete globally, even as U.S. companies like NVIDIA warn that the competition has undeniably arrived. So that is what The Information has said about this global back and forth. My question for you is: does this DeepSeek pivot towards essentially their own chips—towards Chinese chips—away from NVIDIA mean the end of NVIDIA dominance? Does this further split the East and West in terms of technological power?
REID:
Well, part of the reason that chip exports to China were constrained by previous U.S. policy was a combination of economic and national security rationales—saying, “Hey, we should get the appropriate benefits to both U.S. economic interests and U.S. national security interests.” And the first thing is, it’s completely incoherent to say, “Well, we’ll undercut those things by just taking a 15% tax” for the U.S. Treasury. Like, it doesn’t have any intellectual coherence. All the people who are theorists about American business, and trade success, and economic success should abhor this. Now, one could argue that one should allow a certain set of chips to get out. Like you say, “Hey, the most current chips we’ll hold onto, and the other chips we will then more broadly provision.” In which case, we should just do that.
REID:
We shouldn’t be imposing a 15% cut on doing it, because there’s no particular reason to. It’s like, “Well, if we charge 15%, then we don’t have our national security concerns anymore.” It’s just really quite bizarre. But, by the way, if you’re forking the availability, then you’re creating incentives for the Chinese to accelerate their own chip industry. I do think that the chip industry is a strategic power—a strategic capability—on the level of nuclear or energy. It’s one I think we fumbled over the decades in the U.S., and I think it’s super important to regain it in various ways. And of course, as we put on pressure to actively limit the Chinese, that creates an incentive for them to build their own, which could then lead to a decoupling, and could ultimately lead to them solving these problems themselves.
REID:
And part of the depth of what’s happening with the Huawei chip is that there’s actually a different mathematical model underlying it. And they say, “Well, maybe that’ll be different, or maybe that’ll be better, or maybe that’ll be…” And that creates all those risk factors. This is a complicated policy area. And I do think there is a very strong technological competition between China and the West. I do think there’s a strong economic competition between China and the West—and between China and the U.S. in these cases. And I think that competition is good. I think cooperation is also good. And decoupling—even while having competition—is, generally speaking, one of the things that can actually lead to further conflicts and further issues. One has to be very careful about those things.
REID:
It’s a strategy that can be executed, but with competence and care—not by governance-by-tweet or the other kinds of things that we are too much in the weeds of right now. And I’m, generally speaking, a very strong voice in favor of the West’s economic and national security concerns here, but I don’t think decoupling is a good idea. Now, the Chinese, I think, are very smart. I don’t think they have a data security concern, but I do think part of what they’re trying to figure out is, “Well, okay, we do not want to be dependent upon a supply chain here that can make a difference”—the same kind of thing that we in the U.S. recognize we don’t want to be dependent upon.
REID:
And I think part of what the current federal government is learning is, “Well, we have dependencies upon the Chinese supply chains too.” And I think part of the background story of this—which you could see from various slips in [Howard] Lutnick’s and others’ statements—is that this actually has more ties to rare earths and other things that we have as a concern, with the 15% tariff maybe papering over how this operates. This is the kind of thing where quality governance and quality activity matter. And I think we’re being a little bit—well, I’m being charitable—we’re being incompetent about how we’re navigating this stuff. And I think we should up our level of competence on this.
ARIA:
Absolutely. Saying one thing that has different implications, saying they’re doing it for one reason when it affects another thing—and then also just not sticking to their capitalist roots, I would say. Some news that I was really excited about from last week came from OpenAI. Obviously, you led the first investment round in OpenAI, and you were on the board for a number of years. It’s a company that we’re really excited about for all the good that it can do on the AI side. And last week they announced a $50 million fund to help NGOs use AI for education, healthcare, economic opportunity, community organizing—all of the things that you and I often talk about that we think AI can be really fantastic for. One of the things I liked about this grant program in particular is that they said they were launching it in early September, it closes in early October, and they’re going to give the grants out by the end of the year. So it is quick: $50 million out the door to organizations—both old and new—that can really use AI to create better wellbeing for all Americans. So, is this something that you think all of the AI labs should be doing? Or should this be governments? Should this be foundations? Where should the responsibility to use AI for social good lie?
REID:
So, probably not surprising to you: in these big things, I tend to be inclusive. So the short answer is everyone. Yes, the frontier labs should do it. Yes, it’s a great thing showing OpenAI’s leadership in being a humanist organization, caring about what happens with human society and human individuals. I think everyone in the commercial world should be doing this sort of thing. But I think it also means that governments, and NGOs, and all the rest should also, of course, be doing this. Because what’ll happen is we’ll be going through rapid news cycles—just like we were talking about with the MIT study—of, “Oh my God, everything’s going to change,” then, “Wait, nothing changed in the last three to six months. This is all overblown, it’s all fictional,” et cetera.
REID:
And what frequently happens here is that the discourse over-predicts the next one, two, three years, and under-predicts ten years. And so it’s like, look, the reason to get in, to be experimenting, to be doing things—whether as individuals or otherwise—is because that’s important. And so, for example, one of the things—just like we discussed earlier on our podcast—is what AI is going to mean for education in various ways. AI’s impact on education is going to be very important. Getting it into deployment is very important. Obviously, there are going to be a bunch of different democratic and other institutions that are going to resist that—which will be bad for American children—because they don’t want to change their work processes. And you even see universities doing that; it’s not just a K-12 thing. Everyone’s like, “Oh, I don’t want to. I’m used to how I teach my syllabus, and what I do…”
ARIA:
Sure, “I’m used to testing. I have my tests, I have my curricula. Why would I change it?”
REID:
Yes. And it’s part of the reason I love what Larry Kramer at the London School of Economics is doing. He’s providing the funds for professors to restructure their curricula, using AI as part of it. It’s great leadership by them. There are others as well—Michael Crow at ASU is always doing amazing things. But I think it’s really, really important to do this stuff. And I think it’s great that OpenAI is establishing this and saying, “Hey, we have partial responsibility to help here too.”
ARIA:
Awesome. So another study came out last week, led by one of our great friends, the researcher and professor at Stanford, Erik Brynjolfsson. He and his team looked at ADP data from thousands upon thousands of employees in the U.S.—they partnered with Anthropic and used Claude data to do this—and saw that entry-level jobs, in particular for 22- to 25-year-olds, dropped 16% in the professions you would say are most vulnerable to AI replacement. So customer service, computer engineering, things like that. Whereas when you looked at entry-level professions that weren’t vulnerable to AI—nursing, in-person care, trucking—they actually didn’t see a drop. So, they looked through a lot of reasons this could be, and their hypothesis is that AI is directly contributing to the 16% drop in jobs for 22- to 25-year-olds.
ARIA:
So, my question for you is—this is not good. This is what we’ve all been worried about: “Ah, is it going to hit the entry-level folks first, because we don’t need those entry-level folks now that we’re substituting in AI?” And their study evokes canaries in the coal mine: is this a sign of something worse down the road? Could this just be an aberration in the data, or is this something we need to prepare for over the coming years—that AI will actually negatively affect that very first rung on the career ladder?
REID:
Well, Erik Brynjolfsson always does great work, and I’m on the advisory board of his Digital Economy Lab. So I’ll just flag that, given that I’m about to say a bunch of very positive things about his work. And I think it’s great, solid work. Now, I’ve personally thought that the first place we’re going to see a bunch of job replacement versus job transformation is in customer service. It’s one of the reasons why I’ve invested in some customer service companies. I think it’s job transformation, the same way that the move from agricultural to urban work in the industrial revolution was actually, in fact, a good transformation. There may ultimately be a different form. Maybe some forms of customer service will follow accounting: everyone thought that when the spreadsheet was created, accounting would simply go away, and instead it transformed into scenario analysis and other kinds of detailed risk planning, and mitigation, and economic planning.
REID:
And maybe there’ll be a similar parallel where customer service becomes more of a strategically planned thing—where it’s customer engagement, and has a bunch of different things, and there is a new set of human jobs that go into it. Any place where you’re trying to get a human being to act like a robot, ultimately, the robot will be better. And customer service jobs tend to go through these weird scripting systems and all the rest. One of the things we’re seeing, anecdotally—and I don’t know what the percentages are—is customers calling in to human customer service agents and saying, “Please stop—get me away from the AI and let me talk to a human being,” because the human being is trying to follow a script. And they’re like, “Ah, this is so fucked up. This must be an AI.” Even though it’s like, “No, no, I’m a human.” It’s like, “No, that’s what the AI would say!” So it does not surprise me at all. Now, the computer engineering one was interesting to me. I still believe very strongly that there will be essentially unlimited jobs in computer science and engineering. In part because I think all human knowledge work, all human information work, will have a software copilot. And so people who also think in terms of how computational programs work will be naturally enhanced, and I think there’ll be a much broader base of it. Now, in the first part of it, it may be, “Hey, entry-level coding jobs—we don’t know how to engage them yet.”
REID:
One of the other natural places to start using it is within engineering. All good, highly technical organizations are now actually, in fact, going deep on the ways they can use AI copilots, and other amplifications, in order to write code. And that may be getting them into an unknown era. Now, my general advice for organizations, and also for students, would be: really embrace the current, boldest edges of vibe coding and all the rest, and use that as something you bring to corporations, in terms of how you operate. Because, by the way, organizations are slow to adopt—that’s something you can bring, and organizations should look for it. This is part of my second book, The Alliance, in terms of hiring entrepreneurial folks. And they should be engaging with that. Now, the fact that they may not be doing it yet—it may be a slower transition—maybe that’s what’s going on. Maybe that’s the hypothesis that Erik is uncovering. I don’t think it’s a law of physics that it plays out that way. And I’m still bullish, but it does cause me to go look at it, to recheck my theories of the world and what needs to happen in order to get there. I always love quality work from Erik and others in these fields.
ARIA:
Awesome. Reid, pleasure to talk to you. Thanks so much.
REID:
Always a pleasure.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Thanasi Dilos, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.