This transcript is generated with the help of AI and is lightly edited for clarity.
ARIA:
Reid, it is delightful to be here with you today. No surprise, we are going to be talking AI. I saw someone on Twitter today say, gosh, I just haven’t been able to read enough on AI. Does anyone have any takes? So we have seen many takes over the past days and weeks, and there’s actually a new HBR piece, and it argues something that feels true in a lot of knowledge-work teams: that AI doesn’t really save time. It argues that, sure, maybe you can do things faster, but it’s raising the pace and the volume of work.
ARIA:
So perhaps it’s not saving you time, it’s just increasing expectations. And so there’s more drafts, more iterations, faster turnaround. You’re sort of having to review, you know, six different drafts with your manager instead of just one. And I think, on the other side of the coin, these could be drafts that are really high quality and you’re doing really good work. But then in other instances, you are vibe coding something, and perhaps the, sort of, more experienced developers and engineers have to spend a lot of time sort of fixing or looking at the security risks by things that were created by their colleagues. And so in some ways it’s super high quality work. In other ways, perhaps not as much. But the question is that the promise of AI, among many promises, was certainly a time saver.
ARIA:
And so what do you think? Is AI saving time, and how quickly do you think it’s changing, sort of, expectations in the workplace?
REID:
So I think it’s just beginning to really change expectations. I think most of this stuff is overblown, and people are taking small signals and then kind of generalizing them to the entire industry. The vast majority of people, even in the information-work industries, are in fact using these tools for much less than the current capabilities enable, let alone the capabilities that are being built toward. And so, given how much underuse of the current capabilities there is, it’s easy to claim almost anything: it’s a time saver, it’s a time waster, it’s a time accelerator, or any number of things about quality of work, working together, the number of projects you’re doing, all the rest. It’s all very easy to claim.
REID:
So it’s much more useful to kind of say, hey, look, here is the scope of all the different possibilities when you put this together. Maybe we’re seeing some of them. You know, one of the problems with academic pieces is, “we looked at six things, and here’s what these six things said,” and you’re like, okay, that’s tiny in the scheme of the whole thing. It’s nowhere near statistically significant. Because, for example, if I looked at 10 startups of under 10 people and how they’re using vibe coding, the answer would be like, oh my God, no one’s doing anything other than vibe coding anymore. And then you go to a 100-person startup and you go, well, 15 people are doing that and a bunch of others aren’t.
REID:
And it’s only like, you know, 10 of them being engineers and five of them being non-engineers. The marketing people aren’t doing that yet, even though they could in various ways. So that’s the accelerant. Now, is it a time accelerant? The answer is absolutely yes, and you can see it, right? You could do it in any domain you want. You could say, hey, I’m doing financial analysis, I’m looking at a possible investment in a business, as I have done myself. And I can do a prompt and say, give me what you think are the relevant questions for due diligence. Give me an analysis of possible competitors. What are possible substitution products?
REID:
And even using the highest-end product and the most compute, in 10 to 15 minutes you get a bunch of work that would have taken a person many hours, maybe even some days, to do. Now, it wouldn’t be exactly what they would have done, because some of it will be a little off and unprioritized. But you’ll get a set. And so that could accelerate you. Now you could say, well, but I’m still going to spend the three days doing this. I’m not going to have gone from three days to 15 minutes. Great. Well, that acceleration gives you a quality assist, if it’s not a speed assist, in terms of doing it.
REID:
But the speed-versus-quality tradeoff is a classic work question: all right, you know, cheap, fast, good, pick two. So which way are we going to configure this use of AI amplification for the task we’re doing? Now, if it’s something where I’m, for example, going to invest millions of dollars into a company, I’m not going to go, great, I did all my due diligence in 30 minutes, I’m done. I’m going to go, great, you’ve delivered to me in 30 minutes what would’ve taken three days. Maybe instead of taking two weeks, I’ll now do seven or eight days. I’ll use it as a competitive thing against other investors, to be done first and to make a term sheet offer first. Speed.
REID:
But I’m still getting that same quality in, because it matters. Or maybe I’ll go, hey, I’m looking at this area myself, say, for example, it’s a fusion investment or something else. And it isn’t an area where you get the dogpile, like, say, AI coding investments. And you go, I’m still going to use the same two weeks. I’m just going to have the quality of the analysis be twice as good. Right? And of course, when more outputs are being generated, that causes all kinds of things. Because one of the things AI certainly does is let me produce a much higher volume of output. It’s definitely speed, and in some cases definitely quality.
REID:
Quality can be uneven, which is one of the, you know, questions that occurs in all of these things. And people say, well, but the quality is getting so much better so fast that the unevenness will all go away, you know, tomorrow. And you’re like, okay, that’s possible, right? It might happen. But here’s a nuance. You’ve got competitive games: investing, selling, building product, shipping product, supply chain stuff and all this. Now, why do legal contracts look the long, bulky way that they do? Is it because people are really optimizing for the quickest time to get a contract done, minimizing lawyer spend? No, of course not. What they kind of do is say, look, if we’ve got X as a legal budget, let’s make sure that we’ve covered all of the corner cases.
REID:
Let’s make sure that we’ve done all this stuff. And part of the reason why you spend a whole bunch of money on lawyers is that it’s a competitive thing: the other side’s also spending on lawyers, so you have to kind of be at parity and all the rest. And you kind of go, okay. The most natural thing for people to think is, oh my gosh, lawyers are just a cost of doing business, and we’re now going to really reduce that cost. Instead of three weeks to a contract, we’re going to do one day and then the contract will be done.
REID:
And you’re like, that’s very unlikely, because the reason we got to these fucking monstrosities of contracts is that dynamic between two players kind of, you know, trying to out-lawyer each other and manage all the different risks and address things in advance and all the rest. And so what I think is going to happen with AI is not that suddenly we’re going to go, oh, we’re doing all our contracts in a day. Maybe it won’t be three weeks or six weeks now; maybe it’ll be two weeks. But the contracts are probably going to be 5x as long, because both sides are going to be using AI to generate, analyze, suggest clauses, read clauses and so forth. It’s going to be just a lot thicker. And so it isn’t like lawyers’ jobs are going away.
REID:
Because, by the way, in a classic version of this situation, you go, well, I’m just using ChatGPT. And you go, well, I’m doing better, I’m using Claude; or I’m doing better, I’m using Gemini, or I’m using Copilot, or whatever. It’s like, okay, if that’s all you’re doing, then you don’t have a differential edge. So you’re gonna be like, what are the ways that I get differential edge in this? And typically it’s, I try to hire better lawyers. And it’s like, well, okay, which ways are we gonna be doing this? And that’s gonna be the kind of work process. Now, you know, it’s very easy to make calls where we say, look at that one, it produced a lot of things much more quickly. Great, you know? That one produced a lot more high-quality work. Great. That happened.
REID:
That one produced a whole lot more volume, and the volume actually, in fact, sucked up a whole bunch of team-management time. Yep, that’s going to happen too. And, like, for example, AI content generation on the Internet: it’s going to be like, oh my God, there’s so much of it. Some of it’s going to be really great and a lot of it’s going to be kind of schlocky. But, by the way, news flash: there was a lot of schlocky content on the Internet even pre-AI. It’s a question of what the demand and the interaction are. So all of these will be part of the work process.
REID:
And it is good. It’s not one puck moving; it’s millions of pucks moving, and the question is what that configuration is and what you should be doing as it changes the landscape in which you’re working. Now, in all of this, what’s going to happen is that the quality of AI tools, and the ability to use them effectively, will be a competitive advantage for individuals, for groups, for companies and for industries. And you could use them poorly. Like, for example, if one group said, hey, we’re going to use financial tools for analysis, and another group didn’t, that will probably play out in various ways. I’m choosing something that’s very general across, you know, anything from steel manufacturing to tech investing.
REID:
But if one of them said, I’m going to use only Excel spreadsheets, and the other one said, I’m going to be bringing in a whole bunch of math libraries and AI assistants for doing this, that will be a differentiation. So that kind of differentiation will be there. And part of what I think is important is that it’s not like AI is gravity, magnetically orienting everything toward the North Pole. No, it’s a massive accelerant in a variety of ways, with some jagged edges and a rapidly changing nature of how agents operate, of what tool capabilities look like, of what skill capabilities look like, of what models are doing, of how people deploy them individually and within teams, et cetera. And that’s good.
REID:
And the real question is to say, well, how should we be engaging right now, and how should we be learning, and how should we be changing and adapting? On saving time: saving time doesn’t mean that I do my work in 15 minutes and then go have margaritas on the golf course, right? Because part of the nature of a lot of this work is that it’s competitive. Even marketing or sales is competitive. And how does that play out within the circumstance? You know, if I was the first person doing this (and I knew people who were doing this two years ago, with GPT-4), doing my marketing copy with this, that’s an hour where I used to have an eight-hour day, and I’m going—
REID:
Well, now a whole bunch of people are doing it, and if you disappear after an hour and the other people don’t, and they’re targeting higher quality, more outputs, et cetera, you’re gonna be at a competitive disadvantage. And so it’s about the shape of how you’re deploying it in these environments, versus some law of gravity where, you know, we all go to one-hour work weeks versus four-hour work weeks.
ARIA:
Well, I actually see this article as sort of hopeful, because I think part of the worry was that there would be no more human tasks to do. And certainly it’s a management problem if managers are creating busywork or expecting five drafts of something that isn’t useful, that isn’t, to your point, adding productivity or adding quality. But if what AI does is simply make us more productive, so we don’t save time but we have greater output, then that’s good; that’s good for the future of jobs. That’s great. It means we can all do higher quality work. And, to your point, AI won’t lead to sort of a massive disruption, because of course the person doing one hour of work a week is probably not going to hold onto that job for too long.
ARIA:
But there’s other people who are realizing how to utilize it, make it more quality, make it more efficient, whatever it might be.
REID:
Well, but here’s the thing, I guess. I 100% agree with what you just said. But the thing is, people say, well, we’ll have a lot less bureaucracy, we’ll have a lot fewer meetings, the AIs will be doing the meetings. And what people don’t track is, for example (that was the reason I was using legal contracts), there’s a set of reasons why the legal contracts get that way. There’s a set of reasons why the bureaucratic processes get that way. Those reasons persist in various ways. They may be changing in scope given tech and all the rest, but they don’t suddenly go away. Like, a classic engineering question is, can I do all my work and have no meetings? And the answer is no.
REID:
And the reason is because we need to coordinate on what we’re doing, not just operational plans, but strategies, and how they play out, and so forth. People need to coordinate the work across all of that. Now, that doesn’t mean it isn’t good to really try to sit on the meeting profusion that happens. And of course one of the things AI can do is let you say, look, I don’t need to attend these meetings, because one of the inefficiencies before was that I went to sit in the 90-minute meeting because there were five minutes I really needed to hear.
REID:
And if the AI agent was just listening to the whole thing, it might actually pull out eight minutes, because there were three minutes I didn’t realize I should have been paying attention to, and I didn’t need to be there for it; I could take it in async. Those are all important things, but they will become part of how we ask, “how do we shape this to better serve our fitness function?” You know, our offerings, the products and services, our support, our next-generation product development, our selling, our marketing, our financial analysis, our capital allocation, our, you know, strategy for how we’re running the whole business. All of that, all of that, will be shifting, because AI will touch every part of it.
REID:
And all of the things people say, it’s like the parable of the blind men and the elephant: well, it’s faster; well, it’s higher quality; well, you actually have to do a whole bunch more work because you’ll have to cross-check a whole bunch of things; well, now the expectations for how much is in a release or an output in whatever your function is (coding, selling, marketing, legal, finance, et cetera) are going to be much higher. Yes: all of the above, in different shapes, as we’re doing it and making it happen.
ARIA:
So I want to talk about two essays that actually touch on much of what we’ve just been talking about, about productivity, quality. What does it mean to do work, what does it mean to use tools? So many of our listeners probably read these two viral Twitter essays that came out over the last week. And essay number one was titled Something Big is Happening. And it argued that the AI moment essentially is February 2020 again. We were all having a great time, chatting with each other, you know, staying within six feet of each other. And then COVID hit while we were barely paying attention. And they sort of argue that this is actually even a bigger moment.
ARIA:
And they argue that we have hit the acceleration phase, especially with the two new models that came out just a few weeks ago, and that in one to five years there’s going to be massive white-collar disruption. The tone is urgent. It says, you know, save your money, stockpile what you have right now, because knowledge work is about to get steamrolled. It’s going to be a huge change. And essay number two (I don’t know if I would necessarily put it in direct contrast, but it certainly speaks to the first in conversation) is titled Tool-Shaped Objects. It’s essentially a rebuttal. It doesn’t say that AI is fake, but it says that the boom is being misread. It frames much of the current AI wave as tool-shaped objects.
ARIA:
So, systems that sort of feel like productivity: they generate activity, you’re using tons of tokens, there are dashboards, but the output is often ambiguous, marginal. You’re not necessarily seeing those productivity gains. And so they claim that AI is everywhere in consumption, we’re using it a lot, but we don’t actually see the economic output yet. Perhaps we’re just early, but they’re arguing we’re not seeing that other side of the coin. So I would love to hear your takes on these two essays. I’m sure you lie somewhere in between, but where are we in this AI moment?
REID:
So, not surprising to you, because you said “in between”: I’d say, in a sense, both. And if I recall from my read of Tool-Shaped Objects, it’s actually also responding a little bit to Something Big is Happening. There’s always a little bit of hesitancy that outsiders have listening to the dramatic white-collar-bloodbath talk, because it can sound like “my AI work is so important, my LLM thing is so important, you should stop everything and focus on the thing that I’m doing, that I’m selling,” et cetera. And that’s one of the hesitancies non-AI people have had in looking at AI. Now, I think the hesitancy is wrong. I do think we are in a dramatic moment. I just don’t think it plays out in weeks; in months, maybe, small end to large end.
REID:
Because of the speed at which people actually adopt things: speed as individuals, speed as organizations, speed as markets, and all the rest. And there are still a bunch of things that are fundamentally paced at human speed, despite the amplification of intelligence that AI is bringing. But, you know, in Something Big is Happening there were lines like, “some of the best engineers I know are doing only AI now.” And I was like, well, I wonder what that means about the quality of the engineers you’re associated with. Because, you know, I’ve talked to a set of these engineers, and it depends on which area.
REID:
Like, if I’m an engineer and I’m a smartphone app developer, I’m fucking using AI all the time, right? If I’m an engineer and I’m doing DevOps tooling or data analysis, I’m using AI all the time, if I know what I’m doing. But if I’m an AI engineer working on, like, the code around chips or around systems architecture (not, you know, the tooltips for API usage), that’s another thing. It’s like, I’ve tried AI every month, and for eight months it’s not been doing good things for me. And I haven’t gone, fuck it, it’s all broken; I’m like, I’ll try the next iteration, right? But I’m not using it as much. And so it’s not like, oh my God, everyone is on the wave and here’s where it’s going.
REID:
Now, the author would defend their work by saying, well, look, I’m just talking about this exponential curve, and if we look at the last three years to now and at what the base has been, then of course the kernel engineer of the server will be doing this in maybe two or three months. And I’m like, well, maybe, right? Because part of the problem I have is the overgeneralization of J-curves in capabilities. All J-curves everywhere in nature turn into S-curves, and, by the way, sometimes they layer on top of each other, and so forth. But it’s like, “it’s inevitable; look at this exponential curve; the whole world will explode in compute in two and a half years.” And you’re like, yeah, but no, no, it won’t, right?
REID:
I can guarantee it won’t, in various ways, because there are various things that turn it into an S-curve. Now, that being said: well, what if the S-curve ends up above the capability of 99.9999% of people in every single facet of everything that people currently do? You go, wow, that’ll be a really big change. But then the question is, well, can we adapt? What are the ways we’re learning, and which ways do we slot in? And yes, it’s moving at a very fast speed and we’re at a different speed, and we have to figure out the speed impedances and all the rest, even if the J-curve turning into an S-curve lands well above our current capabilities, in terms of how it operates. So Something Big is Happening is, I think, overly dramatic but fundamentally correct. Right?
REID:
Which is, and you know this because we’ve talked about it: look, Something Big is Happening focused on code because code creates an acceleration of AI research itself, and then of the building of the models. And that is in part true of how a number of them are doing it. But what’s a little bit overly dramatized is saying, well, it’s causing this fundamental acceleration in AI. It’s like, yes, it’s accelerating a bunch of the work, in that a lot of coding work has been, call it, a lot like writing work: you fill in a bunch of stuff to make the whole thing work. It’s like calling in libraries and doing all of this. And all that stuff gets massively accelerated.
REID:
And the AI agents are of such quality that they’re getting it right more often the first time. And then you bring in a second agent to cross-check and make it better, and that improves it even more. And then you can express something in plain, simple English and get something that’s valuable. But, by the way, once again, part of the reason why coding is a technical mindset is: well, how nuanced is your strategy? Like, let’s be super simple. My prompt is “make me a game that a lot of people will pay me money for.”
ARIA:
Not going to get too far, Reid. (laughs)
REID:
Yes. And someone else’s prompt is going to be “Hey, I looked at Fortnite. I have the following set of ideas that are different possible concepts. I’d like to make a set of different smaller apps that test these different concepts, in the following way, against actual market demand. I would like to do analysis of what the curves of those things look like. I would then like to roll those up into a larger game that people would really pay for.” Person B will have a much higher likelihood of making a game that they’ll get paid a lot of money for, right? And people say, well, the AI will just know to do that.
REID:
And you’re like, well, you know, look, it may get to the point where all corporations are all AIs working, but we’re not actually super close to that. And it’s a little bit like this: there are a bunch of really interesting things going on with Moltbot, et cetera, relative to the acceleration of sharing potential skills and other kinds of things for improvement curves on agents. There are a bunch of scary things relative to Instabot farm, malware, hacking, et cetera. But there’s also a bunch of bogus stuff. Like, they go, well, look at the social network; they’re doing all of their own thing; like, they naturally go to creating their own religion and languages and so forth.
REID:
And it’s like, nah, I can nearly guarantee that it’s not that. It may not be hacking of the kind “go create a religion and go talk about religion.” It may just be that the meta-prompt is: go read some of the things that people have been worrying about AI doing, then start doing them and see what the other agents do. It may just be that. It isn’t that this is the natural gravity, because, by the way, we’ve run, at Microsoft, AI agents talking to AI agents. We’ve done a whole bunch of this stuff. And the invention of religion isn’t something that naturally comes up in all these different contexts. Which doesn’t necessarily mean that the creator of Moltbot is doing something nefarious, but it gives a bunch of people across the Internet a chance to hack.
REID:
On Tool-Shaped Objects, part of it is understanding that there are a bunch of things we’re still doing that still really matter. The more subtle one that I think is interesting is that we don’t really have a sense of where these agents and tools will go on metacognition. It isn’t that their cognitive capabilities, including some metacognitive capabilities, aren’t in fact improving in various ways.
REID:
But it’s a little bit like the reason why you say, well, we’ve got five companies; four of them are going to deploy only GPT-5.3 as their marketers, and one of them is going to deploy GPT-5.3 with a couple of human beings. Well, part of what the couple of human beings are going to be doing is going, huh, how do we help our GPT-5.3 outcompete the other four, right? What are the things we do? By the way, they might intervene in ways that make it worse, but I suspect they’ll intervene in ways that make it better. Because the reason why the chess example is frequently off is that most of these things involve metacognition in a way that’s not the simple output you get from, like, a chess game.
REID:
There’s no epistemology question, there’s no fitness-function question. The competition is literally within a very constrained space, even in much more complicated games like Go. In life, it gets to be much murkier. And that’s the thing: I think we still have, at the very least, a bunch of room for human contribution, if not for a long time to come.
REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles.

