This transcript is generated with the help of AI and is lightly edited for clarity.
ARIA:
Hey, Reid, great to see you.
REID:
Great to see you.
ARIA:
Today we are, of course, going to talk about AI. We are going to delve into crypto because you were at a crypto conference last week. We are also going to talk about the recent layoffs that have been in the news, and of course, how the big frontier models are trying to integrate themselves into every way of life, including Wall Street. Let’s start on crypto. So, Reid, last week you took the stage at CoinDesk’s Consensus conference in Miami and said that crypto becomes even more critical in an AI world. Within a few years, there will almost certainly be more AI agents on the internet than there are humans. They’re going to book meetings, move money, sign contracts, buy things, negotiate with each other. Most of that without a person in the loop on either side.
ARIA:
And so this raises the question the consumer internet has never really had to answer. When my agent talks to your agent, how do either of us know the thing on the other end is real, is authorized, is who it claims to be, and is good for what it just promised? Like, we’ve always talked about trust on the internet, and I think trust is even going to become more central as agents talk to each other, but also as agents pay each other. So for someone who perhaps rolls their eyes at crypto, or as some might say, for someone who hasn’t crossed over yet, can you tell us why does an agentic internet require crypto specifically, both for transactions and for identity providers?
REID:
“Require” is a little strong. “Highly used,” maybe “best tech,” just as a nuance. And that’s part of why I attended: it was an excellent conference, despite the fact that it was in Miami. A little California-at-Florida dig there.
ARIA:
I’ll take digs at Florida all day long, Reid. Bring them on.
REID:
So, you know, I bought my first Bitcoin in 2014 and haven’t sold it, or sold any of the Bitcoin that I purchased. Obviously I led the investment in Xapo and was part of Greylock’s investment in Coinbase. All of this stuff was key from the very early days. But then I diverted massively into AI, and I continue, as you know, to do a lot of AI stuff. But it’s AI, in fact, that has partially gotten me back into thinking about crypto. And part of that is, as your excellent question frames: if we’re moving into an agentic universe or an agentic internet, not only how do we know who’s human, but also which are agents, and which agents actually represent Aria, which agents represent Reid, which agents represent other people? Can those agents transact?
REID:
What can you trust for making various forms of commitment, which include financial transactions and others, but not only financial? And crypto is, so far, the best-designed set of protocols for this. I mean, one of the things I said a number of years ago was that the internet domain name service would be better if it were in crypto, if we had these ideas, because having identity systems that are robust and on a distributed network matters. There are various reasons you wouldn’t necessarily want, as an optimal solution, a centralized identity service for these agent certifications: what if it’s corrupted, or what if it goes down, et cetera?
REID:
And therefore, a decentralized identity system, domain name service, certification system for the agents could be very useful, because it’s not just how companies talk to each other or how ISPs talk to each other. There’s this question of every individual with their agents, not just individuals within companies. Obviously, there’s going to be a whole set of different agentic identity systems, control systems, validation systems, enablement systems within companies. And when you look around, that’s precisely the kind of problem that crypto has solved. And you’re going to need some things that do human identification—obviously one of the things that Worldcoin and others are kind of doing there.
REID:
You’ll need some things that are agent identification, and then you’ll also need things that say this agent acts for this human entity, whether it’s a Reid or an Aria, whether it’s a company, whether it’s something else. And these identity providers are not just for financial transactions—obviously super important, agentic commerce, et cetera—but also for trust of information, trust of commitments, even low-level trust. Like, hey, we’re setting up this podcast recording at this time on this day. Do we trust that it was all set up through scheduling processes that involve these agents? All of that stuff is going to be very useful within the crypto environment.
ARIA:
So, Reid, you brought up that you first bought Bitcoin in 2014, a long time ago. And then last week you were asked what your exit price for Bitcoin was. And your answer, perhaps a little tongue in cheek, was, is there such a thing as an exit price? So I’m a little confused. I think everything has an exit price. Tell us more about what you meant.
REID:
So there are two senses here. In one sense, everything does have an immediate exit price. For example, if someone were to come along today and say, I want to buy your Bitcoin from you for $200,000 a coin, of course I would sell it to them, because I could then go buy more Bitcoin somewhere else. So there’s one sense in which that’s definitely the case; there’s always some transaction price for these things. Now, part of it for long-term investing is that people say, hey, there’s a fixed price at some point in the future where you would simply exit Bitcoin. And because I actually think Bitcoin is going to be part of our, call it, economic firmament, I think it’ll be there as long as the internet’s there.
REID:
As long as we have this e-commerce system, this kind of internet system and identity system and so forth, Bitcoin will be there. If that all changes, like, when the internet goes away, maybe Bitcoin will have gone away too. But the more serious reason to ask, is there such a thing as an exit price, is that presuming there’s a fixed price right now at which you would exit is like saying, well, there’s a fixed price right now at which, five years from now, I would sell my Microsoft or my Google or something. And that’s foolish. You’d actually be looking at where this is going. Because Bitcoin’s here to stay for at least the entire life cycle of the internet, there isn’t a per se exit price for it, because I don’t know what the exit price for the internet is.
REID:
I mean, is there such a thing as an exit price for the internet? No, it’s going to be there. So of course, this year, this day, there will be some trading prices. Now, the reason I continue to hold without any consideration of selling is that I think Bitcoin is such a fundamental part of the growing value of the internet that the question of exiting at any specific projected time right now doesn’t really have an answer.
ARIA:
So this is not a show about investing advice, but you heard it here: do not sell your Bitcoin. All right. As long as AI has been at the front of consciousness, everyone has been talking about AI layoffs and AI jobs. And this past week, some people are saying, well, it finally happened. It’s here. We saw two of the biggest infrastructure companies in tech announce major layoffs. Coinbase cut 14% of staff, about 700 people. Cloudflare cut 20%, roughly 1,100 people. And it’s not as if Cloudflare was necessarily struggling; their revenue was up last quarter about 34% year over year. But I think the line that got Andrew Yeung and a lot of operators on X a little hot and bothered was buried in Brian Armstrong’s Coinbase memo that went along with the layoffs.
ARIA:
And his framing, and I’ll quote it here, was: “Rebuilding Coinbase as an intelligence, with humans around the edge aligning it.” So that’s a pretty strong thing to say, that this artificial intelligence is going to be the center of what we’re doing. And he outlined three core decisions. One was no pure managers: every leader has to be a working IC, sort of like a player-coach, and they might have up to 15 direct reports. The second, apropos of the first, was to flatten the org chart to five layers. So you’re going to have a lot fewer layers, which means a lot more direct reports. And the third was to experiment with one-person teams, where a single human can play engineer, designer, PM. They’re directing a fleet of agents because they’re doing everything.
ARIA:
And so my question for you is: his memo essentially deprecates the people-manager role that built modern tech. Like, is this the future, where we don’t have managers, we have ICs managing a ton more people, and AI is filling in those gaps?
REID:
So look, I think as a narrative story, this is the kind of thing that I would see the real AI layoffs heading towards. What I’ve been saying for a few years now is that it’s not that humans are going to lose their jobs to AI; humans are going to lose their jobs to humans plus AI, right? And so when you look at it and say, well, we’re going to be rebuilding companies as AI-native, it’s the human plus AI. And there are various configurations of human plus AI, and what Brian has outlined is one of those configurations. Now, if you ask me, is this going to be the only configuration—is every company going to move to this kind of configuration? I’d say certainly not. Is this going to be a viable configuration?
REID:
I think yes, in some circumstances. Now, in my book Blitzscaling, which showed how you scale very quickly from individual contributors, to player-coaches who are both contributing and managing, to managers of people, and then to managers of managers, which are executives, I think we’ll continue to have all of those different configurations as you figure it out. But AI will be in the mix everywhere. For example, I think it’s a naive thought to say, well, everyone should be coding now because we have coding agents, so even if you’re an executive, you should be coding directly. Should everyone be using AI? Yes. Should everyone be using essentially the powers of coding assistants, even if you’re an executive? The answer is yes.
REID:
When you get to a point where you’ve got an AI-native organization—where what were formerly individual contributors are now orchestrating and managing fleets of agents to do things, and doing that with some power—should the executive still be using Codex or Claude Code or Copilot or anything else to contribute code directly in order to catalyze the organization? I’d be surprised if that was the mature state, because at that point, the coding velocity and intensity from the folks who are now individual-contributor managers of sets of agents is such that the executive doing that would probably be fucking things up. But why should executives potentially be doing it today? To catalyze the organization. It’s like, look, this gives us such an ability to do this.
REID:
I should, A, be learning it myself; B, be challenging you to do it; C, be showing you how you could be doing this and making that happen. And so having executives, managers, et cetera, playing that role today, in order to help the organization move to being AI-native with some speed, is, I think, one very good play. And we’ve been seeing it at all of my portfolio companies, including Microsoft, and at others too: executives and managers now saying, I’m getting feedback because I’m contributing code with Codex, Claude, Amplifier, and other things. And I think that’s a good catalyst, and I think Brian’s statement here is a good catalyst for doing so.
REID:
And maybe the amplification I’d put here, and I don’t know for Cloudflare, is: hey, look, we are going to be refactoring in the direction of being AI-native, and that’s what it’s going to take. Now, it wouldn’t surprise me if what you see happening is, sure, Coinbase cut some staff, sure, Cloudflare cut some staff, but they continue to hire in an AI-native way and continue to reorganize toward being AI-native, a little bit like what you saw with Block, which announced AI layoffs while still hiring for this, because they’re moving towards being AI-native companies. And pundits will then say, well, but AI-native companies will naturally have fewer human employees. And in some cases that will absolutely be true.
REID:
I think in other cases it’ll be more employees, because they will have captured the productivity, market, and other benefits of it, and then they’ll be exploiting that: how do we gain more market position, more market share, more strategic capability, given that we are more AI-native than other people and we want to realize the benefit of that competitive advantage? And that’s, I think, the kind of thing that pundits should be paying attention to.
ARIA:
Yeah, I think the IC point connects to what you said about executives needing to use AI. Executives are used to delegating, and this is the thing you can’t delegate, because your teams are going to be like, you have no idea what you’re talking about. And I’ve talked to so many people who are going into exec teams and teaching them how to use AI so they can be those evangelists for the organization. And, I mean, the takes on this were all over the place: ah, this is just typical overhiring from the pandemic; this is mismanagement; if AI was giving us such amplification, wouldn’t we see it in the revenue? Wouldn’t it be better than 34%?
ARIA:
So I know, like you said, you haven’t dug deep into Cloudflare and Coinbase to know exactly why these came now. But we’ve been talking about AI-washing for a long time; people are going to use the AI excuse whenever they do a layoff. Do you think, though, that even if overhiring is 30% of it, or even if some sort of mismanagement is another 30%, we’re going to start to see layoffs that really are due to AI, that are due to refactoring the management layer at a company and the jobs they’re hiring for? Like, is that going to come in 2026, or in the back half of this year?
REID:
I think almost for sure. And maybe even one or both of these are. Because part of how you look at it is this: when they say, hey, we’re just doing layoffs because we’ve got such great AI productivity, I don’t actually believe that full AI productivity is massively realized right now. But the framing of, hey, we are refactoring to be an AI-native organization: there are things we’re already seeing in productivity, there are things we’re anticipating growing into, and we’re reorganizing for that. And part of reorganizing will be, well, we need fewer staff here, maybe later more staff there. We need staff that’s more AI-capable and focused. We need staff that works within an AI-native organization. And I think that will come not even just in the later half of this year; I think that bell should already have rung.
REID:
And it’s one of the reasons why I generally advise not just entrepreneurs but all individuals: get engaged with using AI. Part of where you can be really beneficial is to be the human being who is replacing yourself with a me-plus-AI versus just me. And I think that’s a very important thing to do. How to drive that as an individual, how to drive that as a group within a company, how to drive it across a company, and then obviously how to navigate it within industries and society is, I think, really key. And part of the ringing of the bell with the code assistants is that coding is where this is most key right now. But coding will be driving all other forms of reasoning and knowledge work.
REID:
And it’s kind of coming in these things. Now, that being said, and I think we’ve seen data on this strongly, it may even be, hey, what I formerly could do with a thousand engineers, I can now do with 500. But by the way, I’m going to increase the things I’m doing. And people say, well, what happens? Do you still end up with two or three hundred engineers who are structurally unemployed? Actually, I think we’re going to see engineers hired in all kinds of different companies, not just new startups. Like, oh, now I can hire engineers, because every place wants software at a certain price to be part of the software transformation of the world. That’s not to say there won’t be painful transitions for markets and industries.
REID:
But I think there’s a lot of demand for hiring software engineers who know how to use AI.
ARIA:
Reid, just going back to the exact language of the announcement, because I think it’s always interesting how these executives frame AI, which gives insight into how they’re using it and thinking about it for their organizations. Again, he said, “Rebuilding Coinbase as an intelligence with humans around the edge, aligning it.” This feels different, perhaps, from how we’ve been talking about AI, with humans at the center and AI augmenting them. Do you think this is a real difference, or was that just a turn of phrase? Like, how do you parse that specific statement?
REID:
So to some degree, you can look at every company as a structured intelligence, and then the question is: what is the composition of how that intelligence operates? It isn’t just the fact that the board of directors is responsible to the shareholders, that the board hires the executives, and the executives then hire everyone else, and that there’s a balance of capabilities and responsibilities between, for example, the CEO and the CFO. There’s a reason why both the CEO and the CFO report to the board, and that’s an intelligence composition. And AI is going to change how this intelligence composition works, because it’s not just, oh, well, now AI is writing a bunch of code and people are managing it writing a bunch of code. Actually, the information and communication within the firm changes.
REID:
Like, an interesting test of how AI-native you’re getting is: do you have AI participating in every work meeting you’re doing—not just taking notes, but suggesting things you maybe haven’t covered, or people you need to inform, or people you need to consult with? Kind of like the RACI and DACI frameworks rebuilt for the age of AI in terms of how they operate. So the entire nature and structure of communication, decision making, risk analysis, workflows, et cetera, changes because of AI, not just human beings managing fleets of AI. Now, this gets down to another level, where one could take several different interpretations of Brian’s phrase about humans around the edge aligning it. I do think that humans managing AI in various ways is one of the fundamental things.
REID:
And part of that might get to a point where, hey, the AI is running at such a speed. Just like, for example, software runs the cloud servers for Azure or AWS or others: the software is running, and humans are aligning that running software. And now it’s much more amplified, where AI is making decisions and doing stuff at a clock speed where the humans aren’t checking every box but are actually aligning and managing what’s happening. I think that’s a good gesture at that future. But there’s also the question of where, for managers, for companies, for shareholders, for industries, for society, we’re going to say human governance is more important and isn’t just around the edge aligning it.
REID:
For example, today we don’t have any kind of legal structure for saying, well, the AI agent did that. We say, no, we have various ways that human beings are responsible, whether it’s the board of directors, CEOs, CFOs, et cetera. And so with humans around the edge aligning it, there’s a bunch of work on where that goes: where human governance is legally responsible and accountable, where we think that is critical and important, where that’s a company choice, where that’s an individual choice. So there’s a bunch of nuance there that is work to be done within Coinbase, within other companies, within industries, and within society, and it will need a bunch of future investigation.
ARIA:
So we talked at the top about how crypto is going to become even more important in the age of AI. And I’m sure you would agree that the financial industry in general is going to be turned upside down by AI; AI is going to influence every aspect of it. And certainly being in New York, the financial capital of the US, I’m very interested in how AI will hit this industry. Anthropic spent this past week making some pretty big moves, moving Claude into the plumbing of Wall Street. Anthropic made three big announcements: two major partnerships and a new initiative. First, a $1.5 billion joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs to embed Claude inside the portfolio companies of some of the largest private equity firms on earth.
ARIA:
Second, which I was very excited about, because a lot of people are talking about financial fraud and how AI will exacerbate it: they announced a partnership with FIS, the company that runs the financial rails for roughly 12% of the global economy. They’re going to build an AI agent that investigates financial crime. The first use case is anti-money laundering: going after drug traffickers, terrorists, and the people who use the banking system to move that money. So this is, again, fighting back against folks who are trying to defraud people with AI. How can we use AI to stop that? And then lastly, Anthropic also dropped some ready-to-run agent templates for finance, which include pitch builders, KYC screeners, and month-end closers.
ARIA:
So some of this is for people who work within the industry, and some of it is much broader: fighting back against fraud and financial crimes. The first thing this Claude-FIS agent is going to do at scale is decide whose financial activity gets escalated to law enforcement. This is very important, but it’s also a profoundly political surface for an AI to sit on. We’ve obviously seen clashes between Anthropic and the federal government. So how should a bank board or a regulator actually think about deploying a model that’s the industry leader, when being the industry leader means being right just 64% of the time? It’s wrong about a third of the time too. How do these regulators use AI when it’s an incredible tool, but it still gets it wrong a lot?
REID:
Well, human beings get it wrong some percentage of the time too. I don’t know if it’s a third, a quarter, or 10%. So the benchmark for these things is not 0% errors. Everyone can have a reflex that says it should be 0% error, right? The world doesn’t work that way. It doesn’t work that way medically, financially, legally, et cetera. So you ask: what error rates are acceptable? And, by the way, how do we improve and tune it? Because part of progress is improving it over time. So I myself don’t know yet whether shifting to it right now is a shift down in accuracy rate, a shift sideways, or a shift up, but you want to be on a trajectory of improving.
REID:
Now, I think the question, and this gets, generally speaking, to bank boards, regulators, everybody, society, and accountability, is to say that what we want is dynamic improvement over time, and to ask at what levels accountability, financial penalties, negative incentives, and positive incentives should be directed at that, and where the current benchmark should sit. Now, a nuance: say, for example, the AI is at 64%, but humans are currently at 75%. If humans are at 75% at a 10x cost rate, you might say, we’re going to shift to 64% right now, because we know we can improve it to at least 75%, and maybe even north of 85%, at a much lower cost rate. That’s the right way to do it.
REID:
And that would be an actually smart, dynamic approach to how you play this forward. Because part of what you have to do is think strategically about these things. It’s not just, oh well, if 75% were the current benchmark, it should never be less than 75%. No, actually, the important thing is what we get to. And why does lower cost matter? Lower cost gets reflected in the effectiveness of the entire industry.
REID:
And it’s one of the reasons why I’ve been pro-crypto for a long time: getting to financial and identity systems that are lower cost means broader-based participation in the benefits of the financial system, within our society and also within the rest of the world, the Global South, et cetera. And that doesn’t mean you immediately take the current error rate, at whatever price, as your benchmark. No, it’s playing forward to better-priced, lower error rates in how the financial system runs. And I think that’s the analysis by which this should run.
REID:
The details, when it gets to FIS, require someone who’s closer to it doing that analysis. It’s a dynamic anticipation. And then you say, well, should it be one year forward, two years forward? What should it be? Look, that’s all adjudicated stuff, both within the strategy of running the organization and within the benchmarks. One of the things that many people underappreciate about the strength and value of the capitalist system is that it actually sets benchmarks and prices in a distributed, networked fashion. That’s part of what I like about some aspects of crypto too. And when you get to a kind of market-intelligence version of that, that gets to: what should we take risks on?
REID:
What should be allowed, for example, might even be an initial increased error rate, because the pricing of what we get later is so much better.
ARIA:
All right, Reid, to wrap up, we have some questions based on your last Substack post, which was In Defense of AI Slop. And for those who haven’t signed up, Reid has a new Substack called Theory of the Game, and maybe we’ll make this a regular occurrence. I’m going to read you a few of the comments and questions from your Substack and would love to hear your responses. Again, the most recent piece was In Defense of AI Slop, and it drew a comparison between the early days of electrification, and the critiques of that industry, and the early days of AI. The first one is from Nat S, who says, “The historical parallel makes sense to me and I agree with the frame that due to the widespread nature, we’re going to see things get chaotic before they get better.”
ARIA:
Or as you say, “Slop is not a sign of failure but a sign of progress.” But, he continues, that does not justify the slop coming out in high quantities, and so we must also answer the question of how we deal with high quantities of slop on the internet. So what would you say to Nat S about that secondary problem the slop creates, even if it’s a sign of progress?
REID:
What I would say is that the internet already has a whole bunch of slop, and AI is new slop. So there will be an increasing volume of it, and a bunch of it will be garbage, misleading, useless, et cetera. Just as we designed various mechanisms on the internet to try to get information-quality filtering, we’ll need to do that here too. And by the way, I think we need to get a lot better at that; it’s one of the things that I think is uneven. In the early days of the internet, we proxied on brand, and proxying on brand is part of what allowed us to create good systems. Take Wikipedia, for example.
REID:
Wikipedia, generally speaking, has a lower error rate per paragraph or per word count than the former systems like Encyclopedia Britannica and so forth. At least, last I saw studies, that was the case, and it probably still is. But part of how they do it is not just group editing; they also institute a bunch of policies, like, hey, we will trust certain kinds of news sources or journalist sources more than others as a proxy for figuring out our neutral point of view. And that’s part of trying to keep slop out of Wikipedia, and it justifies Wikipedia’s standing as a high-trust information environment, including within search engines: Google, Bing, et cetera.
REID:
Presumably, although we’re in the early days of how the AI stuff plays out, you need to have that kind of thing too, at some level, to make sure that slop is, as per the earlier financial answer, dynamically improving in terms of what we trust, see, read, share, et cetera. There will, no question, be much more slop because of AI: an amplification of the already just tons of human slop and human misleading information, wrong and incorrect stuff, you know, hydroxychloroquine for fixing COVID, other kinds of things. And it’s not that you try to prevent the AI tools from making slop or prevent people from doing it.
REID:
It’s that you have mechanisms for increasing all of the things like being able to trust certain forms of information, being able to trust what’s being sent to you and shared with you, and being able to navigate your ability, when you’re sharing with other people, to do so on an informed and good basis, that kind of thing. And that’s where the solutions will be. And yeah, we’ll need that.
ARIA:
So a sort of related question from Jonathan Aberman. He said, “I feel that the problem with the analysis is that AI slop is a feature, not a bug. It’s an aspect of the architecture of the gen AI models. And I have yet to see a clear pathway out of that other than a promise of more technology.” So how is AI sameness overcome by originality as humans and AI work together?
REID:
Well, I think Jonathan’s question contains part of the answer already, which I agree with. I think a lot of the discourse is unfortunately too much “AI replaces human beings” versus “AI working with human beings.” And right now, all AIs are terrible at various forms of originality. I mean, you’d think, with the general discussion of superintelligence and everything else, that when you go to AI creating original things, we would just be immersed in a flood of so many things. But there are reasons, for example, why we still gesture at Move 37 and AlphaGo: there was something that no human being ever discovered in thousands of years of play.
REID:
And it revolutionized this well-studied game, with hundreds of millions of people playing Go, maybe billions over time. That’s one thing. And by the way, we see bits and pieces of it in the promised science: oh, this piece of science was helped, and that piece of science was helped. And obviously, with Manas AI, we’re trying to create a massive increase in this. But even with Manas, and I’m not a huge fan of the term centaur, it’s the human plus AI that’s doing all this stuff with originality. And that’s one of the things to really focus on, which is part of the reason why I wrote Superagency.
REID:
It’s focusing on that person plus AI, person plus machine, as ways of solving a bunch of these things, including differential edge, originality, and a bunch of other things. Now, that’s not to say that we won’t have increasing originality from AI, that we won’t have increasingly good things there. Like, for example, producing these kinds of quirky things, like the Pi video for the musical. It took a lot of work to do that with human beings, though a lot less work than it would have been without AI. And, you know, I’m the kind of person you would pay not to sing, and here, actually, the Pi video was pretty fun. And so that kind of thing is, I think, the way you steer through it.
REID:
And it’s also one of the reasons why I didn’t mean to imply, with the AI slop piece, that it’ll just be AI producing a lot of slop. You produce good things by having human beings in the loop. By the way, humans with AI will also produce a bunch of slop, but they will also produce some really amazing things.
ARIA:
As ever, taste, and human taste in particular, will become even more important in this age of AI. Reid, thank you so much.
REID:
Always a pleasure.
REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Aman Suri, Lexxi Kevin, Danny Garrison, Trent Barboza and Tafadzwa Nemarundwe.