This transcript is generated with the help of AI and is lightly edited for clarity.

REID:

I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way. This is Possible.

ARIA:

Hey there, Possible listeners. This episode is Part II of our live version of Reid Riffs, which was recorded in New York City on June 24th in partnership with Village Global. If you haven’t yet listened to Part I, please go check that out. It was last week’s episode, and it’s really good. In this week’s episode, we’ll be hearing Reid’s response to some listener questions. Hope you enjoy.

FAIZAN BHATTY:

Yes, I’m Faizan. I’m traveling from Chicago. So the question I had was, especially in this evolving world of AI, what are the biggest misconceptions founders have about scaling companies quickly?

REID:

Oh, it’s interesting. The usual general misconception is that scaling is just a straightforward next step. Like, once you establish product-market fit, scaling is relatively just, “Oh, hire more people, have a larger organization, reorganize,” and so on. It’s one of the reasons why I actually think that the phrase “product-market fit” is interesting and useful, but “scale-product-market fit” is what really matters: you’re projecting to scale, and what does that look like? But then another frequent misconception is that you’ve proven everything, and then you just add rocket fuel and go scale. Most of the time, you go, “I’ve got a high enough probability of scale-product-market fit that I’m just going to go.”

REID:

Right? And we see lots of companies doing that in various challenging ways. The classics from the book are Uber and Airbnb, and others. So, for example, in the very early days—few people here will remember this—it was, “Oh yeah, Facebook is scaling, a whole bunch of people are using it, but is it going to be a good business?” Right? And they were going, “No, no. We’ll figure out the business as we go. It’ll happen. Let’s go.” So actually, in fact, you will take some scaling risk. You will figure out some questions as you’re going. Now, in an age of AI, maybe there are a few other things to add to that. One is, because the general discourse is around size of model and the hyperscalers, there’s a question like, “Okay, is size of model the thing that really matters to me?”

REID:

And I think in a lot of businesses, it won’t matter. Now, there’s this interesting question about how long, and in what shape, open-source models will be provided. The reason why there are a lot of open-source models right now is that there are a lot of challengers; it’s the way they get into the game. But once open-source providers start realizing that they’re taking their compute resources—which, by the way, get more expensive as the models get larger—and providing them to everybody, including competitors and all the rest, I think there will be a, “Well, maybe we shouldn’t do that.” Or maybe they’ll experiment with different kinds of license models and other kinds of things. So you can use open-source models, but, by the way, I think there will be tons of competitors, which means that a multi-model approach is a good way of doing a startup.

REID:

Because, basically, one of the things we’re seeing with the various hyperscaler models now is, “Oh, this one’s better at this,” and then four months later, “No, no, this one’s now better at this.” And that’s a pattern that’s going to continue. And so you have to adjust to that variability. And that will be a little bit of this question of how scale plays into what you’re building, because you have to have the dynamism to be able to shift as you need. Now, obviously, if you have network effects that defend your business, you can be slower at adoption. Maybe a really key example is Apple and artificial intelligence. You know, if anyone’s gotten a good result from Apple Intelligence, I’d be curious. I haven’t yet heard of one. But they have such a network effect locked into the business that they have time to play at it, and obviously amazing devices and all the rest. So those are some reflections.

ARIA:

Awesome, thank you. And we’re going to dig deeper into how different industries approach AI. Is Priya Murali here?

PRIYA MURALI:

Hi, that’s me. I’m Priya. 

ARIA:

Ah, stand up.

REID:

Hi. Yes. Great.

PRIYA:

Reid, Aria, thanks so much for being here. I’m from Cobalt ID. My question is: for founders building in cautious, data-sensitive industries, how can they go about earning the trust they need to access sensitive buyer data? And what are some of the missteps you see them take that commonly break that trust?

REID:

Great question, and obviously one of the other challenges is that we have a general atmosphere of mistrust of tech companies overall, of what people are doing with data and how that’s being described. That’s an additional difficulty in this particular game. But I did actually have an episode on this in Masters of Scale, maybe the Daniel Ek Spotify one, on “build trust quickly.” And to some degree, solving this kind of problem can be a competitive edge and a differentiator for your business. You know, to give you an example, in a similar vein: one of the questions in the early days at PayPal was, “How do you get people to trust using PayPal to do transactions?” And I won’t name the executive who came up with the idiot idea.

REID:

But the idiot idea was the buyer protection program, which was: we guarantee your purchase up to $2,000, basically with no conditions. And it was a classic software person’s idea, with no understanding of the classic industry. And so you’d have people on eBay where, like, Aria and I would be colluding: she’d put up a plasma TV, I would buy it, she wouldn’t ship it to me, and then I would get my money back, paid by PayPal. And the losses were hemorrhaging. And so this problem landed on my desk as a, “Oh my God, we have to shut this down. Maybe everyone’s going to lose confidence in PayPal. Maybe it’s going to go away.” I was like, “Okay, look, how many days can you give me to solve this problem?” And, you know, Peter Thiel was like, “Five.” And I was like, “Fine, I will go try to solve this problem.”

REID:

And what we came up with was a reconstituted PayPal buyer protection solution. Instead of guaranteeing you $2,000 in every situation, come hell or high water, we switched to doubling the eBay insurance. Which meant we still had a buyer protection program. And of course, it was on eBay, so it was fair. So if you used PayPal, you could double the insurance that you got with eBay. But what you had to do then was show the eBay insurance claim, which offloaded a whole cost structure onto eBay, because you had to prove that eBay thought the claim was worth paying. And then, if the item was actually $370 versus the $250 eBay covered, we’d pay the $120, right?

REID:

And those are instances of how to think about building trust. Part of the reason Spotify could come out of Europe, and not the U.S., is because it guaranteed Denmark and Scandinavia as an area and bought its way in. So, for example, getting insurance—Lloyd’s of London, other kinds of things—is another way of doing it. Fundamentally, it’s about building trust quickly. It’s, again, creative ideation: what’s the kind of thing that will give the relevant constituencies confidence that you’re holding yourself accountable to the failure points or worries they might have, whether they’re buying something on PayPal or anything else? That’s the kind of thing you’re looking at. And a lot of people also just don’t understand the exchange. One of the funniest conversations I had a number of years ago—this was with a Silicon Valley person—was, “What has Google given me for my data?” And it was like, “Well, free search?” So there’s a lot of misconceptions.

ARIA:

Not that I don’t love everyone in the room, but this is my favorite AI person in New York. Allie Miller, are you here?

REID:

Hey. And Allie and I did a podcast and she has great ideas. So it was fun.

ALLIE MILLER:

Oh good. Okay. Well then, let me put you on the spot and see if you have good ideas for me.

REID:

No, I never do.

ALLIE:

I would like to know what signals we should be looking for, and perhaps on what timeline, for when AI is good enough at embedding itself—this is part of the software-writing piece—inside of business and technical processes, so much so that the painfully slow human behavior of adoption and adaptation no longer matters.

REID:

Great question. Not a surprise, given that you do this—AI leadership—quite well, across a number of things, and I will try not to be dumb in my answer. What you’re essentially looking for is, “Where are there coherent loops that can move much faster, that don’t have organizational change or organizational hiring as part of them?” Part of the reason why I invested in Sierra, with Bret Taylor, is because I think the front end of customer service is more easily made modular, separable from how it interfaces with the rest of the company. Obviously there is a fair amount of connection, but there’s a way that you can very quickly move it to being the front end of how you’re interfacing with the company, and that kind of thing. And so, it’s one of the reasons why I think customer service is one of the places where AI will really speed up change.

REID:

One of the ones that I think will be interesting to see play out—one can entertain multiple hypotheses, a good range for multiple startups—is sales. And part of it is adjacent to customer service: you could say, “Well, that’s inbound. What about outbound, with sales?” Of course, people might get really pissed off about being called by AI agents. There might be legislation, other things, a bunch of different risks in how that plays out. But that’s another modular function where you can get that loop going. One of the classic things is the Bezos “two-pizza teams” idea and other kinds of things: what the coherence of a smaller group can accomplish, if the group is working pretty intensely, will now have a higher throw weight. And that’s part of the reason why you have Dunbar’s number: you know, 150 is roughly how many people you can hold in mind.

REID:

One interesting prediction—actually, this is a pretty fun thing for everybody in the room to do. Think about what year you will never have a professional meeting without an AI agent listening in and playing a role. And that’s literally every single meeting you do that has any kind of professional component. By the way, you might also do it when you’re having a talk with your kids and everything else; that’s a different question. But I don’t think that year is too far off. And if that’s the case—this is one of the things I wrote about for the MIT Tech Review a decade ago, in anticipation of where AI was going—then scalable coordination between teams might get a lot easier. Because it’s literally: this team is having this meeting, and da da da, and that team is having that meeting, and the AI agents go, “Oh, wait a minute,” and immediately create a notification, or even two hours later, about the overlap. And how that speeds things up, and how that operates, will be interesting. That being said, the accelerations will happen where the loops close in tight ways, and where people feel competitive need. Part of the reason why AI adoption has been so slow right now is that most people don’t feel competitive need. But once, for example, I’m sitting here doing my coding, and I see Aria 10X-ing my speed and delivery.

ARIA:

What now, Reid?

REID:

Well, yes, exactly. And I’m like, “Oh, she’s being much smarter about how she’s using AI.” I’m going to then start using AI too. And that will be another thing that will drive it. But obviously it’s a very good question and a complex space.

ARIA:

Well, I mean, a lot of Allie’s work is working with Fortune 500 companies to have them adopt AI more quickly. And I could imagine that perhaps 90% of this room is already there and can’t imagine a workday without AI listening constantly. And yet I think other people are going to be, A, sort of dragged kicking and screaming, or, B, adopting only when they absolutely have to because of market need. That being said, as a parent? I mean, last week I was having a conversation with my nine-year-old. We were talking about monarchy because of the No Kings marches, and we were explaining what monarchy was, and he goes, “What are some other ‘-archies’?” And I was like, “Oh, I don’t know…” And then I was like, “Oh, patriarchy!” And I explained what it was, and he goes, “Oh, it’s so good that that doesn’t exist anymore.” And I was like, “Oh, I wish an AI was listening to that conversation and could have just piped in like, ‘Aria, here’s how to explain patriarchy to your 9-year-old.’” So I think as a parenting agent, there’s hope.

ARIA:

I think we have time for about two more questions. Esther, in the front row.

ESTHER:

So this is not a political question, but…

REID:

So this is not a political answer.

ESTHER:

Exactly, but it is an economic one. There’s a lot of value created, whether it’s by Google or whoever else, out of what’s basically a collective asset: the content on the internet. What do you think of the idea of some kind of mining rights, or water rights, or taxes, where the large language models—not people who use their own data sets for training, but the LLMs, and maybe some other things—would fundamentally pay some kind of tax on the revenues they generate using that collective asset? And then that would go to teachers and childcare workers, and people who are now desperately underpaid.

REID:

I think trying to figure out how to tax all the data stuff is hard; it’s such a fast-moving thing. Like, one of the things that currently seems like a pattern is that the larger scale you get with mixture-of-experts models, the less general data they actually need. And so then, which general data matters, and all the rest, is one instance of how hard this is. So what the patterns are of how data fits into this is challenging. And I also believe in simplicity. I think it’s probably fine to say, “Hey, look, because of technological leverage, some companies are going to have enormously profitable business models. Let’s have a little bit more tax there and direct it towards public goods,” right? I think it’s just simpler to do something like that, wherever it is. One of the underlying tensions is between labor and capital—and when you get to services labor, services labor gets a lot less scale leverage.

REID:

And yet, services labor really matters in terms of how we care for each other and so on and so forth. So we go, okay, as that balance changes—because one of the things that AI does is shift some of the labor calculus towards capital—it’s like, “Okay, should we try to figure out how to make some more adjustment towards services labor?” One of the mistakes I think most policymakers make is that they try to be overly specific. It’s almost like the mistake in software of hard-coding variables, right? Let’s try to write a general program across that instead. So I’m positive about that. And I don’t think it needs a justification of, “Well, these things can’t work unless they use what is currently a common good.” I think they live in society, and it’s important that we have a society that brings people along.

ARIA:

Yes, right there.

SPEAKER 7:

Hi Reid, thank you so much for coming today and for inspiring many of us to start companies. The way you inspired me was by posting the video of you talking to your digital twin last year. And I thought, “This is so cool, and this is the future of the AGI interface.” So my question is, could you share a little bit about your experience building this digital avatar? What were your learnings? What are some of the things that could go right with this technology? And what advice could you give in terms of blitzscaling for someone building in this space and trying to compete with incumbents like HeyGen and Synthesia? Thank you.

REID:

Well, blitzscaling advice will be more specific and hard, so I’m not going to give much of a good answer there. But it’s a good question to be asking. It’s like, “When does your probability of the set of things, of scale-product-market fit, business model, et cetera, come together? Can you convince people to raise enough capital and go?” And what sets blitzscaling’s clock is what your competitors look like. One of the problems you have is that if you’re in a capital-spend competition with hyperscalers, that’s a very difficult, risky business. You’d better get it right. Now, on the Reid AI thing, it was funny, because the idea basically came along because I have, as a general principle, the view that one of the things people tend to do is go, “Technology good, technology bad,” versus, “How do you shape technology to be good, and how do you shape technology to be less bad?”

REID:

And one of the things that I find very frustrating about most tech critics is they go, “Bad!” And you’re like, “Well, maybe in some ways,” but the question is always a question of sequencing and dynamism: Which things do you fix first? Which things do you improve first? It’s like, “Well, you just said medical agent, and sometimes that medical agent’s going to say something that’s wrong to this person.” And you’re like, “Well, yeah,” but you have to ask, “How often would that person have gotten nothing? How often would that person have gotten it wrong some other way?” and all the rest. So you have to look at it on a systematic basis. The real question is, “How do you shift it?” And when I was thinking through it, I was like, “Oh yeah, and there’s this technology called deepfakes,” which is almost universally labeled, “Bad, bad.”

REID:

And the only positive case that most people can imagine for it now is making a younger Tom Hanks to play in a movie, or doing CGI or something. And all the rest is, “Wow. It’s terrible—evil, evil, evil.” And I was like, “Well, is that true?” And I went, “Well, okay, let’s start experimenting with it, because I understand that this one’s harder.” Because obviously there are a tremendous number of different harms with deepfakes. Everything from financial harms, to misinformation and fake news, to revenge porn, all the rest of the stuff—and they can be really, really bad. And so I was thinking about it, and I said, “Well, let’s just start experimenting.” I mean, literally, the way we kicked it off was, “Let’s just do something and then see.” So, for example, if you watch the first one, it ends with me saying, “Well, I thought I’d hate this more than I do.”

REID:

And then we started iterating, because this is one of the things about always thinking about a positive outcome. So last year, I gave a speech at Perugia that we then had Reid AI give in nine languages, and it was in the service of human connection. And it was amusing to watch myself speak Hindi, Chinese, Japanese, Italian—languages I don’t know. I don’t think I know a single word of Hindi. And so that was an instance of positive use cases. And then it made me start thinking, “All right, the likely thing is we won’t have voicemail anymore. I’ll have Reid AI that essentially answers the phone and does the initial talking, and it’s actually a better interface for that.” And I was like, “Oh, right…” And those will be some of the positive use cases of this. And look, obviously people are going to argue, “Well, wait, there are so many negatives in this one versus the positives, and what the issues are.” And the negatives definitely need to be managed.

REID:

But that’s how I got into it. And part of it is a general principle: part of the thing you have to do as an entrepreneur is recognize that when most people think it’s idiotville and you think you have a good idea, that’s potentially a great idea. That’s the contrarian-and-right part. The trick, of course, is that it’s easy to be contrarian; it’s harder to be right. But that’s the thing to play out, anyway.

ARIA:

One of the really fun things about Reid AI was that we also just used commercially available technologies. We called ElevenLabs in hour one, and we worked with Respeecher and, more recently, HeyGen. So, truly, it was not because we had a technological edge or anything else; it was just, “Let’s try this.” And now I will say—not surprisingly—Reid gets asked to give speeches and go to events probably 20 times a day. And I used to just say no to all of them. And now, especially when they’re in a foreign country, I say, “Oh, Reid’s unavailable, but if you would like Reid AI to give a three-minute speech in French, we can arrange that.” And more often than not, people say yes. So before, they would get nothing, and now they get a little dose of, “Oh, this is so cool,” because I’m using this technology to, again, be more human. You’re not usually having someone from Silicon Valley speak in French. So it’s been really fun to see all the positive use cases.

REID:

Do you have a count top of mind? Because I’ve lost count of the number of conferences.

ARIA:

Oh, I mean, over a hundred. Literally, Reid AI speaks way more than you do. And now—actually, this is the best, no offense—I’ve gotten at least five emails that are like, “Hey, can Reid AI speak?” And I’m like, “Hey, Reid’s unavailable.” And they’re like, “No, I said Reid AI.” And I’m like, “Oh, okay!”

REID:

By the way, totally happy to be replaced with this. 

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Thanasi Dilos, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Jennifer Whiting, Sheila Goodman, Ben Casnocha and the Village Global team, Robert Kingsley, Geri Madlambaya, Samuel Henriques, the Ritz-Carlton team, and, of course, Vincent Lucero.