This transcript is generated with the help of AI and is lightly edited for clarity.

REID:

I am Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:

With support from Stripe, we typically ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.

REID:

This is Possible.

ARIA:

So Reid, there have been a lot of headlines lately on antitrust, and a federal judge ruled that Google had illegally monopolized specific online advertising markets. The DOJ is seeking the divestiture of Google Ad Manager and other assets to restore competition. No surprise, Google is fighting back. So, happy to hear your thoughts on this specific case, but also more broadly, how will recent rulings reshape the digital landscape or influence the future of innovation and competition in tech?

REID:

So, not surprising—well, actually maybe surprising to a number of people—since I’m on the Microsoft Board and obviously have to have a certain care in how I talk about antitrust competitors with Microsoft, et cetera, even though my actual thing is I’m a venture capitalist. I’m on the side of scale tech. I’m on the side of building new things. And so that’s where my interest in society, my interest in economics, my interest in how do we create a better world, most align with scale tech, so startups going to scale—and so blocking out the possibility of scale is one of the places where I think antitrust legislation can be very good and important. Now, that being said, on the Google-specific case, there’s a couple of notes. So one, I think probably the most robust note is, you know, where high economics are being used to buy exclusive channels, that’s probably a pretty good sign that something is being done to lock in a monopoly or build a monopoly.

REID:

The challenge in a lot of these monopoly cases is: What do you take as the size of the market? For example, do you take the size of the market as the general search market? In which case you go, “Okay, Google is massively dominant.” Or do you take it as digital advertising? Because if you do digital advertising, you include Meta, you include a whole bunch of other things. And so, you know, part of the reason why there are smart people arguing on both sides is you get to an artifact of, “Well, what is the comparable market to determine this?” I tend to look at it as—well, where is it stopping potential scale-up competition? And by the way, it’s not all scale-up competition, because you say, “Hey, I would like to start a new desktop search company using the same techniques that Google did to build it.” It’s not clear that it’s a benefit to society to try to squash Google enough to allow random startups, or any large tech company, to come in and compete. Now, the last comment is one that I think is maybe the most unpopular, but I think it’s important to track, which is, as we move to more of a multipolar world—and the classic thing is, for example, a U.S. tech industry, a Chinese tech industry, you know, TikTok, et cetera—and we say, “Well, only the U.S. is going to be doing monopolist remedies and the others aren’t,” you have to track this within national competition. And so part of the question is to say, “Is this part of a global resorting for competition?”

REID:

And that’s actually, in fact, extremely important. Because while we definitely want the next generation of companies coming out of Silicon Valley—where, here I am right now, talking about this—on the other hand, of course, if you say, “Well, the scale ones that have the scale benefit, we’re limiting ours. But we’re not limiting China’s. We’re not limiting other prospective ones,” that could be damaging, not just to obviously American industry, and American prosperity, and American society, but also, of course, damaging in terms of the balancing of the world. So you have to also pay attention to that, and I think that’s one of the things that is too often not included in the considerations in these cases. Actually, I’ll say one other thing, which is there’s always some politics in this, even though the Google case went across different administrations.

ARIA:

Yeah, started under Trump.

REID:

Exactly. Started under Trump. Continued under Biden. Returned under Trump.

ARIA:

Yep.

REID:

So, but there are always some political considerations that are not necessarily Red versus Blue. There’s also the, “Do I look like I have a win? Because I was fighting like I’m the anti-monopoly division,” et cetera. So you always have to pay some attention to this. And so you tend to go after the targets that have more of a populist or press bent. And the most obvious one for me—like, if I was force-ranking all the ones that I would consider—would actually in fact be the Apple App Store, right? Because you’re like, “Okay, this is hugely locked in. You’re not allowed to have other app stores. You’re not allowed—like, it’s very highly controlled.” Right? And there are some arguments around, “Hey, we gotta maintain enough security and so forth.” But, by the way, that’s also part of how all of these things are.

ARIA:

But they’re also taking a cut of every transaction. And it makes using Amazon mobile apps sort of unusable because you have to go off-app to buy certain things. And so it’s actually not good for consumers.

REID:

Yeah. Well, and also, for example, one tell that I started with on Google is to say, “Hey, you’re spending massive amounts of money to lock in an effectively exclusive position.” Well, that’s actually a tell. But another tell is you’re charging 30% right off the top to everybody and making a whole bunch of money from that. And it’s like, okay, well that’s another tell. So those are the kind of tells where you say, well, those are things that you should examine carefully, look at, and potentially consider doing remedies on. So, I guess the overall thing is I think it’s good to do these things to enable scale competition, but we don’t want to lose the gems we have as an American society. For example, probably most Americans don’t realize that these companies—Google, Apple, et cetera—get over half their revenue externally.

ARIA:

Outside the U.S.

REID:

Yes. They’re among our massive trade benefit companies. And you’re like, “Okay, that’s important to us as a society.” It doesn’t mean we shouldn’t do antitrust things, but it means that it’s not just “hit hard with hammer” as an approach. It’s: be careful about maintaining the vigorous strength of the American tech industry, which matters to our prosperity.

ARIA:

Right, probably not the best approach. And to your point, geopolitics does matter, and so we can take that into account when we’re doing these things, but there are certain tells for a monopoly position. And so, the Chinese market continues to embrace AI as an accelerator, and Chinese tech companies are growing their global footprints. We’ve seen that BYD, which is China’s leading electric vehicle manufacturer, is rapidly expanding its footprint in Europe, especially as Tesla really tumbles. And so, especially with this increasing tariff regime, there are going to be some real problems with U.S. auto manufacturers selling to Europe, which could mean that China gets a greater foothold there. And so what do you think that most people are still getting wrong in how they think about competition with China?

REID:

Well, there’s a couple things in terms of competition. So for example, already implicit in what you said, one of the really damaging things to American prosperity and American quality of life—both in purchasing of things, and in jobs, and everything else—is that trade partners actually, in fact, matter. And so it’s part of the reason why there have been different trading blocs in the EU and in NAFTA. And those trade blocs actually matter, because being part of them gives the people who are in them advantages and edges against the people who are not. And so when you say, “Hey, I’m just going to go apply tariffs to everyone,” and then the absurd thing is, “I’m going to apply tariffs to islands that have no people on them, and penguins only.” But, you know, I mean that’s just the incompetence part of the whole clown show.

REID:

But when you start doing that, you’re going, “Okay, I’m going to declare trade war on everybody.” Whereas what you’d want to be doing is saying, “Hey, I’m getting closer to my partners and allies, and I’m competing with the people that I’m competing with.” And so by saying, “Hey, we’re going to declare trade aggression with Canada, trade aggression with Europe,” the natural thing for Canada and Europe to do is say, “Great, we’ll go trade with China, thank you very much.” And your so-called “compete with China” policy is literally a gift to China. And by the way, the BYD product is very good. And so I think this is something that is highly harmful to American society, from the general prosperity of our society, to the functioning of our industries, to the prices and engagement of consumers. And I don’t just mean consumers in wealthy cities. I mean across the entire country. And so this is the thing that is, kind of, call it, most obviously wrong about thinking of competition with China.

ARIA:

I think another interesting thing you said was that the BYD product is very good. And I think some people’s conception of the world is still China 10 to 15 years ago, where it’s like, “Oh, we’re going to flood the market with cheap knockoff Chinese goods.” Well, no, they’re doing advanced manufacturing. This isn’t just, you know, low-level t-shirts.

REID:

Yeah. We have to think about the prosperity of our society. We’re like, “We want to return to manufacturing.” That’s great. By the way, China currently has the best—well, one of the best, in a vector, a massively important scale vector—one of the best manufacturing capabilities, societies, cultures in the world. When I go to Shenzhen—or have gone to Shenzhen—it’s the only place where I’ve gotten the experience that someone coming to Silicon Valley must feel: “Holy shit! I am seeing part of the future,” in terms of speed, and how people are operating. It’s just that it’s manufacturing there. And I think that the thing that we don’t realize is even though they have this advantage, they’re going full on AI, robotics, manufacturing. And by the way, that’s what we should be doing too.

REID:

Those will be the new manufacturing jobs of the future. You know, they will be the ones working in robotic factories. And that’s, I think, really, really key. The Chinese know that. Even though they have an edge with all their human labor right now, they’re building—like BYD specifically is intensely robotizing its manufacturing. Because not only is it going to have a high quality product, it’s going to be able to produce it at half the cost of any other competitor. That reduction of cost is not because of—what we’re gonna claim is—unfair competitive practices. It’s actually, “Well, we’re just smarter about how we build it.”

ARIA:

Switching gears a little bit to what I’ll call the cost of being polite to AI: So, OpenAI CEO Sam Altman recently admitted that the addition of words and phrases like “please” and “thank you” in users’ interactions with ChatGPT comes at a real cost. He tweeted that these pleasantries contribute to tens of millions of dollars in electricity costs for OpenAI each year. And so, there’s one thing about the dollar costs, but a lot of people are really concerned about the environmental implications in particular. Altman called expenditures like this “tens of millions of dollars well spent.” So, question for you, do you think it matters, philosophically, how polite we are when we communicate with AI?

REID:

Well, yes, but maybe not for the reasons that people might reflexively think, which is more about, like, when we’re interacting with AI, it also evolves us. It also evolves how we behave with ourselves, with other people, not just with devices. It’s actually one of the things I was always worried about: how the initial Alexa home applications were creating bad training for children, or even adults who are not paying attention. Like, “Stop!” Or, “Aria, stop! Stop question!” You know, “Aria, do this!” That’s like, what? That’s not the way we should be interacting with each other. It’s not the way we should be thinking about it. We should be generally more civil. You know, politeness is actually, I think, a good thing, and that’s worth it. Let alone the question around, “Well, what outputs do we get?”

REID:

Because by the way, people who are deep students of prompting—part of why I released the earlier book, Impromptu—people notice they actually get different responses from “please” and “thank you” and so forth—because in part this is generalized from a trillion plus words of human communication. And in your prompting, when you say “please” or not, you’re telling it a little bit about what kind of interaction you want from it, what you’re having with it, et cetera. And so it’s actually a useful part of the prompt too. Now, I think part of what Sam’s talking about here is if you go, “Well, it was such a great conversation. Thank you so much and I really appreciated it, and da-da-da,” and you’re not actually building to something else, then that’s good for you—as per my earlier comments—and I think it’s a good pattern to be in. But, on the other hand, you’re not getting anything out of it, and the electricity is being spent. It’s like leaving the light on for an hour because maybe you’re going to walk into the room. And so you could decide to be a little bit less cautious there, but I would err on the side of politeness.

ARIA:

I feel like AI might be the new waiter test. It used to be when you went on a date with someone and they were rude to the waiter, that was the ultimate red flag. And so the new red flag will be like, “Ah, I really like them, but they were so rude to their AI, I couldn’t get past it.” So maybe that will be part of modern dating; we’ll see. But you mentioned something, how saying “please” and “thank you” actually could be a good form of prompting—you’re going to get something better. So sort of analogous to that, do you think that OpenAI, Anthropic, Pi, et cetera, should they be training people on how to prompt better themselves? It’s a big conversation, of course—how do you get the best prompts?—but should the frontier models be doing that for their users?

REID:

I mean, it’s always helpful to do it. I do think it’s one of those things—learning how to prompt well is really key. I do think it’s really critical for people to be learning how to do this prompting in good ways. Because part of the whole AI amplifying humanity—the amplification of intelligence—is the theory that by us bringing something fun and interesting and unique, and perspective, and creativity, and adapting to using these tools at the table, we are in fact much stronger together than being replaced in the work. And this is an area of active debate within both the general work community and the tech community—well, where over time will that line of transformation versus replacement be? And I don’t know. No one really knows. The claim that you know that for sure, other than there will be some replacement, is foolish.

REID:

Because we know there’ll be some replacement. For example, customer service jobs, with Sierra and others doing this. But, for example, a hot debate: software engineers. Will software engineers be amplified—which I myself think is actually in fact more likely the case—or will they be replaced? Because we’re definitely getting higher quality with the chain-of-thought models and all the rest of the things that could lead to better coding. Everyone’s working on coding assistants. And I think what we want is the maximum probability chance that all the jobs that people want to do, or have any affinity for doing, are transformation jobs, not replacement jobs. Now, part of that is how we’re building the technology. I’m not actually an advocate of limiting the power and scope of the technology.

REID:

Because you go, “Yeah, fine, make the cars really slow so humans can outrun them.” It’s like, no, that doesn’t really work as a strategy. But the nudges to say, “Hey, if A and B are both performant systems, but A allows a much better partnering with human capability to get a much better output”—that’s good for the individual, good for society, et cetera. And that gets all the way back to your question on, well, people have got to be learning to use the devices better. You’ve got to be learning to prompt better. And it’s one of the reasons why I love Ethan Mollick’s work. It’s part of the reason why I myself, when I was on Dax Shepard’s podcast, Armchair Expert, was like, “Okay, let me pull out my phone and let me show you how to prompt the phone to start doing this in ways that are useful to you.” Because that’s actually part of how we all shape the future more collectively.

ARIA:

Well, what would you say, though—just last week, the news from Dario, the CEO of Anthropic, was that they’d be rolling out AI coworkers as soon as 2026. You think that’s an exaggeration? You think they’re going to be doing parts of roles? Or do you think that’s real—that we’re going to have AI coworkers coming as soon as, you know, a year from now?

REID:

Well, it depends. “Coworker” can mean a lot of different things. So, in a sense, with a fuzzy definition of coworker—

ARIA:

It could be a co-pilot. Yeah.

REID:

Yes, exactly. Like we have copilots today. Right? 

ARIA:

Right. 

REID:

Now, what he means, of course, is that—and not a surprise—copilots will continue to improve. And what I think he also means is, unlike the “Hey, I’m helping guide each step,” I might be able, because it’s just like chain-of-thought thinking with the o1 models and others, to send it out on a task, a set of work, and it comes back with all the work done. Right? Like it guides itself through, it changes plans some, et cetera. And that’s obviously what we see being developed already. But again, you say, “Well, that’s a coworker that’s more robust than the current co-pilots.” But it’s also not a coworker in the, “Hey Aria, I’ve got this really cool project. Hey, I’ll talk to you in two or three weeks about it,” sense. You know, and you’re off assembling the resources, doing all this stuff. So this is a whole continuum. So will we be advancing along the continuum? Guaranteed.

ARIA:

Absolutely.

REID:

Now, one of the things that gets to is there’s a lot of dispute around—part of what AGI kind of means intuitively is: Is it a capable independent agent in the way that a human being is a capable independent agent? It has context awareness. Can change its goal sets. Can remake plans and triage based on new data. Can defend itself. It’s like, “Aria, no, I asked you to work on this book thing and you came back with this really interesting art project. Why is that?” It’s like, “Well, no, no, this is the reason why—when we were talking about it, like, what we were trying to accomplish—this suddenly turned into the really interesting thing.”

ARIA:

I changed my mind. I didn’t just listen to you. Based on my own goals, and what I know about what’s going on. Right. You have some agency. Yeah.

REID:

Yes. And so the question is: Where are we heading towards that? And the answer is more, but how much more? And I tend to think that there are two theories of what the next, call it, 5, 10, 20 years of agents will look like. And 20 is impossibly long in these things; most people are trying to talk two years. But it’s like, well, is it a progressing set of savants, where it does amazing, amazing things, but part of the reason why you stay close to it is because occasionally it fucks up in stunning ways where, literally, if a human did it, you’d be like, “What were you thinking?”

ARIA:

Right.

REID:

Yes. Like, what happened?

ARIA:

I mean, as you often say, predicting the future—even a year or two out right now—with AI is pretty impossible.

REID:

Which, by the way, just for everyone else, a lot of people’s natural response is to go, “Oh shit, you can’t predict a year out. That’s terrifying.” Hence, a la Superagency, you know, Doomer, Gloomer. But actually it’s super interesting, and we can help shape it. That’s what’s great. Like, do, you know, navigate the risk concerns. But it’s exciting as well.

ARIA:

Absolutely.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.