This transcript is generated with the help of AI and is lightly edited for clarity.

REID:

I am Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:

With support from Stripe, we typically ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.

REID:

This is Possible.

ARIA:

Alright, Reid, so lovely to be here with you. So recently Tobi Lütke, the CEO of Shopify, put out a memo that was covered by everyone because it had to do with internal employees at Shopify and what they were expected to do with AI. It told them that if you’re going to request more resources for your team, you’d better first check whether AI could do the job better and faster, so that you don’t actually need that additional headcount. It also said everyone there should be expected to use AI every day. And he said, “I’m the CEO, I’m no different. We want to grow by 20, 30, 40% a year. Every employee needs to grow as well.” I think some people were shocked by this memo. Other people found it reasonable. What did you think about the contents of the memo, and also about Tobi putting out this bold statement for the industry?

REID:

You know, I found Tobi’s memo to be exactly right. It’s the kind of leadership that Tobi practices. It also reflects his thinking as a classic technologist, because he’s obviously an engineer: how we use tools, AI as amplification intelligence, how we get superagency by doing this. And his memo, I think, is exactly the kind of thing that everybody, not just technology companies, should be doing. Every single CEO of anything from a five-to-seven-person company to a company of tens of thousands should look at that and say, “What’s my version of how I should do that and how I should integrate it?” And Tobi has obviously given it enough thought to say, “Look, here are some key checkpoints that work within companies.” Which is: if you’re going to ask for more resources, make sure you’re asking for them in a context of, “Here is how I’m already using AI, and here are the reasons I need more resources given how I’m using AI”—either what AI lacks, or the AI opportunities that come from using it.

REID:

So one of the things that I’ve been telling my portfolio companies is to actually have weekly, monthly check-ins where everyone has to bring a, “And here is the new thing I’ve learned about how to use AI to help me do my job, help us do our job, help us perform better in our mission as a company.” Because the answer is, if you haven’t found something that was useful to you, useful to your group, useful to your company, you haven’t tried hard enough. I think Tobi’s memo is the kind of thing that CEOs and all group leaders should be looking at and saying, “Great, how do I build on that? Thank you for the open-source management technique. What are the things that I should do specifically for our group, for our company, for our mission, for our culture? What is our version of that?” And then start iterating in the same way.

ARIA:

I mean, I have to admit, last week I saw a marketing agency post on LinkedIn. They said, “We promise our clients that you’ll never get an image that started in an AI image generator. We promise you’re never going to get a tagline from us that we used ChatGPT to create.” And I literally had to look at the posting date because I thought it was an April Fools’ joke, and it wasn’t. And I get the nervousness, and being scared about your job and about the future, but I just couldn’t imagine that this marketing agency was essentially doing the exact opposite of Shopify and essentially banning AI in their workplace. I’m sure it befuddles you as much as it befuddles me.

REID:

Well, I mean, I think generally speaking, that’s similar to the idiocy in the education space of saying, “Our students shouldn’t use ChatGPT.” Because the whole answer is you’re preparing them for the future. You’re preparing them for being citizens. For being workers. For being people who are navigating life. And here is this fundamental tool. It’s kind of like saying, “Hey, none of our people can use anything that uses electricity. And that’s how they learn. They have to use pencils and papers and no electricity whatsoever! In anything!” You’re like, “Well, that’s idiotic.” Well, it’s similar with ChatGPT. And so for that marketing agency, the question is really when it’s going to have to shift. Or it’s probably going to die, or become a very esoteric boutique.

ARIA:

Right? Absolutely. If you want to be the most boutique agency, perhaps that’s the way to go. Well, another concern people have with AI is misinformation, disinformation. All of this synthetic media being created. And actually last week—and this wasn’t created by AI—there was a tweet that created $8 trillion worth of market volatility, because someone tweeted that the tariffs were off when they in fact were not. And so if a single tweet can move the market by $8 trillion, what does this mean for the future when disinformation and misinformation are increasing? And perhaps with algorithmic trading and AI able to do this in greater quantities and at greater speeds, how do we protect against that for the future?

REID:

There’s a combination of a free market response, which I think is partially correct, and a societal response, which is also partially correct. And that’s the balance that makes this challenging. So the free market response is to simply say, “Well, if people who are doing trading are going to be idiots and not track false posts, then they’re going to lose money and eventually they will be disempowered.” And so what you principally need to do is just make sure that there are validated sources of information—that kind of are the anchors—and then to increase that validation, accuracy, and availability, and then allow the market to sort it out. And that’s a partial answer. And my principal thought there is we should not be trying to restrict technology as much as we should be trying to shape technology.

REID:

Because the question isn’t, “Let’s not have algorithmic trading.” It’s like, okay, that’s kind of foolish. It’s: let’s have algorithmic trading work in the following way: generating the following reports; making sure it’s involving the following kinds of data; only being deployable by entities that have a method by which they participate in the market in a way that is healthy, not creating crazy volatility swings that damage society. It’s a little bit similar to saying, “Hey, you know, car manufacturers don’t want to manufacture seatbelts. Drivers don’t want to wear seatbelts.” But actually, in fact, because the cost to society, and the healthcare system, and everything else is so high, you could say, “Hey, as a free market, you should decide whether or not you’re going to take the risk that you’re going to die.” It’s like, “No, no, no. Actually, in fact, there are so many injuries and so many costs here, and the cost of requiring you to wear a seatbelt is very low. So let’s do that.” And what are the seatbelt parallels for making the overall system work? That, I think, is an ongoing and thoughtful question that banks, and regulators, and intellectuals, and economists should think about. What are those minimal ways of shaping technology, or technology additions, that keep the cost of transactions down? And the cost of not having an overly centralized system? And the benefits of the whole free market and broad network working, while navigating the fact that we live in a more volatile space now?

ARIA:

On a lighter note, if there are any parents out there who are navigating this, I just read the book [Escape from Mr.] Lemoncello’s Library with my nine- and seven-year-olds, and a main plot point is a fake Wikipedia post that leads to ruining someone’s reputation, and the kids don’t believe it. So anyway, try that out if you’re looking to teach your kids about misinformation on the internet. But actually, moving on to another thing that people think is childlike and play: one of the fun things about our conversation with Demis Hassabis last week was we talked about games. And it was so clear that Demis grew up playing chess, and games were so important to him, both in terms of his scientific research, but also in the progression of AI. Whether it was AlphaGo or the famous IBM [Deep Blue] chess competition. And so when you think about the future as AI is more enmeshed in our daily lives, will that give humans the opportunity to play more? Are we going to be playing with AI? Are we going to be interacting with it solo, with teams, as a game? How do you see that connection between games and our AI future?

REID:

There’s a fun book—which Demis also knows about—Homo Ludens, which says we are not just sapiens, we are game players, obviously. I have my version of this, Homo techne, because I think part of games is technologies: the technologies that enable different kinds of gameplay. But games are a way we think. And as you know, I tend to approach most of my strategic thinking through the lens of games. So it’s like, with a startup, what’s your theory of the game? With a project, what’s your theory of the game? With creating a book—Superagency—what’s your theory of the game? Because game playing brings tactics, and strategies, and transformation—like large language transformers—together. It also has a notion of increasing learning and competence, because “how well are you playing the game?”, “what are the conceptual tools you’re bringing to it?”, et cetera. So games [are] a way that we operate across, call it, intelligent experience. Like, “Is species X intelligent?” “How do they play games?” is in fact directly correlated to that. That’s one of the reasons why we know that other kinds of mammals and other creatures have intelligence. Because we see dolphins playing games. We see chimps playing games. We play games with our dogs, and we play games with our cats. And that initiating of gameplay, and everything else, is part of how that tends to operate. We don’t just play games solo. We don’t just play solitaire. We don’t just play games one-on-one. We play games as teams—sports games and all else. And that’s part of how you model what companies do.

REID:

And when it gets to this kind of superagency future of saying, “Well, how is it we’re deploying?” It’s like, well, when I deploy now in work—and this is kind of the Tobi Lütke memo—I should deploy with agents. I should deploy with these tools. And, by the way, we as teams should deploy with these tools. We as companies should deploy with these tools. We as individual scientists, as groups of scientists, should deploy with these tools. And that’s the pattern that we’re on. And the model of games is a good way for us to be thinking about it. But it’s also a good way for thinking about how we construct these devices, and how we interact with them. Part of the original—the very first genius moment—that Demis, Shane, and Mustafa brought to scalable AI was realizing that here is a way you can apply scalable compute and learning systems to creating amazing cognitive systems. As opposed to us programming the AI, the AI learns. And it learns at scale because you can use self-play as a way of doing it.

REID:

Seeing this genius moment of theirs was part of what got me back into AI from my undergraduate days, where I had concluded that the mindset of hand-programming AI would actually not work. I hadn’t yet gotten to the question of: what are the scalable-compute learning systems? Because back then, by the way, a single computer was super expensive. Let alone creating a server farm of a hundred thousand of them working in concert, and all the rest. And by the way, the computers back then were less powerful than the smartphone that’s in your pocket.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.