This transcript is generated with the help of AI and is lightly edited for clarity.

ARIA:

Hi everyone! So excited for our discussion today. We are going to be responding to and talking about our conversation with Yuval Harari. So if you have not listened to our Yuval Harari episode from last week, go listen to that first. That will make this make a lot more sense. And here we go. Hey, Reid, it is great to be with you here today.

REID:

Always great to be here.

ARIA:

There’s obviously a lot of agreement between you and Yuval about how you see the world, about your care for humanity—you want to make sure to be shooting in the right direction. But there was also plenty of disagreement, both philosophical and also about what exactly is going to happen. So I’d love to hear your reflections. Tell me about the conversation, and where did you disagree?

REID:

Well, Yuval, obviously, is always an amazing delight to talk to, and one of the leading public intellectuals of our time and a very clear-headed thinker. I mean, part of what he tries to do is say, “Look, I’m seeing the world as clearly and without illusion or self-delusion as possible. And I report what I see.” For him, he looks at the exponential, or the massive acceleration—I’m not even sure; the word exponential is frequently overused in this context—of the increase in capability. And he goes, “Well, it’s going to be a new species. We don’t know what the new species is. We know that we as Homo sapiens, when we were a new species, remade the planet. Why shouldn’t we just infer that it’s going to remake the planet? And, boy, I hope there’s a role for me, continuing to be a meditative human being in this mixture.” And I think that it’s actually going to remake lots of things, and remake industries—a cognitive industrial revolution—and remake what the future of human activity is. And I think it’s a much higher probability that this elongates the robustness, the capabilities, of Homo sapiens. It’s like asking, again, in a simple metaphorical sense: is it more likely to be creating new forms of antibiotics that will help against drug-resistant tuberculosis? Or is it more likely to be creating [the] tuberculosis?

REID:

And I think it’s more likely to be creating [the] therapeutics for tuberculosis. So there’s a kind of probability set now. Then when you get into the details, he says, “Well, but one of the problems is it’s a learning machine. And if you’re learning from human beings, look at all this bad stuff we do.” You know, look, we’re cruel. We torture people. We engage in war. We’re lying and self-deceptive and, dah, dah, dah, dah. And isn’t it—just like a child of ours—just going to learn that? Because it isn’t just what we say; it watches what we do.

REID:

And I think obviously there is definitely some truth to that. But also, by the way, just as when we’re teaching children ourselves, even though we have some faults, we actually try to direct children towards our more aspirational selves. The fact that we’re compassionate, the fact that we’re wise. The fact that we have these humanist values, these humanity values. It’s kind of what’s deeply encoded in religions, where religions share a context of how you aspire to being our better selves. And this is the “half full, half empty.” I go, well, actually, in fact, I think we can shape it that way. I think we do that with our children as it is. It’s part of how we try to make progress in society. And I think that’s the thing that we can potentially drive to.

ARIA:

What I also think, to your point, is that a lot of it of course is optimism—let’s figure it out, not complain. Let’s figure out actions, so we can get to a better place. But some of it is just recognition of some of the bad things that actually can happen. Like you said—sort of [the] tuberculosis versus [the] antibiotics for tuberculosis. Often the same people who are gloomers, or saying this is going down a bad route, would probably also tell you that climate change is going to do irreparable harm, or that we’re unprepared for the next pandemic. And so you’re not just saying that AI is great compared to some utopia; you’re saying AI is great compared to the world we have now. Because we have real challenges! And AI has to be here to help solve them. So a lot of people are concerned; it’s like, “Oh, you have these five AI labs who are just deciding the future of the world.” First of all, how do we make sure that they have the right values? Is there anything we can do to affect what they’re doing? And then also, who else needs to be involved in this? Who else needs to be a part of this decision-making?

REID:

Well, a couple things. One is—and this causes existential risk people some heartburn—it’s probably five labs heading to 15, heading to 30. Or at least ten to 15…

ARIA:

And why is that good? For people who get heartburn—why do you think that’s actually a great outcome for the AI space?

REID:

Well, it’s a complicating outcome for the AI space. One of the things I tend to find is many critics really yelling something like, “Well, I should be in charge, and it should be me.” And it’s like, “Well, not clear,” right? Even though you go, “Well, I don’t want just the large tech companies, and just the heads of these labs, to be in charge.” And you’re like, “Well, but some people will be. There will be a limit.” There will be a limited set of people who will be in charge. And then you go, given that there will be a limited set, which limited set is reasonable? And it’s like, well, folks who are investing in and building this stuff, who have the capabilities of doing it, who are putting their own lives, missions, and economics into this, and who are held accountable by various networks of accountability. Which includes critics, includes press, includes government, includes customers, includes shareholders, includes family and community members, includes teammates—these many different things.

REID:

And we try to make those channels of accountability as healthy and as inclusive as possible. Like the people who say, “It’s only these five, and what gives them the right?” It’s like, well, actually, in fact, it’s a growing number. So if your particular concern is, “What do we think about these five?” it’s like, well, it’s not only going to be these five. And a little bit of that is the gesture of the people who—blindly or for their own reasons—get on the antitrust horse. And so they tend to go, “Our really important thing is to limit these five.” And it’s like, well, it’s only really important to limit these five if you think “the laws of physics” say it’s only these five.

REID:

And it’s actually not the case. Because since I’ve been doing the Western democracy thing, there have both been new entrants on the Western democracy side, and there are at least three to five that we’re tracking in China. And it’s probably actually ten to 20 in China. So you’ve got this swarm of it, and you get a larger number of people. And inasmuch as it’s people with different points of view playing for this, that gestures that there are a number of people working on this. Now, this is not a monopoly question. But this gives other people heartburn. Because they go, “Well, actually, in fact, these ten to 20 players are going to compete. So as opposed to getting to our highest-virtue selves, because of competition, and because of divergence between people, we’re going to have, for example, AI weapons created. And why won’t someone of this much larger set, either deliberately or accidentally, be targeting, creating, the equivalent of The Terminator?” So I think that the question is, again: look, we should presume that the fundamental thing about human beings is that we divide into groups. We have different perspectives about what the risks are, about what the important things to achieve are. And we compete with each other. And part of our competition is manifesting those values. The thing that we must do, as thoughtful human beings who are trying to be great for humanity, is to say, “Alright, how do we help those efforts that care about the same perspective of being compassionate towards human beings, of elevating our better selves, of having a human future? That is, the continued evolution that we’ve had over centuries and millennia. On a very broad brush, to continue to do that. The virtues of wisdom, and empathy, and other kinds of things. And how do we do that?” And the answer is to make those projects that are deeply trying to embed those values in the AI systems they’re creating, and in the products that they’re deploying, the more accelerated efforts. And, as you know, that’s of course what I’ve been doing. Look, there is a possibility that we’re creating creatures, we’re creating new entities, and that’s mind-bending. But it doesn’t mean it’s a certainty.

REID:

And it doesn’t mean that—because that’s a possibility—we should steer as if it were a certainty. We actually have to be iterating and seeing, because it’s completely possible that X years from now we’ll say, “Oh, this is just like when we were talking about this with AI in the eighties.” And yes, this is a much better technology. Yes, it’s achieved so much more. Yes, it’s on a better curve. But just like when we were talking about that and we said, “Okay, that will completely change,” we may very easily discover—in as few as a few years, and maybe even five to ten—that, in fact, these aren’t really entities. They’re actually tools in the following way. And they have the following shape, and this is the way they’re integrated in. And so it’s a discovery of what’s possible, because not everything is possible at every moment.

ARIA:

And another thing I appreciated about the conversation, and this is probably true of any conversation where you and Yuval are in the same room, is that we get to talk about more philosophy. More philosophical questions about what it means to be human. And he had an argument that was focused on the critical distinction between intelligence and consciousness. He said that while AI might become vastly more intelligent than humans, intelligence alone doesn’t guarantee a pursuit of truth or even the capacity to reject falsehood. And I know truth-seeking is something you talk about a lot. It’s like, how do we get to this place? How do we teach truth-seeking behavior? What are the ways that we can model that world? And so I’ll ask you: is consciousness a necessary foundation for this truth-seeking? And if so, can this non-conscious AI ever really be truly aligned with our deepest values? Or are we missing something fundamental in asking them to define truth when they don’t have the consciousness piece? Or at least not yet?

REID:

You know, I haven’t had a chance to have this conversation with Yuval. So if Yuval’s listening to this, this is the next move in the conversation. What I’ve realized is that it’s helpful to think about, well, what kinds of truth-seeking are necessary from conscious entities, and how does that illuminate what we think about consciousness? And what kinds of truth-seeking are doable without consciousness, with just intelligence or fitness functions? Because you clearly have a set of truth functions where consciousness is not necessary. For example, you can train lots of different systems—even, you know, less sophisticated than AI systems—to actually run a truth-seeking process. I mean, heck, when we have deep research—in ChatGPT, and Copilot, and Gemini, and Claude, and others—it actually has, “Go cross-check your work,” right?

REID:

Like, “Go pull out documents and cross-check your thing. And then when two things disagree, go look for other information. And privilege these kinds of sources of information.” It’s just the same kind of cross-checking we do in our group truth-seeking, whether it’s science, or judicial processes, or academic work, or journalism, or everything else. It’s that process we use to do truth-seeking. Clearly, you can do a lot of that with just intelligence. So then you get to this interesting question: obviously, if you had an enlightened being that said, “I am in touch with how difficult suffering is, and I view the important thing as being to reduce suffering across more things than just me, and to have quality sentient life doing that,” then that’s a very good thing.

REID:

Is consciousness necessary for some component of that thing? Is the fact that we experience reflection, meditation, potential empathy, compassion, sympathy with each other, through the recognition of that consciousness, an essential component? And if that’s an essential component, can we keep it essential in various ways for how we operate what the future of the world is? I think that’s an ongoing discovery question as we get there, and I’m still thinking about it. Now, I would say, as a final closing to this, that we clearly can demonstrate better and better alignment with human values. One of the things—again, if I were to make the argument out of evidence for optimism versus just an “I’m hoping for the best”—is that when you look at the evolution of the OpenAI systems, of GPT-2 and -3 and -4 and -4.5, as they get more sophisticated, they much more naturally and easily align with a set of human considerations.

REID:

They much more naturally understand what potentially our better selves are. What those things are. And they actually do show better ability to go, “I have some level of understanding—comprehension—of what the human goal set is on the aspirational side. I’ve learned and been trained to preference, ‘Oh, you’re trying to figure out how to write poetry,’ or, ‘You’re trying to figure out how to have a productive conversation with your friend, your child, your spouse,’ and I can help with that. And then when you say, ‘Hey, I’d like you to make a bomb,’ I say, ‘No, I’m not going to help you with that.’” Right? And it naturally aligns better in those ways. It doesn’t mean there isn’t a ton of work, doesn’t mean there isn’t risk, but that’s a positive, in that we’re getting that alignment. Now, I think it’s only the very extreme, crazy fringe that thinks these systems are conscious today. And there are people who think that they are conscious today. I think there’s a whole bunch of good arguments as to why they’re not. And will they ever be? That’s one more mind-bending question.

ARIA:

Well, I cannot wait for our next conversation with Yuval to tease out a lot more of these ideas. Reid, thank you so much.

REID:

Likewise.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.