This transcript is generated with the help of AI and is lightly edited for clarity.
YUVAL NOAH HARARI:
If the AI tycoon himself behaves in an untrustworthy way, and if he himself believes that all the world is just a power struggle, there is no way that he can produce a trustworthy AI. But this is not the whole of reality. I will tell people who believe in the cynical power-hungry worldview: just observe yourself. Do you think that you yourself don’t care at all about the truth and you just want power? Probably not. Now, if you have a better view of yourself, why aren’t you charitable enough to acknowledge that other people, just like you, might also be really interested in knowing the truth? And this is where we can start to build a more trustworthy philosophy, which is the only stable foundation for a good society, and also for benevolent AI.
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know how, together, we can use technology like AI to help us shape the best possible future.
ARIA:
With support from Stripe, we ask technologists, ambitious builders and deep thinkers to help us sketch out the brightest version of the future, and we learn what it’ll take to get there.
REID:
This is Possible.
REID:
It’s easy to forget the shape technology can take. Agriculture, writing, the printing press, the internet. All of these are technologies: tools that people have used to profoundly expand what all humans can do.
ARIA:
Reid, you pointed out in your latest book, Superagency, that AI is yet another tool in this long line of transformative tools. And just like these other technologies, AI can have both beneficial and detrimental effects.
REID:
Right. As we build these systems that are capable of learning, adapting, and even persuading, the question isn’t just what these technologies can do, it’s who they serve, and whether they’re being shaped to amplify our humanity or undermine it. So what will be our story with AI and how do we work to ensure it amplifies our better selves? Today we’re joined by someone who approaches these questions with a long view, and who often illuminates the past to inform where we’re headed in the future.
ARIA:
Yuval Noah Harari is a historian, philosopher, and bestselling author whose books Nexus, Sapiens, Homo Deus, and 21 Lessons for the 21st Century, reveal profound takes on today’s challenges through the lens of centuries of human evolution. Yuval sees AI a little differently than we might, as more of an unprecedented agent than a tool. He warns that AI may leave people with little to do, pose a threat to democracy, and manipulate human belief. But this isn’t prophecy, it’s a possibility. One that humanity may be able to avoid depending on how we build, iterate, and deploy AI.
REID:
Yuval and I agree on a lot, but also diverge in places. We sat down for a vibrant discussion on the rewards, risks, and responsibilities that come with AI. Here’s our conversation with Yuval Noah Harari.
REID:
Yuval, I’ve been looking forward to this for months. It is awesome to welcome you to Possible.
YUVAL NOAH HARARI:
Thank you. It’s good to be here.
REID:
So let’s start with a variant of a question that I frequently use at dinner parties. In an interview with National Geographic, you said the person that you’d most want to talk to from all of human history was the Buddha. And I’m guessing meditation practice might inform this choice. What would be the question you would most want to ask the Buddha?
YUVAL NOAH HARARI:
Hmm. Ooh. It would be about consciousness, about sentience. The nature of consciousness. And you know, I mean this would be to start on a very, very deep note, but my understanding of consciousness—if I had to give a simple definition of consciousness—is that it’s the only thing in the entire universe that has the capacity to suffer. And of course, also to feel joy and love. But neither atoms nor galaxies, nor almost anything in between, can suffer. Consciousness can. This is why it is the central theme of all ethics. And then you ask, what exactly is suffering? And suffering is the rejection of reality. Consciousness is the one thing in the universe which doesn’t just observe or notice what is happening; it also rejects it. And the big question I would ask is: what is it in the universe that can reject reality? If you tried to describe it, say, mathematically, could you write an equation for something that rejects reality?
REID:
I mean, this will get very deep as we get into AI. You might be able to.
YUVAL NOAH HARARI:
It’s a central question for AI, of course. I mean, can AI suffer? Can AI reject reality?
REID:
And what are the conditions under which it would? I’d say current AI, no. What the conditions are under which it gets there is, I think, one of the questions we are now in the process of learning. And, you know, it may take centuries to learn, because who knows exactly where it’ll go. We’ll get into AI shortly, but one of the things I’m curious about is your personal relationship with technology. Like, for example, most people tend to never let their smartphone get too far from them. Right? It’s like, it must be on. And actually, one of the things I think people have to train themselves to do is not to look at their smartphone too often, not to interrupt themselves. So what’s your relationship with technology?
YUVAL NOAH HARARI:
It’s complex. You know, as people say, I try to use it without being used by it. So for many years I just didn’t have a smartphone at all. Now that’s become completely impractical. There are so many healthcare services and other things that require me to have a smartphone. So I have it, but I use it like an old phone from the nineties, basically. I mean, I use it to make phone calls, and sometimes to send text messages, and whatever essential applications the healthcare system, or whoever, forces me to use. And that’s it. I usually don’t carry it with me. Like, it’s not here now. I left it at home. And I’m very aware of the capacity of technology to take over our minds. Shape our desires, our thoughts. And the idea that I am smart enough and strong enough to resist it, that’s not true.
YUVAL NOAH HARARI:
This is why it’s called a smartphone and not a stupid phone, because it’s very smart. And if you think you can outsmart it—I mean, some of the smartest people in the world have been working for years to make sure that this thing can manipulate my mind. And I don’t want to give it too much access. And it doesn’t mean that I don’t use it at all. Again, it is technology. I met my husband 23 years ago on one of the first dating sites for LGBT people. And, you know, for gay people, the internet and all this technology was one of the most amazing things ever. Because the gay community, one of its characteristics is that it’s not just a small minority, it’s a dispersed minority. Like, if you think about, I don’t know, Jews. As a Jewish boy, I was born to a Jewish family. But as a gay boy, I was not born into a gay family. And one of the biggest obstacles for LGBT people throughout history was simply finding one another. And I grew up in the 1980s in a small town in Israel, in a very homophobic environment. I didn’t know anybody who was out. And then the internet came along and suddenly it became easy, amazingly easy, to find each other.
ARIA:
Well actually that really transitions well to my question. You’re talking about the Internet’s ability to give people shared stories, shared narrative. You’re finding out about what other people are doing. And in your excellent book Sapiens, you describe the cognitive revolution as the turning point when Homo sapiens rose to dominance through our unique capacity to create these shared and collective stories—that was 70,000 years ago. In your mind, what are the technologies that have been most pivotal in improving our collective ability to create these stories and shared myths?
YUVAL NOAH HARARI:
So you have the list of the usual suspects, like writing and print. But the key thing to understand is that technology is never just about coming up with some technical invention. It always demands social, and cultural, and psychological work as well. If you think about writing, which was probably the biggest revolution. I mean, after the cognitive revolution. Actually, the technical aspect of writing, of script, is extremely simple. The first writing system we know about was developed a little more than 5,000 years ago in what is today southern Iraq—the Sumerian city-states. And it was basically just, it was based on taking pieces of mud, clay tablets—which are just mud—and taking a piece of wood, a stick, and using the stick to imprint certain signs on the clay tablet. On this piece of mud. That’s it. This was the whole technology in the sense of, again, the physical, the technical aspect of it. But how to create the code, and how to teach people to use the code, this was the really difficult thing. And this is what transformed the world.
REID:
There’s a specific thing that I’ve said about AI that I’m curious to get your reflection on, which is: I have sometimes said that AI is the most significant invention after writing. What would your reflection on that statement be?
YUVAL NOAH HARARI:
I think it’s more significant. In its potential. Writing expanded the abilities of the species that already dominated the planet—Homo sapiens. AI, at least according to certain scenarios, is the rise of a new species that could replace Homo sapiens as the dominant life form—or the dominant, at least, intelligence form—on earth. So, I mean, in 2025, I would say yes, writing is still more important than AI. Especially as AI is really a continuation of writing by other means. But looking to the future, I can imagine a scenario where the rise of AI will be an event on a cosmic scale in a way that writing isn’t. That sometime in the future, entities would look back at the history of the universe and they would say, “Well, you had the beginning of organic life 4 billion years ago. And then you had the beginning of inorganic life with the rise of AI. Writing and all this human stuff, these are just the small details of how organic intelligence eventually gave rise, gave birth, to inorganic intelligence.”
REID:
That’s another very deep subject, along with consciousness, that I think I’m still going to defer a little bit, just because I think it’s worth lingering on in depth. Maybe it’s worth saying that part of my view of this is: what is the probability that, over the next century, AI is a tool versus a species? It’s neither zero percent that it’s a species nor a hundred percent. But one of the things that you’ve written about is how AI is going to transform—even as it is, even as the amplifier of writing—how we work. Even as a tool, it’s a transformation of society.
YUVAL NOAH HARARI:
Absolutely.
REID:
So say a little bit about what you think are the key things that are likely in that transformation. How should we as the tool makers build it? How should society adapt? Does it make some people—I think you’ve referred to them as—a potential useless class?
YUVAL NOAH HARARI:
Yeah, that’s one of the biggest dangers we face.
REID:
Which we want to avoid. So say a little bit about what the dangers are, and what are some of the things to do to try to mitigate against those dangers.
YUVAL NOAH HARARI:
One danger that everybody’s talking about is that a lot of people will not just lose their jobs, but will find themselves almost unemployable. Because even if there are new jobs, you need completely new skills. And who is going to invest the time, the financial resources, in retraining the population again and again? Because this is not a one-time thing, where you have some big revolution and then everybody retrains and that’s it. No, there’ll be a cascade of ever more disruptive revolutions. I would say that—again, from a historical perspective—the main problem is adaptation. Humans are remarkably adaptable beings, but we adapt on an organic timescale. And here we are confronting a revolution on an inorganic or digital timescale, which just moves far, far faster. If we can give humanity time to adapt to the economic and social changes, I think we’ll be okay.
YUVAL NOAH HARARI:
My theory is that we don’t have that time, because the revolution is moving with a rapidity like nothing we’ve seen in history. Most people in places like Silicon Valley, when you raise all these fears about people becoming jobless, and about political and social disruptions, they tell you, “You know, when the Industrial Revolution began in the late 18th and early 19th century, people had all these fears that the steam engine, then the trains, would destroy society and so forth. And look, 200 years later, almost everybody has a much better job than what they did in 1800. Almost everybody has a better quality of life, healthcare, transportation, entertainment. Everything is much better than in 1800. Not that the world is perfect, but if you look at 1800, and you look at, say, 2000 or 2025, the fears were completely unjustified.”
YUVAL NOAH HARARI:
My answer as a historian is that in history, the problem is usually not the destination, it’s the way there. Because on the way from 1800 to 2000, the problem was that the Industrial Revolution upended all the existing economic, social, and political structures. Nobody knew how to build the new industrial societies, because there was no precedent anywhere in human history for how to build an industrial society. And people experimented in different ways. So one big experiment was European imperialism. All the countries that led the Industrial Revolution also engaged in imperial conquests overseas, or sometimes nearby, because the logic was that the only way to build a viable industrial society was to build an empire. And this is what the British did. And this is what even small countries like Belgium did, when they industrialized, and when industrialization spread to other parts of the world.
YUVAL NOAH HARARI:
Another big experiment was 20th-century totalitarianism. Both communists and fascists were very closely linked to the Industrial Revolution. You could not build a communist dictatorship in 17th-century Russia. Impossible. You don’t have totalitarian regimes without trains, electricity, radio—all that. And what Lenin, and Stalin, and Mussolini, and Hitler said is that liberal democracies can’t handle industrial technology. Industry releases such immense powers of creation and destruction that only a totalitarian system can manage them well. So if you look at the real process of change from 1800 to 2000, it was a roller coaster in which hundreds of millions of people paid with immense suffering, and sometimes with their lives, for all these experiments in how to build an industrial society. And humanity got very close to self-destruction with the nuclear bombs, after 1945. If humanity was a student in one of my courses—like a course on how to survive the Industrial Revolution—I would give it a C minus.
YUVAL NOAH HARARI:
So we survived, but we had a couple of very close calls, and we could have done better, really. And my fear is that if this is the analogy—if humanity gets a C minus in the 21st century in how to deal with AI—then billions of people will pay a very, very high price for it. And maybe this time a C minus just isn’t enough. One of the dangers with AI is that we could see—we are already seeing—a resurgence of both imperialism and totalitarianism, but in a new and potentially even more frightening form.
REID:
So, by the way, I completely agree that the technology enables new forms of control, new forms of centralization. Enables different forms, like evolved forms of imperialism, empire, et cetera. That I completely agree with. But this actually gets to the thread that I wanted to raise, which is: when we build technology, I think some technologists are naive, and they think that some technologies are inherently decentralizing and other technologies are centralizing. I think all technology is inherently centralizing, but we can choose to make it decentralizing. And part of where I think the Industrial Revolution—because I’m one of these Silicon Valley technologists who uses this metaphor, and I take all of your points seriously, just to be clear about it—but I also think the Industrial Revolution is what gives the potential ground for a democracy, a middle-class society, and so forth. Because it actually allows for a distributed enfranchisement of things that allow for broad-based education, broad-based wealth and prosperity, and so forth.
REID:
And the Industrial Revolution gives that as a possibility. But what I also think is an interesting thing in looking forward from the Industrial Revolution—including what I now refer to as the cognitive Industrial Revolution, that parallel—is: what are the things that we need to do to learn to be better than a C minus in what we’re doing? Look, I actually think a C minus is a fair grade for the Industrial Revolution. And I do think that one of the things we should do—and this is part of the reason why Santayana and others tell us to engage with history—is to say, “Let’s do it better this time.” What would be some of those guideposts?
YUVAL NOAH HARARI:
First, I agree that the Industrial Revolution also made modern democracy possible. The same way that totalitarianism was impossible in the ancient world on a large scale, the same is true of democracy. The thing is, democracy is a conversation. To have a conversation, people need to be able to communicate in real time. This was impossible under ancient technological conditions, except in a small city or a tribe. So you don’t have any example of a large-scale democracy—millions of people, spread over thousands of kilometers, engaged in a real-time discussion of political choices. You start seeing it happen only in the 19th century. And it’s also true of AI. It can go either way. What do we need to do in order to—okay, so you experiment in different ways to learn how to build society.
YUVAL NOAH HARARI:
How do you avoid dystopia and learn over time to build better societies? The key term, I would say, is self-correcting mechanisms. A good system is a system that has an internal mechanism that allows it to identify and correct its own mistakes. This is a key ingredient of democracies. Elections, free courts, free media. They are all self-correcting mechanisms, enabling the public to identify and correct the mistakes we made before. You start with the assumption that mistakes will be made, and then you bake into the system some mechanism to identify and correct them. And this is also true of biological organisms. The way we survive—all organisms survive—is because of these self-correcting mechanisms. A child learns how to walk not by being instructed by parents and teachers—they can give some encouragement—but basically it’s all self-correction. You get up, you try to walk, you fall down, you learn something from the mistake, you try again, you fall down—eventually you learn to walk.
YUVAL NOAH HARARI:
So how do we maintain self-correcting mechanisms in the 21st century, during the AI revolution? Part of the problem is that self-correction is a relatively slow and cumbersome process. At the pace that AI is developing, one of my fears is that there is just no time for human self-correction. By the time you understand the current AI technology and its impacts on society and politics, it has morphed ten times, and you are faced with a completely different situation. We are still struggling with how to deal with social media algorithms, and how to deal with the fallout from the social media revolution of ten and fifteen years ago. Nobody really understands what is happening right now. Because again, it takes time just to collect the data and figure out what’s happening.
REID:
This actually is an excellent lens, focusing on one of the things that you may hope for as well, but may assign a lower probability to: can we use technologies, specifically AI, to increase the speed of our self-correcting mechanisms?
YUVAL NOAH HARARI:
That’s a key question.
REID:
Yes. That is exactly one of the things that I think about when—part of what I think I’ve found through our discussions is that we actually parse many of the variables the same way, and then we assign some different probability weights. And part of the difference in the probability weights is my sense that, yes, we have this challenge of accelerated speed, and we’re not naturally equipped to deal with that accelerating speed. So what do we do in order to deal with it? And this is part of the reason I say, “Well, wherever technology presents a challenge, also see if you can use technology to present the solution.”
YUVAL NOAH HARARI:
Yeah. That’s, I think, the heart of the disagreement. Right there. My basic problem is, if we don’t know whether the technology is trustworthy, and then we give the task of ascertaining that it is trustworthy to the technology, we seem to be caught in a kind of loop. Because if I can trust the AI tool to make sure that the other AI tool is okay, then everything is good. But how can I trust the verifying AI tool? That, I think, is the key problem. And at the heart of it is also just the question of time, of this collision between humans, who work on organic time, and AIs, which work on inorganic time. We are extremely fast compared to other organisms, but still we are extremely slow when compared to the inorganic time of AIs. In many of my discussions—also with you, with other people who are from Silicon Valley—our understanding of time is different. Like, when somebody from Silicon Valley uses the term “a long time,” I now understand they are thinking like two years.
ARIA:
You’re thinking 70,000. They’re thinking two.
YUVAL NOAH HARARI:
I meet people who tell me, “Look, AI has been around for such a long time and still human society hasn’t collapsed.” No, it hasn’t been around for a long time! It’s been around for nothing!
REID:
You know, I thought this even before you did: that global coordination is not actually possible. But what I think is still possible is to actually have groups—this is one of the reasons why I’m a strong believer in multilateralism and at least partial globalism—which is to form alliances. I don’t think there’s a way that we can get global cooperation to slow down the time clock. Given that the time clock is what it is, how do we help build the self-correcting mechanisms that have the best possibility of giving us that adaptation for the better future? And I don’t think any of us know, but I think we need to start throwing out the ideas for that, so that possibly we can then get it right.
YUVAL NOAH HARARI:
The most important thing we need is to build trust. And this is at different levels—philosophical, practical. One of the reasons that the order of the world is collapsing is that we have a deficit of trust in the world. And it makes us extremely vulnerable to AI. And it also, I think, guarantees a very dangerous kind of AI that will try, and probably succeed, in taking the world from us. Why? Because you can think about AIs as the children of humanity. When you try to educate children, there are the things you say to them, and there are the things they observe you actually do. And your behavior has far more influence on their education than what you tell them to do. So if we tell our AIs, “Don’t be power hungry. Don’t be untrustworthy. Don’t cheat. Don’t lie. Don’t manipulate.”
YUVAL NOAH HARARI:
And then if the AI observes us constantly manipulating, and cheating, and grabbing power, the AI will learn from how we behave. I mean, even if the AI tycoon tells the engineers, “Find some way to engineer into the AI something that will make it trustworthy,” if the AI tycoon himself—or in rare cases herself—behaves in an untrustworthy way, and if he himself believes that all the world is just a power struggle, there is no way that he can produce a trustworthy AI. No way. Now, the good news is that this entire worldview is not just cynical and dangerous, it’s also a mistake. It’s not true that the only reality is power, and that all human interactions are power struggles. Yes, power is an important part of the world. Yes, some human interactions, or some parts of human interactions, are power struggles.
YUVAL NOAH HARARI:
And yes, some institutions—or all institutions—they have problems with corruption. They have problems with people manipulating. But this is not the whole of reality. There are other aspects to human beings. Human beings, all of them—unless they are some extreme psychopath—are genuinely interested in love, in compassion, in truth. This is not some cynical maneuver to gain power. And what I tell people who believe in the cynical power-hungry worldview is: just observe yourself. Is this the way you think about yourself? Do you think that you yourself don’t care at all about the truth and you just want power? Probably not. You probably think you are different. Now, if you have a better view of yourself, why aren’t you charitable enough to acknowledge that other people, just like you, might also be really interested in knowing the truth? Or in having loving relationships with other beings? And this is where we can start to build a more trustworthy philosophy, which is the only stable foundation for a good society, and also for benevolent AIs. This is not a guarantee, but there is at least a chance that if an AI is developed by a society that believes genuinely in the pursuit of truth, in compassionate relations, the AI will also be more trustworthy, and more compassionate, and so forth.
ARIA:
I was going to say, you started real dark there. And I was like, there’s nothing we can do! We’re in this society. And then we ended on a place where your conception, it seems, of humanity is that we are loving beings, and we are compassionate beings.
YUVAL NOAH HARARI:
This is just a reality. It’s not some kind of fantasy I’m projecting.
ARIA:
And like you said, it’s a truth, it’s a fact, that we don’t live in a zero-sum society. That’s just true. And so that’s actually incredibly hopeful. And so if AI is just reflecting back humanity—you started saying that—well, why won’t we end up on the good path? Why won’t AI go down the positive, loving path? Can’t that be the future that we see?
YUVAL NOAH HARARI:
Oh, it can. But if human society—at the time we develop AI—if human society is dominated by cynical, power-hungry people, this guarantees that the AI developed in that society will also be manipulative, and power-hungry, and untrustworthy.
REID:
By the way, here, I think this is in many ways a spectacular conversation. Because when I think about AI learning, I don’t think of it necessarily as learning from the most macro geopolitical point of view. It doesn’t see it, just like the child of two parents living in London might not see it either. But that’s precisely the reason, for example, why I went around and said, “Okay, I see this issue. How do we grow this tool? What is the process by which that tool is going to evolve? Well, it’ll be through these labs, through the technological builders. Let me go try to influence the labs.” Where do you see the parenting level?
YUVAL NOAH HARARI:
The lab is like the womb. It has a lot of influence. But a big part of education also happens after the child leaves the womb. And one thing you cannot do in a laboratory is simulate history, and simulate the world. So I read about all these kinds of simulations, where they try to see how the AI would react in this situation, how the AI would react in that situation, to be aware of dangerous potentials. And sometimes they discover quite scary things in the lab. But they will never be able to witness the scariest things in the lab, because you cannot simulate what happens when billions of people in the real world interact with billions of AI agents. And these are the most dangerous scenarios. And again, you can try to somehow build into the AI all kinds of mechanisms that will make it aligned with human values, and less likely to cheat or to manipulate.
YUVAL NOAH HARARI:
But once that AI is—I mean, for me, a key part of the definition of AI is the ability to learn and change by itself. If a machine is incapable of learning and changing by itself, it’s not an AI. So by definition, no matter how you design the AI in the lab, once it’s in the real world outside, it can learn and change by itself. And it’ll learn, at least in the beginning, from human behavior. So if you have a society dominated by Elon Musk and Donald Trump, it’ll learn from Elon Musk and Donald Trump, because this is what it’ll copy. It’ll not learn, “Oh, I should do all kinds of things for the American public.” No, what it’ll see is that it’s okay to lie if this makes you more powerful. “Ah, I get it. This is how you behave in the world.” And I think the best thing that we can do right now is if some group of engineers and tycoons can come together to create a convincing demonstration. So that you can take a president or a prime minister, and they tell you, “I only have one hour.”
YUVAL NOAH HARARI:
And you put them for one hour in a room, and they interact with an AI, and they come out thinking, “Holy shit, this is the most frightening thing ever.” Then we have a chance to maybe have some global cooperation on this quickly. Part of the problem with AI is that there are very positive scenarios and there are very negative scenarios. I think everybody—almost everybody—would agree there is a dangerous potential. We give it different percentages, but there is a negative potential. It is very difficult for people to understand what the negative potential really is and what the magnitude of the threat is. If we could focus the minds of political leaders especially, around the world, by having this demonstration, this would go a long way towards getting something done quickly.
ARIA:
I think that’s so interesting. You know, recently Bill Gates said that he was giving away the rest of his fortune and closing up shop in 2045. And he said the thing that made him realize that he wanted to start the Gates Foundation and do all of this was that he went to Sub-Saharan Africa, and he went to Soweto. An intellectual person can read about children dying in Africa all day long, but they’re not affected by it until they go and they see it. So I think it’s really interesting that you’re saying we need people to see it. We need people to understand it. And so then what would you have our political leaders do? What would this global cooperation look like in order to create this positive future?
YUVAL NOAH HARARI:
To maybe not stop, but at least slow down, the arms race. As I travel around the world, I talk with many of the leaders of the AI revolution, and almost all of them agree that there is a dangerous potential, and that it would be a good idea to invest more in safety, to take things a bit more slowly, to give humanity more time to adapt and to think it through. But they all say they can’t slow down because they’re afraid of their competitors. And then you have this built-in paradox at the heart of the AI revolution. You have the same people telling you that they can’t trust other humans, but that they think they could trust the AIs they are developing. Because again, when you ask them, “Can you slow down a little?” they say, “No, because we can’t trust the other humans.” When you ask them, “Okay, but do you think you could trust the super intelligent AIs you are developing?” they say, “Yes.” This is insane! I mean, we have many reasons to suspect other humans, but we also have some good reasons to trust them, because we have thousands of years of experience with them. We just don’t have much experience, any experience, with millions of super intelligent AIs running around. They have a very cynical view of the other humans, and they have a very trusting view of the AIs. And this doesn’t make sense.
REID:
I think one of the reasons to defend the perspective that you might be able to make AI more trustworthy than human beings is that, to some degree, you’re creating it. And yes, it learns on its own—I completely agree with the definition of what the AI revolution is, which is that it’s a self-learning machine. But just like with all self-learning, the learning algorithms, the initial learning, the path that you set it on, are actually part of the precondition for where the learning machine’s path goes. If you set the learning machine to be very emphatic about truth, then one of the truths that it will learn is that human beings can operate in a way where they’re operating on power, even when they claim that they’re not, et cetera. And it will learn all those truths as part of this. But if you set it as, “I am on the path of truth seeking”—part of the reason why I maintain a stance of optimism and possibility here is because I think that the seeking of elevation of consciousness—and I’m going to come back to consciousness now, where we started—is that I have the hope, and the aspiration, that that is the truth that you could actually set a learning machine to.
YUVAL NOAH HARARI:
But notice the big difference. I mean, AI is artificial intelligence, not artificial consciousness. I think the truth-seeking part of human beings comes from consciousness, not from intelligence. People are over-fixated on intelligence. And intelligence is no guarantee in this respect. I mean, humans are the most intelligent animals on the planet. They are also the most delusional entities on the planet. A super intelligence is likely to be super delusional as well. Nothing in human history indicates that intelligence is a good antidote to delusion, or that it necessarily puts you on a path towards the truth. I think that the impulse towards truth comes really from consciousness. There is no evidence that AI has consciousness. And I’m agnostic about it. I don’t know, maybe it’ll develop. Maybe as we speak the first conscious AI is being born somewhere. But so far I’ve seen no convincing evidence that it is conscious, that it’s capable of suffering. And one of, I think, the big delusions of places like Silicon Valley is this overemphasis on intelligence, which is partly because there are extremely intelligent people there, whose lives are built on intelligence, and they tend to overvalue intelligence. My historical instinct is that no matter what you do, if a super intelligent AI lacks consciousness, it will not pursue the truth. It’ll very quickly start to pursue other things, which will be shaped by delusions of various types.
REID:
So there are two points there. One, I agree with you: high intelligence can also mean high delusion. But I do think that we might be able to set even just intelligence on more of a truth path. I do think it’s interesting, because I had not considered the possibility that truth-seeking is necessarily rooted in consciousness. I actually think truth-seeking is possible with intelligence.
YUVAL NOAH HARARI:
With only intelligence.
REID:
With only intelligence. And I think that’s a thread for future conversation. One thing that I also have learned in this conversation is the importance of rebuilding trust. For example, one of the things that gives me the same deep concern about the far right, and the far left, and the populists, is that by being anti-trust, by being only about power, by tearing down institutions, you’re tearing down the very society that has gotten us to our most elevated golden age. And that’s somewhere between horrifically negligent and deeply bad. And so rebuilding trust is really important. One of the things I’m actually doing with Lever for Change is a challenge for how we rebuild trust in institutions. Because I think it’s important how we might use technology, inclusive of AI, to possibly try to rebuild trust.
YUVAL NOAH HARARI:
Yeah. I think that this is a very important direction to explore. We see initiatives in different parts of the world. Like the Polis system in Taiwan, using social media tools not to divide the public and spread conspiracy theories and distrust, but just the opposite. Just by tweaking the algorithm, just by telling the algorithm to score what people say in a different way than the usual algorithms on Facebook or Twitter. I mean, most social media algorithms only care about engagement. If something gains a lot of attention, they push it up. And what gets a lot of attention? Outrage. And this is how distrust and hate and anger spread. So this system developed in Taiwan does something slightly different. It scores content.
YUVAL NOAH HARARI:
First of all, it maps people into groups based on what kind of content they usually like. And then it scores content on how many likes it gets from people in a different group. So if you only cater to the people in your group and they give you a lot of likes, you don’t go up. You need to say something, you need to post a video, you need to do something that will get likes from people on the other side. And immediately, very quickly, all the influencers and all the celebrities, and whoever, start saying things that build consensus. Because they realize, by trial and error, that this is the only way to get their content up. And this, of course, has its own downsides. People say, “Oh, it builds conformism,” whatever. But this is just a small example of how, by a very small and seemingly simple engineering tweak, you can turn a technology from something that destroys trust into something that builds trust.
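To make that mechanism concrete, here is a minimal sketch in Python of the scoring idea Yuval describes. The names, group labels, and functions are hypothetical illustrations, not the actual Taiwanese implementation; the point is only the contrast between ranking by raw engagement and ranking by approval that crosses group lines.

```python
# Hypothetical data: which opinion cluster each user belongs to.
# In a real bridging system (e.g. Polis), clusters are inferred
# from voting patterns; here they are assigned by hand.
user_group = {
    "ana": "A", "ben": "A", "abe": "A", "ava": "A",
    "cai": "B", "dee": "B",
    "eli": "C",
}

def engagement_score(likes: list[str]) -> int:
    """The usual approach: raw attention, no matter who it comes from."""
    return len(likes)

def bridging_score(author: str, likes: list[str]) -> int:
    """Count only likes from outside the author's own group, so content
    rises only when it resonates across group lines."""
    own = user_group[author]
    return sum(1 for user in likes if user_group.get(user) != own)

# A post liked only by the author's own group...
partisan = ("ana", ["ben", "abe", "ava"])
# ...versus a post with fewer likes, but from other groups.
bridging = ("ana", ["cai", "eli"])

for author, likes in (partisan, bridging):
    print(engagement_score(likes), bridging_score(author, likes))
# Output:
# 3 0   <- engagement ranking favors the partisan post
# 2 2   <- bridging ranking favors the cross-group post
```

Under engagement ranking, the partisan post wins; under the bridging score, it earns nothing, which is the incentive flip Yuval describes for influencers.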
REID:
Alright, so we will move to rapid fire. Is there a movie, song, or book that fills you with optimism for the future?
YUVAL NOAH HARARI:
Maybe the Netflix series Heartstopper, which is this very simple story about two boys in high school falling in love. And it’s just the most romantic and simple love story ever. There are no complications, no big tragedies. And this was just unimaginable when I grew up in the 1980s. And in the last two or three years, it’s been one of the most popular TV series, at least for a younger audience, in much of the world.
REID:
The demonstration that love is still central to the human soul.
YUVAL NOAH HARARI:
Absolutely.
ARIA:
I love that. So I could ask you questions all day, but what is a question that you wish people asked you more often?
YUVAL NOAH HARARI:
Which institutions do you trust and why? I think people, very often, they like to talk about all the things that go wrong and all the things they don’t trust. And actually we trust most things. We just don’t think about it. We trust the sewage system. We trust the electricity. We go on an airplane—we can then afterwards complain, “Oh, it was late and it was like this,” but we are sitting there in the air, and we usually trust it.
ARIA:
Absolutely. The boring functioning of our governments is pretty good most of the time.
YUVAL NOAH HARARI:
Yeah, most of the time!
REID:
And science. Right. This will be a little—I’ll phrase it the exact way that we normally ask it, but I think it needs a refinement for you—which is where do you see progress or momentum, outside of your industry, that inspires you?
YUVAL NOAH HARARI:
Hmm. What is my industry?
REID:
Yes, exactly. Precisely, you get the modification of the question.
ARIA:
Forget about human history. What inspires you?
YUVAL NOAH HARARI:
The cockroaches are doing wonderfully lately.
REID:
They will survive!
YUVAL NOAH HARARI:
They will survive. They’re very hardy. Very trustworthy. Yes.
REID:
Maybe it’s: where do you see the elements for the possibility of rebuilding trust somewhere in society?
YUVAL NOAH HARARI:
In every human being. Again, I think that when we look at ourselves, we usually have a somewhat more compassionate and charitable view of humanity than when we look at our political opponents, or religious rivals, or whatever. So this is why I think meditation is so powerful. If you really get to know your own mind, you have this inkling that actually this is not just my mind; the minds of other people work more or less the same. This is a source for a lot of hope.
ARIA:
Alright. Well, our final question. Can you leave us with a final thought for what you think is possible if everything breaks humanity’s way in the next 15 years, and what’s the first step to get there?
YUVAL NOAH HARARI:
The way to get there is, first of all, to rebuild trust, both within societies and between societies. And if we can do that, and it’s certainly possible, then we now have the resources to build the best society that ever existed in history. For most of history, people struggled against problems that they just didn’t have the resources to overcome. If you live in a medieval kingdom and the Black Death comes and kills, within a year, between a third and half of the population, you are completely helpless. You don’t have the scientific knowledge. You don’t have the technological and governmental infrastructure to deal with the pandemic. So you can pray, which is what people did, but it was really beyond human capacity. And similarly, every few years you would have these massive famines. Because there would be a flood, or a drought, and the fields wouldn’t produce enough wheat. And it’s too costly to import wheat from halfway around the world. So people starve to death. These were the big problems of people in the Middle Ages. We know how to deal with them. And we are not perfect, but compared to every previous time in history, we are doing much, much better, because we have the resources. And similarly with the new problems we face—whether it’s nuclear war, whether it’s climate change, whether it’s the AI revolution—this is not a natural disaster beyond our capacity to understand and to mitigate. We have the understanding. We have the resources. What we need is only the motivation and the trust.
ARIA:
Fantastic.
REID:
Amazing as always. Yuval, thank you.
YUVAL NOAH HARARI:
Thank you so much.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Michael Zur, Dima Basov, Brooke Ann Shutters, Melis Uslu, and the Burgh House.