This transcript is generated with the help of AI and is lightly edited for clarity.

///

REID:
We’re in a constant state of cyberwarfare, and AI’s superpowers advance that game. If human beings are potentially being judged in a court context that AI is involved with, what does that mean for our rights as human beings? If you think, “oh, just because I’m building technology, I’m the expert on what humanity should evolve to or what should happen in society,” that’s silly. It’s absurd.

///

ARIA:
Reid, today, let’s first talk about Pope Leo, whom I am loving more and more. He is the first American pope— I don’t love everything, but I’m loving him more and more. And he recently said on X that he wanted the AI industry to cultivate moral discernment. And he called for AI systems that reflect justice, solidarity, and reverence for life. These things seem to me like pretty table stakes. Like, we all want moral discernment. We’ve all talked about how we want AI to be cultivated in a way that’s good for humanity. We’ve also talked about how the Catholic Church might be able to have an influence here. So there are a lot of Catholic schools for, you know, secondary education. I believe the Catholic Church runs about 19% of hospitals in the United States. So education, health care, those are important.

ARIA:
But my question for you is, do you think the Catholic Church and Pope Leo can actually affect the AI dialogue in the United States? Do they have the power? Do they have the persuasion? Like, how do they enter the chat when it comes to AI and how we’re building it?

REID:
I think there are a number of ways they can certainly influence the dialogue, because, you know, there’s a kind of question of control or resources and all the rest of the stuff. But the dialogue— a massive swath of both the US population and the world population looks to the Catholic Church as their kind of spiritual, moral, even humanist leadership in terms of, you know, “what are the questions? What are the ways that our set of values applies?” Now, the Catholic Church is also one of the most amazingly powerful religions on the planet. Because not only, of course, do they have a great congregation, but they also have a centralized hierarchy, a state, a network of universities— not just the network of hospitals that you’re referring to, but both universities and departments within universities.

REID:
So there’s the question of what questions should be asked, what the considerations should be, what convening power they have. I mean, most of the serious leaders of the frontier labs have actually made a trip to Rome and the Vatican, some of which I’ve helped with in terms of making that happen. So I think they can in fact play a role in this dialogue, and have over the last 10 years, starting with Pope Francis.

ARIA:
Obviously, some of the questions about AI are about job disruption and health and medicine and IP. But there are also deeper questions of spirituality and what it means to be human. Like, how do you think the Catholic Church influences that? And should our AI builders and engineers be sort of thinking more about the spiritual side?

REID:
It’s certainly fine if they think more about the spiritual side. And I think, you know, the diversity of thinking about the spiritual side is actually one of the things that’s very important when you get to the human characteristics. So it’s obviously not just Catholic, but also Hindu and Buddhist and the like— it’s important to have a range of this. Now, that being said, having participated in a number of dialogues led by a variety of the academics and cardinals and bishops and priests within the Vatican, it might surprise people to know that some of their most central questions are not things like, well, you know, what does AI mean when you’re reading the Bible? Obviously they’re spiritual people, they have those questions. But it’s like, well, what does this mean for the future of work?

REID:
What does this mean for the future of our community? What does it mean for human connection? If human beings are potentially, for example, being judged in a court context that AI is involved with, what does that mean for our rights as human beings? Those are the kinds of questions that are also central. And having helped a number of the different AI lab leaders come through and participate in a set of dialogues, part of what I think the AI leaders have taken away is not necessarily a pure spiritual question. Obviously everyone has their own kind of spiritual lens on these things. But it’s these questions about what it means for human beings, what it means for human community.

REID:
And, for example, the Catholic Church has been studying meaningfulness and the way that life fits into society and work fits into life. And so it was like, oh, those are great questions about not just, like, the competence of how work is done and jobs get done, but the way that work fits into my life and fits into a meaningful path within community.

ARIA:
So as we discussed, Pope Leo talked about cultivating moral discernment when it came to building AI. To me, this seemed pretty, uh, pretty vanilla, like of course we should. Yet there were a lot of folks on Twitter, specifically builders in Silicon Valley, who seemed to sort of take offense to this. They sort of didn’t like the way that it was going. What does that say about Silicon Valley, or the builders of AI, that this sort of innocuous statement set them off?

REID:
Well, as you know, it was only some of the builders. (laughs)

ARIA:
Exactly. (laughs) Not you.

REID:
Yes, well, not only not me, but also not a lot of other folks. And look, obviously there’s this kind of weird trigger around wokeism and other kinds of things in the Valley, of saying, hey, if you’re bringing up this kind of humanist consideration, you’re not just allowing builders to build. And by the way, you know, builders should build, and builders should build fast, and builders should take risks. All of that is great. But thinking about what the human goal is, what the society goal is, means having other people in the dialogue. Because if you think, “oh, just because I’m building technology, I’m the expert on what humanity should evolve to or what should happen in society,” that’s silly. It’s absurd.

REID:
It’s actually, I think, one of the things that is our responsibility as technologists: to actually be in those dialogues and to say, hey, you’re smart, you’re thoughtful, you have some concerns here, I want to hear them. I may not be able to match them all within this blitzscaling journey that is frequently building technology and is certainly building AI. But that dialogue about what it is that helps technology make us more human, that’s a dialogue that’s not only technologists; it is philosophers, it is priests, it is government leaders, it is people in the street. It’s really important that that dialogue is broad.

GEMINI AD:
This podcast is sponsored by Google. Hey folks, I’m Amar, product and design lead at Google DeepMind. We just launched a revamped vibe coding experience in AI Studio that lets you mix and match AI capabilities to turn your ideas into reality faster than ever. Just describe your app and Gemini will automatically wire up the right models and APIs for you. And if you need a spark, hit I’m Feeling Lucky and we’ll help you get started. Head to ai.studio/build to create your first app.

ARIA:
So one of the things we’ve talked about, in terms of genuine concerns we have over AI, is cyber attacks and how AI can make state-sanctioned cyber attacks, corporate espionage, and bad actors even stronger. And last week there was a cyber attack where Anthropic said that Claude performed 80 to 90% of the work, making the attacks a click-of-a-button operation. This was hackers targeting 30 organizations and finding their vulnerabilities. Anthropic shut it down, alerted the victims about what happened, and strengthened its guardrails. But when this happens, my question is, how serious is this? Should we be worried? Can you put this into perspective for people who are reading this headline and asking, what are you talking about, there was a cyber attack with Claude where 90% of it was just done by AI? That makes people really scared. How scared should they be?

REID:
I’ve never really wanted, you know, to advocate fear. But deep concern, absolutely. I mean, I think among the areas where there are fairly deep and legitimate concerns about where AI could misfire are cybersecurity and cyber hacking. Because AI, as per Superagency, gives us superpowers, and part of the superagency is millions of people enabled, and even hundreds of millions and billions enabled, at the same time. While obviously we get all these amazing amplifications with Claude Code and GitHub and everything else, it’s not only just like, hey, I’m building the next app, and the next app might be a medical app, the next app may be a game. It’s also, of course, cyber offense and cyber defense.

REID:
And part of what we need to be attuned to as AI builders and developers is that we want to be differentially creating the cyber capabilities amongst, say, for example, key societal infrastructure, corporations, and defense. And of course, it will be first and most selectively used by rogue nations and criminal groups, because they have a huge incentive to adopt and go first. And then the fact is we need to do everything we can to try to make sure that the infrastructure of society keeps up with it in various ways. And so it’s kind of a testimony to Anthropic’s very good moral character that they are proactively looking for it. And people say, well, why don’t they just stop it?

REID:
It’s like, well, there are hundreds of millions of people using it, and there isn’t an easy way of just looking at, oh, that one’s a cyber attacker and that one’s a cyber defender, and that one’s a legitimate source and that one’s an illegitimate source. So it actually takes a whole bunch of work. That’s the same reason why we have these cyber vulnerabilities. So the state of the cyber world is one of the things that people should be generally concerned about because, by the way, across the globe, we’re in a constant state of cyberwarfare, and AI’s superpowers advance that game.

ARIA:
So we were lucky enough just to be at a conference with Village Global and 70 amazing unicorn founders. And so if you were talking to them, what would you tell them about what they should do to protect themselves from a cyber attack? All the way down to… we have a lot of nonprofit leaders who are listening to this show. Like, what should the everyday person do? So all along the spectrum, from what you would do to protect yourself all the way up to, you know, sort of the fastest growing companies, how do you protect yourself from these cyber attacks?

REID:
Well, this is something that evolves constantly and is not a short answer, but the baseline is: pay attention and practice good hygiene— good hygiene is everything from updated OSes, secure passwords, you know, kind of compartmentalized access, and changing your passwords, you know, fairly often. Engage good cyber defense capabilities in organizations. Like, that’s one of the reasons that almost all of my cyber defense runs through the organizations that I’m affiliated with— Greylock, where we’re sitting here today, Microsoft, other kinds of things— and uses those kinds of practices. And you need to be paying attention. And most critically, never believe you’re 100% secure. I know people would love to believe that; that’s just foolishness.

ARIA:
So recently, the Disney CEO Bob Iger hinted that pretty soon they might be opening up some of their IP so that their customers and enthusiasts could use, you know, Pinocchio, Snow White, The Little Mermaid, to create amazing videos, pictures, scenes. If I were Bob, I would say, like, some of this is already happening in ways that we don’t like, and you’re using my IP, so why not have Disney own that? I can imagine my kids: we’re reading Harry Potter, and Harry Potter fan fiction has been around forever. If they could actually make something with Ron and Hermione from the movies, they would be super excited. So what do you think this ability to actually utilize Disney IP, or potentially some other companies’ IP, means for the future of storytelling?

REID:
Well, I think it’s just the beginning. I think it would be a smart move by Bob— I know Bob. I like him. He’s smart— and I think that would be a good thing to do. I think it’s a good thing for IP holders generally to think about doing. I think it’s one of the things that has created enduring universes and franchises. And what’s more, you know, one thing is to just say, hey, we enable it for you. Another thing is to enable a channel by which you can submit it, and we might even, you know, showcase some of the stuff that we think is best if you wanted to publish it. Almost like kind of Sora-like things, you could imagine thinking about which things get brought into the canon. You might even craft your universes in a way that allows for alternative universes.

REID:
Like, think about the alternative timelines for Star Trek and the awesome JJ reinvention, with, you know, Damon [Lindelof] and all the rest doing that really well. But you can imagine all of these things, and I would start building it into what you’re doing. And I think it’s one of the things that could then create a lot of— even like 10xing, 100xing, the emotional commitment and participation. And obviously things around economics need to be figured out. But if you’re generating 100x the attention, it shouldn’t be that hard to generate 2x, 5x, 10x the economics. And that would simply be great.

ARIA:
This could lead to more revenue streams. It’s not a shrinking pie now; it’s a growing pie, and we can be more generous. And it feels to me like Hollywood and the streamers and IP owners have had a pretty adversarial relationship thus far with the foundation models. There have been lawsuits, “you’re stealing my IP,” you know, the writers’ strike, sort of all those things. Do you think something like this could lead to more of a partnership, as opposed to an adversarial relationship, between these two camps?

REID:
Well, I think it’d be wise for both camps— or maybe there are many camps— to do that. And, you know, a simple thing would be to say, hey, we’re doing this, so if in your environment— like in Copilot or Gemini or ChatGPT or Claude— you’re getting prompted this way, it goes, “oh, by the way, we have this partnership where you can do this officially over here,” and then kind of make that work. Because then part of it is, “oh, it’s wrapping back into the things I have” through this, and that’s what we want from you. In which case we have a more workable partnership as a way of doing that. That would be a natural potential win-win.

ARIA:
Awesome. Reid, thank you so much.

REID:
Pleasure, as always.

ARIA:
Really appreciate it.

REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.

ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.

GEMINI AD:
This podcast is supported by Google. Hey, folks. Steven Johnson here, co-founder of NotebookLM. As an author, I’ve always been obsessed with how software could help organize ideas and make connections. So we built NotebookLM as an AI-first tool for anyone trying to make sense of complex information. Upload your documents and NotebookLM instantly becomes your personal expert, uncovering insights and helping you brainstorm. Try it at NotebookLM.Google.com.