This transcript is generated with the help of AI and is lightly edited for clarity.
//
REED HASTINGS:
In the 1980s, I was in the AI wave that was expert systems, and there were public companies, and fifth-generation computing. And the Japanese were going to be this big threat and, you know, that AI didn’t work, but this one is. Anything that lives in the emotional realm will be impacted not as much by AI, because we humans react to these things emotionally. And so again, I think we’ll not watch robots playing basketball. STEM practically took over Stanford University, okay? And now maybe what we’ll see is a rotation, you know, back to the humanities, into understanding the combination of history and literature. If I had a three-year-old today, I would be like, doubling down on the emotional skills. In 20 years, robots will do maybe 1% of the plumbing.
ARIA:
At most.
//
REID:
Reed Hastings is the co-founder and former CEO of Netflix, the company that helped define the streaming era and in doing so, rewired how 2 billion people spend their evenings. Under his leadership, Netflix launched streaming, pioneered the original content model with House of Cards, went global in 190 countries, and produced some of the most watched programming in TV history. He ran it for 25 years.
ARIA:
But Reid’s ambitions have always extended well beyond entertainment. He served on the boards of Microsoft and Meta, he’s currently on the boards of Bloomberg and Anthropic. He’s given hundreds of millions of dollars to education reform. And he holds a master’s degree in artificial intelligence from Stanford from 1988, before most of us had AI on our radar.
REID:
Today we’re asking, what does someone with that vantage point, across entertainment, technology, AI safety, and the long arc of institutional change, actually think is happening right now? Where is the leverage? What are we getting wrong? And what would it look like for this moment to go genuinely well for humanity?
ARIA:
This is a conversation about technology as a civilizational lever, about whether the people building AI and the people governing it are asking the right questions, about what entertainment can teach us about human nature and what human nature can teach us about AI, and about what it means to have spent 35 years watching technology reshape the world and to still believe the best is ahead.
REID:
And now to our conversation with Reed Hastings.
REID:
Reed, awesome as usual, to talk with you. I thought we’d start with something a little light for both of us. So I’ll answer this too. But we’ll start with you. In various weird ways, we get mistaken for each other. What was one of the funny ways that comes to mind for you about being mistaken for me? And then I will also answer that question.
REED HASTINGS:
Ah, Reid Hoffman, Reed Hastings, both in tech. So I get introduced sometimes in conferences, and someone’s done the quick ChatGPT, and they, you know, do it on Reed or Reid H or something. And they call me “the founder of LinkedIn,” and I’m very excited. ’Cause that’s a hell of a business.
REID:
Yes, well, and similarly, ’cause Netflix is a hell of a business. I think the last one that I got, and it’s out of, you know, many, was: Oh my God, that Wednesday show is awesome. You guys have done such a great job, which I’m now passing along to you. It is. It’s an awesome show, you know, like, that kind of thing. And, you know, we’ve been doing this for a while, so when we get the emails, we just forward them to each other.
ARIA:
Exactly. All right, well, people confuse you guys, but let’s see if you guys confuse yourselves with each other. So I’m gonna say a quote, and you guys are gonna tell me who said it. Shout it out. “Most entrepreneurial ideas will sound crazy, stupid, and uneconomic, and then they’ll turn out to be right.”
REID:
I think that’s me.
REED HASTINGS:
I would totally disagree.
ARIA:
It was Reid!
REED HASTINGS:
Those turn out to be wrong. That’s the whole point of the contrarian thesis. And then occasionally they’re right.
REID:
Yes, but they sound stupid.
ARIA:
Wait! No, Reed Hastings, you said it.
REED HASTINGS:
No, I did not. I’m sure that’s a misquote.
ARIA:
Oh, my God. We’re going to fact check.
REED HASTINGS:
Okay, let’s hit it from the top. But the quote is, you got to have a contrarian thesis that everyone else thinks you’re stupid, basically. And then it turns out to be right, which is rare. The whole point is that most of the time, the entrepreneurs fail because the idea doesn’t work.
ARIA:
Oh, my God. They’re so similar that they’re confusing each other’s quotes. All right, “the future isn’t something that experts and regulators can meticulously design. It’s something that society explores and discovers collectively.”
REED HASTINGS:
Boring.
REID:
So that means me. And it’s accurate.
ARIA:
That was Reid Hoffman. Thank you. Thank you. All right, we have a few more.
REID:
Boring is frequently how Reed talks to me. But, you know, it’s the details.
REED HASTINGS:
The entertainment part of Netflix, the key to that was always — not that I am — but always trying to be thrilling. And boring is the only sin in entertainment.
ARIA:
There you go. All right. “Stone Age, Bronze Age, Iron Age. We define entire epochs of humanity by the technology that they use.” Reed Hastings. All right, we got that. All right, two more. “We are Homo techne. When we cross the river, we are deepening our understanding of technology and ourselves.” That was Reid Hoffman. And last one: “Companies rarely die from moving too fast, and they frequently die from moving too slowly.” That was Reed Hastings. But I feel like you’ve said similar stuff, such similar stuff.
REID:
It’s brick and bracket, you know.
ARIA:
Thank you for humoring me.
REED HASTINGS:
You should do who’s the better investor?
REID:
Who’s more entertaining?
ARIA:
All right, there you go.
REED HASTINGS:
Well, it turns out that operating personality is really different from allocating capital personality. So operating personality, you’re like a dog with a bone. You never give up on a problem. You just work it. You work it. And investing personality is staying very broad. Not falling in love with an idea, you know, cutting your losses and moving on. And like, when I tried investing, I just fell in love with all the entrepreneurs and kept giving them money and none of it worked out. And when Reid tried operating, you know, we quickly found the right guy in Jeff to operate for him, who was an amazing operator.
ARIA:
Fair enough, Reid. I’ll turn it over to you.
REID:
So you were CEO of Netflix for 25 years, and then when you handed over the role, January 2023, what happened the next day? What did you then go do?
REED HASTINGS:
My schedule evaporated. So, you know, first of all, it was a secret. And second, you know, I had a whole bunch of work meetings and a pretty full schedule that I then wasn’t going to do. A bunch of internal and external things. You know, first couple days, it was just like, be at home in shock and, you know, a lot of calls and just people saying congratulations, kind of nice, sweet things. And then, you know, I said, well, I’ve always wanted to ski a lot, and I’ve only been able to ski five or ten days a year because of work. So I’m going to take February and March and, you know, ski my brains out. So mostly that’s what I did.
REED HASTINGS:
And that was a super fun release and fulfillment, but it wasn’t really planned and it gave space, you know, really to the company and you kind of — In many cases, it’s good to have the ex-CEO not be itching to, like, call in to make decisions, “How about lunch, Ted?”, “Craig, let’s talk about strategy.”
ARIA:
“That episode of Love is Blind was terrible.”
REED HASTINGS:
So I was able to just enjoy myself and, yeah, focus on, well, snowboarding.
REID:
But was there anything that you found surprising that you missed?
REED HASTINGS:
No, the huge surprise is how much I was okay with moving on. I thought I would, like, miss everything, you know, because it was my whole entire life. I loved every second of it. And what I just realized was that I had done everything I wanted. I had, you know, done this huge global rollout. I had lived on an airplane around the world and every week in, you know, Seoul or Mumbai or Berlin. And it was incredibly fun and exciting, but I didn’t need any more of it. And so I was super surprised that I didn’t like, yearn to get back in the saddle. And of course there’s aspects you miss and the people I missed, but I’ve stayed in touch with them on sort of a personal basis.
REED HASTINGS:
So it was surprisingly easy because I certainly have heard of people who have a very hard time with it.
ARIA:
We’ve certainly seen some CEOs who stepped away and then had to go right back because they missed it so much.
REID:
Even in your former industry.
ARIA:
But beyond Netflix, you’ve been on the boards of Meta, Bloomberg, so many influential companies. And these companies, some tech, some less so, span different business models and different industries. What did you learn from being on the board of such diverse companies, obviously in addition to Netflix, about the AI and media ecosystem?
REED HASTINGS:
Well, the lucky break of my professional life in that way was that in 2005, Microsoft was looking to get someone from tech on the board. But they have conflict rules, so they couldn’t have, you know, the head of Cisco on the board. And so they started going down the list, you know, of tech people. And after a year they had gotten down to the head of a domestic DVD rental service, Reed Hastings, for an interview. And so I was like, you want to interview me? And then I went up and met with Steve Ballmer and Bill Gates and we really hit it off. And a little while after that they put me on the board of directors of the Microsoft Corporation, which, you know, was a million times larger than Netflix at that time and super global and super long term in their thinking.
REED HASTINGS:
So the stuff I got out of it was amazing because they were willing to work on projects for things 10 years ahead and like I had never been able to do that because the company didn’t have enough profit stream to do that kind of thing. And so it was such incredible learning for me. Then later, as social became really important, I thought, oh, you know, I’m kind of old, I should really get much closer to this as it’ll transform Netflix, like it transformed photos and things. So I got on the board of Facebook and Mark Zuckerberg was very open within the company and I tried to do what I could to help and I certainly learned a ton about social. But then it turned out that social really had very little to do with movies and TV shows.
REED HASTINGS:
So it wasn’t like this huge transformation of the rest of the universe in the way that, say, AI is. So those were two long-term board assignments I did and learned a lot from, and I certainly encourage all the CEOs I know to do one or two other companies, hopefully big companies they can learn from. And then I got on the Bloomberg board really as a favor to my friend Mike Bloomberg, who’s an incredible philanthropist. He’s been number one in giving in the US the last two years, and he’s just a great human being and a great philanthropist. So that’s been a favor and fun. And then I’m on the board right now of Anthropic, as well as Netflix. And on Netflix, it’s pretty passive because the CEOs trust me and I trust them and so it’s just kind of easy.
REED HASTINGS:
But Anthropic is really exciting and intense and you know, I’ve really come to appreciate the team, the mission, and it’s neat to have a seat kind of on the front edge of what they’re doing.
ARIA:
Absolutely.
REID:
Well, one of the things some people know, but very early days in LinkedIn, I wrote to you asking you to join the board, because you’re one of the top board picks for intelligent people in Silicon Valley. But I want to go back to Microsoft. So, you were there before me. I joined in 2017; I actually called you and, as usual, got advice on the stuff. What do you think — speaking of long trajectories, Satya has done a great job at Microsoft. What do you think the things were that were the right setup for Satya doing that? How much of that’s, you know, kind of, Satya’s own brand of things and how much of that’s kind of the patterns of work that were already present at Microsoft?
REED HASTINGS:
Well, let’s think about why is Microsoft 10, 15 times more valuable than when he started? You know, the profit stream has continued to grow, but then it grew a lot before that under Steve Ballmer. And so Office has stayed a very strong franchise and they have withstood, you know, some erosion from Google, but there was no wholesale change, okay? So they kind of won the Office wars and they’ve made Office good enough that, you know, fewer people are switching, so they stabilized that. Windows didn’t particularly get stabilized, you know? Apple, in particular, has continued to grow in the face of Windows. Search — Google search — has continued to grow, so Bing and all those. So like Windows and Bing, you know, haven’t really delivered. Azure has delivered in a big way. It’s delivered in a big way because of the AI workload. Okay. That’s the big validating workload. So then that comes back to, Satya made one incredibly ballsy, insightful decision, which is to invest in OpenAI back in 2018.
REID:
I thought you were going to say buy LinkedIn, that’s okay, you know, keep going.
REED HASTINGS:
So, you know, he bet big on AI and that catapulted them both reputationally and in terms of the Azure business. So it was the workload that then has grown Azure into a monster and a big success, because Amazon had their own first-party workload plus all the early adopters.
REID:
Yes.
REED HASTINGS:
Okay. So it was really hard for Azure for a while.
REID:
Early adopters like Netflix.
REED HASTINGS:
That’s right. So, where Satya gets incredible — He also, internally, you know, he just got people talking and working together more than Steve had been able to do. But that, I mean, again, it was a nice place to work, but they just didn’t take any product bets that really delivered.
REID:
Yes.
REED HASTINGS:
And Satya took a huge product that did deliver.
ARIA:
So talking about AI, everyone is talking about AI. And you have also joined the public conversation. Where do you think people are asking the wrong questions, maybe having the wrong answers, or like, what is part of the discourse of AI that you think is totally missing, that people should be talking about more?
REED HASTINGS:
Well, I’m not sure whether AGI coming in 18 months or six years really makes much difference. So I think we should just sort of say, you know, it’s coming fast, and how do we want society to be? What do we imagine in 10 or 20 years? You know, sort of get over the intensity of how fast it’s happening, to: it’s here, and what will we do, and how does it work? And, you know, how would the legal profession work? Like, as an example, in education, people say, oh, it doesn’t change at all. And, you know, look at how medicine has changed from 50 years ago. Educators are, you know, recalcitrant because they don’t want to change.
REED HASTINGS:
But if you look at the practice of law, like, you know, pleading before the Supreme Court, it’s identical to 100 years ago. And we’re so proud of that. Right. So I imagine things like that, you know, a briefing before the Supreme Court will actually work identically in 20 years when AI is fully ubiquitous, right? Now, the briefs will get AI edited and improved and the research will be a little faster. But, you know, it’s kind of like, in the noise, in the way that today LexisNexis is.
ARIA:
Yeah.
REED HASTINGS:
Or Word for legal briefs. So, some areas of the economy, I say huge change, some areas not so much. So I think, again, there’s so much focus on, you know, is the current version of Claude Code, you know, this or that, you know, as opposed to, okay, it’s going to happen. And the robot side, the humanoid robot’s going to happen. So then what will society be in 10 to 20 years?
ARIA:
Well, you’re saying education, medicine, law. Those are either sort of highly regulated industries or industries with big unions. Do you think that’s why they might be affected less and others will be affected more? Or, to actually throw your question back at you, do you think there’s a profession that will be the most affected by AI, whether it’s in three years or 10?
REED HASTINGS:
Well, I was going to say least affected, I think will be entertainment. Like you’re not going to watch a basketball game of robots.
ARIA:
Right. March Madness is not going to be taken over by AI, we’re safe.
REED HASTINGS:
We like the human conflict and that draws us in. So, most affected, like a percentage reduction in employment? One theory would be software engineers, because everyone’s working really hard on kind of automated software development. But there’s a substantial chance that while many companies will have reduced software engineering employment, there’ll be many other opportunities for more software. So, that kind of elastic response is, well, it’s what we’ve seen in radiology, which is an interesting example because radiology is image processing, which computers and AI are much better at than humans, and have been now for several years. Okay. And so you can now go into an MRI center and, if you’re self-pay, get an MRI for 300 bucks; it’s dramatically down in pricing. And so people are getting many more scans and they’re being read by AI with a radiologist approving them.
REED HASTINGS:
So I had thought four years ago, it’s going to be devastation in radiology. It’s going to be the first, you know, high-end profession.
ARIA:
Everyone talked about that, right? Obvious.
REED HASTINGS:
Well, we have a shortage now. We have 35,000 radiologists, we need about 40,000. Wages are high. And so it’s a good example of, it’s easy to get focused on, you know, the disaster case with, you know, Armageddon. We love movies about Armageddon. We have them in our religions. You know, it’s like, we love disaster. So we’re drawn to these scenarios of AI wiping out things. Okay. And again, hasn’t happened in radiology. Maybe it will eventually, but not in the last five years. So I’m going to — my hunch is that many professions will be resilient and that there’s certainly much more demand for health care. So, now, is there much more demand for legal services? Maybe. I mean, poor people are definitely under-lawyered, they get taken advantage of a lot. So maybe there will be a whole elastic response there and we’ll see.
REED HASTINGS:
But if I had to guess, it might be lawyers as the kind of most affected because it’s very verbal and it’s somewhat formulaic. Not as much as writing software, but somewhat formulaic.
REID:
Well, and also if we thought about what would be the increase in productivity if you actually had people shifting from legal work, which is generally transactional cost on things that are happening, to actually producing things, that would in fact be a general positive for society. And I say this as a child of two lawyers. So I actually think it would be a good thing if the work shifted more towards: how do we build more products and services? What are the ways that we create entertainment, create other work, you know, other kinds of things, versus the transactional cost, which is legitimately there on a number of things, but which has been increasing in legal.
REED HASTINGS:
Well, I think where we both are is no one quite knows exactly, right? Which fields will be. But what I think we both see is an era of great promise that the next 20 years will be super exciting and I think it will usher in this era of abundance through both scenarios, which is again, you’ve got the white-collar symbolic processing stuff that’s getting pioneered now, like lawyering, writing software. And then we’ve got the android humanoid robots coming at some pace. So that’ll be pretty exciting.
REID:
So one of the things that, you know, obviously a lot of discussion of AI is on, the question is: are there safety issues? And you know, the safety issues range, you know, we’ve been in these conversations, anything from various smart people being, you know, as you were referring to earlier, you know, apocalypse doomsayers, you know, whether it’s Terminator scenarios, whether it’s jobs, et cetera. So what do you think the right way of trying to navigate the development of all of these — what we both agree are — amazing future capabilities, medicine, amplification of software engineering, a whole bunch of other things? And navigating the safety? How much of that should be technical work, constitutional AI, et cetera? How much of that should be incentives?
REID:
How much of that should be regulatory? What are the guideposts that you’d throw out there — or principles — by which people should think about how to navigate both keeping up our speed in building AI for all the good things and also navigating safety?
REED HASTINGS:
Well, in safety, we can break it down in a couple categories. So there’s the super disaster Skynet-y case where like, AI takes over.
REID:
Soon to be a Netflix film near you.
REED HASTINGS:
Yeah. And we, you know, absolutely have to prevent it. It’s not something that’s going to happen in the short term, but the danger is that we could slide into it as these things take care of more and more of our life. So I think because the downside or recovery from that — because we don’t have time travel — you know, is extremely hard. That’s when we have to think about, even if it’s low probability, sort of like, massive nuclear war, we have to do some stuff about it to prevent it because it would be so disastrous. Okay, so that’s the kind of mass nuclear war equivalent case, that they take over. Then the other case is, you know, North Korean soldiers use it to design a virus and you know, a lot of people die.
REED HASTINGS:
So, think of it as, in the hands of the wrong people, it’s very powerful. One scenario is sort of combined with synthetic biology. Another scenario is cracking into other computer systems. So it turns out AI is very good at breaking in, finding bugs in open source things through code analysis or finding bugs in closed things by probing, in the way that national security agencies do today. But instead now, it’s in the hands of, you know, terrorists or other people. So again, those are two specific examples of bad people using AI as a powerful tool to do very bad things. And so there we’ve got to make sure that the whole industry does the tech prevention, which is hard to do. Now right now, sort of everyone is doing that because they don’t want to see these scenarios.
REED HASTINGS:
But you could imagine that it might need to be regulated over time to insist that all of the sufficiently powerful AI systems protect against these kinds of scenarios. And probably some of them will happen and then we’ll set up a protection regimen afterwards to prevent that kind of thing. But none of them is going to, like, destroy all humanity at once like massive nuclear war might.
ARIA:
All right, we’re going to go from nuclear war to AI writing. (laughs)
REID:
Which, some people think of nuclear war, but yes.
ARIA:
So the New York Times recently ran a blind test and 86,000 people took it. And I don’t know if you saw the results, but they were showing folks snippets of AI writing versus human writing. And 54% of people preferred the AI writing. And some people argued like, that was short form, that wasn’t novel. Like, that’s what AI is particularly good at. Other people were so devastated that more people preferred the AI writing. Like, what do you think that — like, what does this mean? Does this mean that most writing is going to become AI in the future? Or like, what are the implications of a test like this where regular people preferred the AI writing?
REED HASTINGS:
You know, I think short form writing on a specific topic is very different from writing a story and developing character and a character arc, a conflict and a resolution. Just look at how Shakespeare still has a huge impact, you know, and it was 400 years ago and no one’s been quite as good as him at a certain broad range of things. So I think that’s an extremely rare talent at the very high end. But, like, average writing is not, I don’t think, the way to analyze it. And I’m not surprised that the AI has been better than the average writer today.
ARIA:
Right. Fair enough. And so thinking of like — thinking about all these creative revolutions, it’s like you were there for DVD to streaming, and now obviously lots of people are concerned about AI in the entertainment industry. So forget the tools that people use, whether we’ll, you know, use AI for editing or scripting or whatever, but the actual stories that are being told. Like, do you think the AI era ushers in new stories? Or different people who are telling these stories? Like, what will the effects be?
REED HASTINGS:
Well, people have been talking about democratization of film for 50 years and so like the early wave of democratization was in the 90s when you could shoot it on digital instead of celluloid film.
REID:
Sure.
REED HASTINGS:
And so you could do more takes. And the cost of filmmaking was going to come down, and that was going to democratize film. And you know, it really didn’t change. We got, you know, bigger budgets, higher-end special effects. We did reshoots more. So it turned out that the constraint really wasn’t the film cost. There were lots of student films produced back then, and there are lots produced today, and they just don’t break through. And take our recent success with KPop Demon Hunters. I mean, you know, in some ways, wow!
ARIA:
I have a 10 year old, an 8 year old and a 5 year old. I certainly know about it.
REED HASTINGS:
It’s like our 28th animated film.
ARIA:
Right.
REED HASTINGS:
Okay? You know, it’s like, even for us, it’s really hard. And God knows we want to repeat KPop but you know, other than we’ll do the sequel, it’s like…
ARIA:
Lightning strikes.
REED HASTINGS:
Lightning strikes. So I mean we are really working hard, but think of it as it’s a very subtle set of high-end things. So it’ll help a little, particularly in TV when you got to do a lot of episodes. And you know, it’ll help a little bit on the scripting. Where it’ll help particularly is then script to screen. Okay. So you’ve got a big crowd shot, you know, and like, you know, a big stadium. Okay. That might be, you know, a very expensive VFX shot now, a special effects shot. But, you know, now it’ll be AI and that’ll be lower cost.
REED HASTINGS:
So the mechanical parts or industrial parts will be lower cost. But the backbone is the story. And then back to the basketball example: we don’t want to watch robots. I think we’re going to want — people will pay for real actors, and people they recognize in that way. But we’ll see. I mean, you know, there’s definitely a threat of short form, you know: do young people only go on TikTok, okay, and never, you know, watch a podcast or a Netflix? We’ll see.
ARIA:
I mean we’ll also see Val Kilmer, he has a new film out and he’s been dead for several years and so, AI Val Kilmer. We’ll see if that’s a success or not.
REED HASTINGS:
Recovering past IP and extending it is a niche. Yep.
REID:
And by the way, people will watch robots; one of the great Netflix series is Love, Death + Robots. Right? You know, etc. So the interesting question is: what do you watch robots doing?
REED HASTINGS:
There you go.
REID:
Now, where do you think are some of the interesting, more, side effects? Like obviously, you know, if you came to me today and said, hey, AI will help us improve identification of potential hits early. That seems unlikely unless there’s some analysis of major data streams that, you know, the humans don’t do yet. But what are some of the more, like, side cases of how AI comes into this? You know, it could be, you know, also like, you know, an AI to discuss the content with. I mean, like, what are some of the things that kind of — not just reinventing the process, but thinking about AI at the corner cases, or AI at a corner case that may become big. Is there any of that you’ve had musings on yet?
REED HASTINGS:
We haven’t found, say, the equivalent of sports betting. So sports betting is a whole kind of value, emotional engagement layer on top of sports. It’s not really AI enabled and has kind of nothing to do with AI, but we’re always, with entertainment, looking for things that — what’s a layer of conversation? What’s a layer of engagement that enriches the experience? So, you know, maybe AI will improve that, but it’s not obvious what that will be. And to some part, the beauty of a show or a movie is that it is, you know, kind of self-contained. And, you know, we’ve had films for more than 100 years, and so it’s like the novel, where the novel as an art form has really stayed constant.
REED HASTINGS:
And so there’s something about, you know, the size and the capability that people are used to it. Sure there’s short stories and sure there’s epics, but like almost all of the business is novels and what people read. And I think there’s a lot of that with film and TV series that they’re not artifacts, they’re really reflections of human attention span and stories.
ARIA:
I feel like people have been complaining forever about, you know, you have one superhero movie hit and then you have 25 more superheroes, or, like, Barbie hits and then all of a sudden you have toy stories. Like, we’re not getting sort of the diversity of hits, because people are looking at an algorithm, or they’re like: well, you know, you got to put this star with this story and then… And so you don’t maybe get a KPop Demon Hunters, which, like you said, was the 28th and no one could’ve predicted it. Obviously we’re in sort of the time of Heated Rivalry, when no one could have predicted that.
ARIA:
Do you think AI will flatten things? Like, will AI lead to people just betting on past success because the algorithm told them to, and so they’ll take less chances, or is that already happening? Like, how does AI affect it from a data perspective?
REED HASTINGS:
You know, very little. We’re really predicting for kind of that human conflict, and, you know, you want something that’s both familiar but also fresh. So those are our tensions. In terms of, like, over-sequelization: if you look at the amount of new content that Netflix is producing, that’s really gone away. In other words, sure, we’ll have some sequels. Think of season two of Wednesday; we’re, you know, onto season three, etc. So that’s always been a part of the business. It’s very enjoyable for people because it’s a story they already know, and you see more of it, but you don’t want it to be the only thing. And that’s certainly not true today. Today we have an incredible set of new films coming out all the time.
ARIA:
Yeah. And do you, like, how well do you think Netflix can predict a hit? Like, sometimes things come by surprise, but you’re like, no, no, we got it. We know. Or you’re like, we sometimes don’t.
REED HASTINGS:
Yeah, it’s a mix of both. So season twos, we have much more.
ARIA:
Sure. You have more data there. All right, fair enough.
REID:
One of the many smart things that Ted Sarandos has said is that there’s a much better business in increasing the upside, the quality, the volume of the reception of content, by 10%, than in cutting cost by 50%. And it was a way of kind of lensing why AI is potentially really amazing. What are the kinds of things, in terms of how AI can be additive, that you think the entertainment industry should be thinking about broadly, given, obviously, there’s a lot of concern and uncertainty around it?
REED HASTINGS:
You know, in entertainment, there’ll be a lot of work on special effects, and things that didn’t fit in a budget before can now be done, you know, using AI. So that’s a good example. But more or less, I think anything that lives in the emotional realm will be impacted not as much by AI, because we humans react to these things emotionally. And so, again, I think we’ll not watch robots playing basketball. It’s the easiest example for people to get: things that are emotional — pleasure, emotional stimulation — will still be done by humans. So, you know, you like to give and get real flowers, not fake flowers. I don’t think that’s changing just because of AI. AI, meanwhile, is very good at that kind of thinking and logic — like coding. Wow, it really is breaking through.
REED HASTINGS:
So I think there’s just tremendous excitement in medicine, in biology, in things that are very factual, logical, hard, and complex. As for things that are emotional, you know, it’s not that AI won’t eventually be able to add value there, but that’s certainly not the big thrust of the AI world.
ARIA:
All right, this is something I’m very excited to talk about. You and I share a deep passion for education, and I feel like education is at the center of AI. Is it going to usher in a new era where everyone has amazing tutors and education is supercharged? Or are public school students, especially, perhaps just going to get AI slop constantly? As a philanthropist who’s invested a lot here and studied education, especially in the United States, how do you think AI is going to affect it?
REED HASTINGS:
Well, there are two big questions. One: what are we educating kids for? What kind of society are we educating them for? What skills should they have? The goal state, you know, in the past was pass a lot of AP exams, and that’s probably not the right goal state for the future world. So there’s a big discussion to have, which is: what are the skills that you’re trying to target for your kids? And then second is implementation, which is: can we use AI tutors, for whatever we decide is important, to teach more of that, to make teachers’ lives better, to have kids learn more? So they’re two different discussions that get kind of mushed together.
REED HASTINGS:
I would say on the second, how to teach more, there are some relatively clear paths: you know, different kinds of AI tutors. And just like we’re going to have the AI doctor and the AI lawyer, there will be the AI teacher, and that will come along. It’ll be easier in private schools, it’ll be easier in charter schools, and it’ll be harder in school district schools because of all the regulations they have. It’ll be easier outside of the US, where they’re more willing to do some of these things. So, you know, that will happen. The first question is less clear: if you have a three-year-old or a five-year-old, what do you want them to learn? Because the hard skills that we used to value, STEM, okay, that’s probably like coding.
REED HASTINGS:
Like, we spent 25 years saying, learn to code. Oops, you know, don’t learn to code. So it’s probably true that all the hard facts of, you know, biology, chemistry, physics will be extremely specialized, not necessarily general knowledge, and probably not that valuable, say, if you had an AP result in one of them. The things that are more emotional, how you read people, how you work with them, probably are quite valuable, because they’re much harder for the computers to do, and humans like doing them with other humans more often. So I think we’ll see a kind of shift of the value. You know, STEM practically took over Stanford University. Okay.
REED HASTINGS:
And now maybe what we’ll see is a rotation, you know, back to the humanities, into understanding the combination of history and literature, but also kind of the physiology of the brain and, you know, how we interact with each other. If I had a three-year-old today, I would be, like, doubling down on the emotional skills. There are some great middle schools. One’s a charter school, Valor; another’s a private school, Flourish. And, you know, in seventh grade they really focus on these emotional circles, where you sit around and talk about your feelings, because they feel this kind of skill, knowing yourself, interacting with other human beings, is going to be the thing that sustains those kids, you know, through their working life.
ARIA:
I feel like when people talk about AI in education, Alpha School always comes up. We had MacKenzie on the pod a few months ago, and the criticism from some people is that they’re not even using the cutting-edge AI. Other people say, look at their test scores, they’re doing, you know, twice as well as anyone else. Some people complain about equity, you know, for $40,000 a year, the thing…
REED HASTINGS:
60,000, it’s fantastic. If you can afford it, go.
ARIA:
So why do you, like — The thing I love about it is actually less about the AI, but more just about the, like, giving agency to kids, focusing on other things. Like, why do you think Alpha School is so great?
REED HASTINGS:
Well, it starts with MacKenzie and Joe’s philosophy that the kids have to love school and they love vacation. So the number one goal is they don’t want to go on vacation, they want to stay in school because they love it so much. Then they start from, okay, if we want them to love it and we need them to know the basics they have to do well on tests, what are we going to do? And so they do two hours on software a day where, you know, it’s relatively practice and drilling and these kinds of things. But they have a coach really motivating and helping. And then they get the reward for that which is they get to spend the rest of the day doing stuff they want.
REED HASTINGS:
Either some of the schools are for athletes and people focused on sports, or they spend the rest of the afternoon doing things like watching TED talks and talking about them, or a range of things. But I would say they’re kind of like the Tesla Roadster. That was the first Tesla — an over-$100,000 sports car, a specialist thing. But it set electric cars as the aspiration, and they were cool, because the first electric car was this, you know, extremely fast-accelerating thing. So that’s Alpha School: our Tesla Roadster for AI schools.
ARIA:
And you think it can be done? We can go down the cost curve and do it at a lower price?
REED HASTINGS:
100%. Model 3 will be coming.
ARIA:
I love it.
REED HASTINGS:
So, yeah, there’ll be more and more innovation at that end.
ARIA:
Cool. So, Reed, we talked about education and how there’s enormous opportunity for change, and about Alpha School, which operates within the United States. How is education going to change outside the United States with AI?
REED HASTINGS:
Well, I think what we’ll see is a tremendous flourishing of the AI teacher internationally. We have pretty good teachers in the US, and pretty high satisfaction, and in rich countries generally. But in lower-income countries, the education budget might be $300 per kid per year and the class size might be 50 to 70 kids, and there’s just not a lot of learning going on. I think the combination of a Starlink at every school, a tablet for every child, and then really good AI software will close the gap. So think of the way mobile phones are ubiquitous in the developing world. I think AI learning will be ubiquitous, and that will help. I don’t know that they’ll leapfrog the advanced countries, but they’ll certainly close the gap that we’ve seen historically. And in places they may leapfrog.
ARIA:
I feel like lately there have been breathless articles everywhere about how One Laptop per Child was the hope, and then it was really a failure, and we put so much money into it and it didn’t work. And then other people are saying, well, that’s really nice that you want that, but, you know, the cost of compute is too much. What would you say to those people who think we’ve heard tech solutions in education before and it’s not going to happen this time?
REED HASTINGS:
Yeah, there’s always the crushed-dream phenomenon. Like, you know, in the 1980s I was in the AI wave that was expert systems, and they were public companies, and fifth-generation computing, and the Japanese were going to be this big threat, and, you know, that AI didn’t work. But this one is. Just because it failed before, when it was too early, doesn’t mean it won’t work. So I think one tablet per child around the world is very scalable, builds on the mobile phone platforms, and has some of the same ideas that One Laptop per Child had. That was just 20 years too early.
ARIA:
And I assume you think the cost of compute is just going to go through the floor and so that it won’t be a barrier to entry for folks in the developing world.
REED HASTINGS:
That’s right. The costs are really driven off the phone market. You know, we have $800 phones in the US, but in most of the world there are $50 phones that are quite usable. So think of it as a $50 phone, a Starlink per school, and some solar panels. It’s very scalable.
ARIA:
Fantastic.
REID:
So, one of the things you said earlier I completely agree with, and one of the things I would partially reframe, and it’s kind of a question leading into education. I do think our human skills, how we collaborate, engaging with humanity, understanding these things, will amplify up. And the skill of writing code obviously goes away, because it’s like, oh, the AI does the coding. But two things, I think, will still be pretty deep in what we’re doing. One is a good systematic understanding of what is true in the world, like biology, physics, chemistry, et cetera, because that kind of iteration of understanding the nature of the world we’re in is pretty familiar scientific thinking. The patterns of thinking, I think, are important. The patterns of strategy are important.
REID:
And I think actually even coding will go that way. Because part of the thing is, people say, well, should I stop studying computer science? No, you should study computer science, not coding. Right? More of the kind of thinking about the system and the pattern of it, in terms of how we operate. And I think that will become more general. I’m curious what you think of my modest reframing of your earlier statement.
REED HASTINGS:
I think I’d just study math, you know, if you want to get to like, patterns.
REID:
Well, LLMs are terrible at counting. So, yes.
REED HASTINGS:
So, you know, whether it’s algebra — I mean, there’s so many interesting things in math. If you want to help people, like, kind of learn abstraction or search for truth.
REED HASTINGS:
You know, look, there’s going to be some role for science, and a few people will be drawn to it. But think of the last 20 years we’ve had as a society: STEM, STEM, STEM. Learn to code. STEM, STEM, STEM. So just as everyone sees that coding is overdone, my guess is we’ll see that STEM is overdone. And the kinds of things that you do with a biology background will be done so much better and faster by AI that it’ll be hard to compete for jobs in that space.
REID:
So shifting to another thread: one of the things I think AI will do is break us a little bit out of the industrial model, which tends to be, you know, go to high school, go to college, learn, now go work. Learning then, working now. And I think ongoing learning will be one of the key things. Do you think that will also be part of it? Not just, I have this one learning period, but, I’m constantly interweaving learning with what I’m doing?
REED HASTINGS:
Absolutely. So, you know, ongoing skill acquisition, learning things as you go, has probably, you know, been important for a long time, because the stuff you learned in high school changes so much and the knowledge in the field changes so much, and that’s only going to continue. So certainly learning will need to be constant for people who want to make a living intellectually. But I don’t think that’s more true than it was in the past, because it’s been true for a while. It’s just going to be even more true, you know, in this new world.
ARIA:
So we learned earlier that you were the one who said Stone Age, Bronze Age, Iron Age: we define entire epochs of humanity by the technology they use. And now we’re entering the AI age. Do you think, even more than in previous ages, that if you are left out of the AI age, either personally or as a country, you might actually fall behind, and that there’s going to be a bigger divergence between those who embrace the AI age and those who don’t?
REED HASTINGS:
Well, I don’t know that it’s just embracing it. Like, if you’re a small country in Europe that has a glorious past, I don’t think whether you embrace it or not is going to change the outcome. I mean, you know, you’ve got to try to figure out a strategy, because it’s going to be dominated, you know, by China and the US. Okay, how do middle powers like Canada, you know, thrive? And I think we owe — you know, we’re part of that thinking and solution, to be able to build a world that we all want to live in.
ARIA:
And so what would you do for these middle powers? Like, they don’t want to live in a world where it’s just China and the U.S.
REED HASTINGS:
I don’t know that they have much of a choice. So I think it’s — I mean, what they’re doing is linking up together, you know, and I think that probably makes sense. And certainly, you know, they’ll embrace using, you know, American AI because they’ll need to. And, you know, maybe they will be able to get some local things going or they’ll do it by treaty where, you know, they’re not going to get shut off.
ARIA:
Right.
REED HASTINGS:
But think of it as: AI is going to accentuate income gaps, both within the US and between the US and other countries. So, you know, just as it creates tensions within one society, it’s going to create tensions between societies.
REID:
I do think that having an active AI strategy for countries, for industries, et cetera is actually, in fact, really important, and I think you would agree. It isn’t that so-called digital sovereignty isn’t also important, but the most important thing is to actually be modernizing your industries, your countries, your companies, your workforce. Because it’s a little bit like the Industrial Revolution. One of the things I put in Superagency is that England, with a quarter of the population of France and a tenth the population of China, had a multi-century empire because it embraced the Industrial Revolution most robustly early.
REID:
Would you also think that, from a country’s or industry’s or company’s perspective, they should adopt that same approach of, we have to have an active strategy in being there early?
REED HASTINGS:
Well, your example is interesting. You said the UK, which led industrialization, so that’s kind of the equivalent of America today. Right. You didn’t really phrase it as, you know, what should Argentina do about industrialization, which is a very hard challenge, because the UK did everything in its power to keep Argentina from industrializing, to maintain the strength of its power. So if you are a middle power like Argentina, should you have an industrialization policy? Sure, probably that makes sense. Will it work? Not clear at all, because the power imbalance with the UK was so high. There were laws preventing, you know, looms and various things from being in India, so the cotton from India had to come and get processed in the UK, and they enforced those laws with their military.
REED HASTINGS:
So, you know, it sounds all great that Belgium should have an AI policy, and Estonia is a small country that’s done a lot on, say, digital ID and good government tech. I guess that’s better than not doing it. But I don’t know that it’s actually much of an answer for a middle-power country. I mean, I think the challenges for a low-power country are quite substantial, and I don’t have a good solution beyond them linking up and working well together. And I hope that the US, as the Western leader, is embracing that in a very important way, which is not the current America-first policy. I think it’s much more in our long-term interest to have long and strong allies.
ARIA:
So earlier you said that you think AI is going to usher in this great era of abundance. And whether it’s 10 years or 20 years when AGI gets here, what are the things that need to happen for that to be true? Are there conditions that need to be met now for that to happen? And did you mean just in the US in terms of the abundance or could it be global abundance too?
REED HASTINGS:
It depends kind of on how the IP, or the rewards of it, are shared, but I think it’ll be pretty global. Just like you saw with industrialization: it really did lift all boats, you know, at a different level than in the host country, but still. So let’s take nuclear fusion. If we can actually make it work — which will be assisted by AI — then we’ve got a tremendous energy source. If through it we’re able to bring down the cost of solar, you know, or invent new types of batteries and storage, it could really revolutionize energy usage and production, which then brings us to, you know, the age of abundance. In early nuclear, we were a little bit naive, and we thought electricity would be too cheap to meter. Okay.
REED HASTINGS:
So, you know, we’re probably not going to get to that. But, you know, that’s the kinds of thinking for abundance that we want to do. So if you think about housing — and, you know, it’s very expensive to build — what if it was robots building it, you know, 24 hours a day? So then you got, okay, you’ve got carpenters not working, so that’s a negative. But you’ve got, you know, low-cost, very custom, beautiful housing. So that would be, you know, another example. So, you know, I think in each industry we’re going to see much lower costs for doing things and then incredible amounts of inventive energy. Because maybe we don’t need to build a house that way. Maybe you just print it, you know, in some giant 3D printer.
REED HASTINGS:
So there’ll be both tech revolution in that way, which again would be like nuclear fusion.
REID:
Yeah, I’m going to shift to one other subject, because I think this is something that people frequently misunderstand. Why does so much of this innovation happen in Silicon Valley? What are the things that make Silicon Valley a unique creator of these kinds of technologies? Why is half of the Nasdaq within 30 miles of where we’re currently sitting?
REED HASTINGS:
Well, I think if you look at the financial city of London, or you look at New York, or you look at Detroit for cars, this is not something specific to tech. You get very positive reinforcement when you’ve got a whole lot of talented people who can switch jobs without having to move, and then they move the ideas. In the US, we don’t have very high protection against the ideas walking out the door. That can be hard for a given company, because they feel like, oh, my secrets spread — but it makes it very productive for the economy and the ecosystem that those ideas do spread. So I think of it as liquidity, and employees changing jobs is probably the key ingredient, which is helped by things like LinkedIn. And I wish we had non-employer-based healthcare.
REED HASTINGS:
If we had healthcare some way other than through employers, you’d have more liquidity. The less we do non-competes, the better. It’s one of the things I think the Biden FTC had right: making it easier to compete with existing companies by eliminating non-competes. So that liquidity of movement, I think, is the most important thing.
REID:
So one of the questions — maybe a last question before we get to rapid fire — is on jobs. Obviously the general discourse is about loss of jobs and other sorts of issues, but there’s way insufficient discussion of wages. Because even though I think there will actually be a lot of creation of jobs, a lot of transformation of jobs, and so forth, what will happen, a little bit like the earlier comment on radiology, is: okay, which of these jobs are super valuable? Compensation goes up. Which of these jobs become less valuable? Compensation goes down. Have you had any thoughts — because you’re a systems thinker — on the wage effects, and on what countries, companies, industries, but specifically also individuals, should be doing to navigate what will be happening with wages?
REED HASTINGS:
Well, let’s see. You describe some jobs as valuable, but, like, you know, teachers are very valuable and not paid very much.
REID:
Yes.
REED HASTINGS:
So I don’t think pay follows value very much. It follows shortages, supply and demand. Okay, so the question is, for wages, what jobs will be in shortage? And I think it’s all those jobs that are emotional, that computers are not very good at, because we’ll have a lot of need in those. And the jobs that are more administrative, you know, those are going to have lower wages, because you’re competing with computers that can do that job. So again, where AI does a job well, the pay for a human doing it will be less. And where it’s super hard for the AI to do, there will be, you know, continued high wages.
ARIA:
I feel like a lot of people are saying the solution to that is the trades. Everyone’s like, oh, become a plumber, an HVAC tech, an electrician, because those are in shortage right now, obviously. The big question there is how soon robots will catch up. It matters whether that’s going to be five years or 20 years or 40 years, in terms of people getting wage premiums from the trades. What would you tell people in that respect?
REED HASTINGS:
Okay, so let’s look at electric cars — sorry, I should say self-driving cars. 2007 was the DARPA challenge, and self-driving actually did work then, in a limited setting. And now, 20 years later, it pretty much works. But the percentage of miles driven self-driving by the machines — by high-end Teslas and by Waymos — has got to be less than 1% of global miles, easy. That’s after 20 years. And with robots, we’re not even to the stage of that DARPA challenge. That is, we don’t have a demonstration in the home yet, okay, that can do all these things. So think of it as: in 20 years, robots will do maybe 1% of the plumbing, at most.
ARIA:
Right?
REED HASTINGS:
Okay. So it just takes a super long time to build, to deploy, and then to get them to be lower cost and higher safety than the alternatives. But I think over 50 years it will happen. So that’s still going to be a great field for the next 20 years, if we’re talking plumbing specifically.
ARIA:
Thanks.
REID:
So now to rapid fire — unless there’s another question, let’s do it. All right. Is there a movie, song, or book that fills you with optimism for the future?
REED HASTINGS:
There’s a cool movie I watched recently, Queen of Chess, about a [Hungarian family in the 1980s whose father] raises his three daughters to get out of poverty via chess. All three become grandmasters and get to a middle-class or better living through that dedication. So it’s a situation where there was no reason they should have had hope, being in 1980s Hungary, and yet they did, and they worked towards a future that has been great. I love a documentary like that.
ARIA:
All right, I gotta check that out. What is a question that you wish people would ask you more often?
REED HASTINGS:
I think people focus a lot on business success, which I’ve been, you know, super fortunate on, and less on joy. And so I think the question would be, what gives you joy? How do you increase joy in your life?
ARIA:
All right, how do you increase joy in your life? Let’s hear it.
REED HASTINGS:
I would say that I’m trying to do more on the mindfulness and on the appreciation — noticing. Much of my work life was relatively frantic — kind of lots of email, short-burst stuff — and I think I could have integrated more mindfulness into that busy time. But certainly now that I’m retired, I can live it.
REID:
So where do you see progress or momentum outside of your industry — let’s call that tech and entertainment — that inspires you?
REED HASTINGS:
Definitely medical work. I mean, the amount of improvement in cancer therapies, health, understanding insulin resistance — it goes on and on. What we’re slowly learning about the body, and then the brain, which is, you know, even slower — but we’re making some progress on that.
ARIA:
All right, our final question. Can you leave us with a final thought on what is possible to achieve in the next 15 years if everything breaks humanity’s way, and what’s the first step to get there?
REED HASTINGS:
Well, if everything breaks humanity’s way, it’s because AI has unleashed human flourishing, and we find the political mechanisms to share that across — within our country, across different income groups, and then between countries — the world as a whole is enhanced. And a first step for that would be to realize how interconnected we are between people in our country and then between countries — and trying to get away from win-lose to win-win.
ARIA:
Totally. Amen.
REID:
Thanks so much, Reed. Always a pleasure.
REED HASTINGS:
Indeed.
REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles.

