This transcript is generated with the help of AI and is lightly edited for clarity.

REID:

I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:

Typically, we ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take.

REID:

This is Possible.

ARIA:

Last week, we had a live episode with Pat Yongpradit, Chief Academic Officer at Code.org. And in the spirit of that conversation, I’d love to hear some of your takes, Reid, on education and AI literacy. So here it goes. So Reid, you’ve had some pretty unique educational experiences, from attending a Vermont boarding school that favored art and farming chores over AP classes, to also studying philosophy in a Master’s program at Oxford. You also invest in ed-tech. You care deeply about education. So given what you now know about education and technology, what would you prioritize if you were designing K-12 students’ classroom experiences today?

REID:

I think at a high level, if you’re really just kind of prioritizing, the thing that you want to do is give a combination of the energy and motive for, you know, the broadest range of students to engage. And then the ability for them to continue to engage and go to any level of depth. And the ability to engage and to do that will be deeply influenced by technology. So, you know, like part of the thing that, you know, I’ve been saying in a variety of different places is, you know, we have line of sight to an AI tutor on every smartphone, on every subject, for every age. And so if a person just gets curious and motivated and starts down a path, that AI tutor can go down that path with them, can help and reflect and so forth. So it’s kind of like once you’ve got that kind of motivation and kickoff. Now, the motivation won’t just be, you know, having an AI agent—a Pi, you know, a ChatGPT, a Bing Chat, whatever.

REID:

It won’t just be that. It will be also the, “oh, my curiosity has been excited.” And some of that will be people around you, what kinds of things to do—and I think for that—because people are motivated in many different ways—I think, you know, you have to do many different things in order to do that. I think some of that is the social energy in a classroom, and with a teacher, and with kind of looking up. I think some of that’s with contests or prizes. Some of that’s with community events and/or puzzles. Some of that’s things that you might be able to do with your parents or family members. And I think you want to engage those kinds of incentives to get there. And that would be a kind of a very broad brush—I mean, it’s that two-pronged approach using kind of what we have as modern techniques that we can get to essentially all kids for this. Or at least like all kids that are, you know, network-connected, and let’s work to get all kids network connected.

ARIA:

One of the things I think you said that is so interesting is curiosity. Let’s get to kids’ curiosity. And to me, like, that’s one of the places where AI can be the best. If you’re learning fractions, like, a teacher could now create 30 lesson plans. One is on fractions and baking. One is on fractions and Dungeons & Dragons, one… et cetera, et cetera. So, like, the customization of kids’ curiosity. Because I feel like there’s going to be so few things that people need to know. We just need to think, and we need to know how to get the information—you know, figure out how to get to it. So I think that curiosity piece is so key.

REID:

And know how to continually learn. And so part of it is like, yes, baking, but by the way, you could do fractions with Taylor Swift songs.

ARIA:

Totally. 

REID:

Right? There’s various ways you could do that too. And that would—like, the kids go, “well, I’m interested in that.” Right? And so that kind of thing now then becomes very easily, and pun intended, possible.

ARIA:

Love it. So we have been talking about, you know, getting all kids to learn how to code. We’ve been talking about engineering. And you and others have said that English, and other human languages, is gonna be the dominant programming language of the future. So what do you think being a software engineer will even look like in 10 years? And what does that new definition mean? So, for formal students and for lifelong learners, what will it mean to be a software engineer if you’re just speaking English?

REID:

Well, code is still patterns of thinking and patterns of—like, how do you decompose a problem? Decompose a solution to a problem? What are those ways that work? And by the way, some code can be operationally better than not—and you know, which problems are actually amenable and how you might do that. And so I think all of those ways of thinking will still be there. The thing that we’ll shift off of is currently in software engineering—much like many other professions—you have to kind of first master the very basic tools. Like okay, how do you write something in this very precise syntax? How do you learn a different language? How does it manage various computer resources? How do you make it more efficient in how it’s operating? You know, all that kinda stuff. And there will still be some elements of that, because you should understand the concept. But you won’t have to be, “I’m really good because I never err in my syntax when I’m writing my Objective-C code.”

REID:

That will become much, much less. And instead it will be the, “I’m thinking about how to solve this problem and I have the amplifier of a massive kind of coding assistant.” And I, you know, for one, believe that there will be very high success on the, kind of like, the ability to just say, “give me three more coding agents and spin them up to work on this project with me.” And that they will be very, very good at coding. And I think that every human being with access to an AI agent can then start adding coding superpowers to what they’re doing, whether it’s how they’re browsing the internet, how they’re answering email, how they’re working on an Excel spreadsheet, you know, or a Coda table.

REID:

All of these things are things that will then get really amplified. And that the coding language is going to be your natural language—that means that you can do it that way. And you can then, by the way, code a little bit more like you play a video game—like iteratively, like you go, “oh, not quite this, I mean this, change it to this, you know, iterate, you know, improve that.” And that cycle of going, which you also learn in doing. And that kind of process by which you engage in the thinking part of it will still really be there, even though these coding assistants, these Copilots, these Pis, will be helping you, you know, kind of do it a lot better, a lot faster, cross-checking what you’re doing. And so I think that if anything, more of the world will be doing software engineering, not less.

ARIA:

I mean, I love that. Talk about making it accessible and making it applicable to anyone with a network connection. And so on the flipside, you know, there is an argument that STEM students today aren’t necessarily taught to zoom out and think philosophically, or that their knowledge base is, you know, too limited to technical details. What do you think of that criticism? How do you think that will change in the age of AI?

REID:

Look, I think there’s always going to be challenges just structurally. Because if you focus on one thing, you don’t focus on others. So you say, “well, we’re all using AI. Well, AI is going to give us blind spots. And there’s going to be problems with blind spots, because we’re using AI.” There’s always a critical perspective because nothing—you know, the perfect is the enemy of the good. The perfect is the enemy of anything you want to make happen. And so, you know, I’m definitely not one of the critics who endorse this, “STEM students aren’t taught to zoom out and think philosophically. There’s a limited amount of things they can do.” Now, obviously as a student of philosophy myself, I’m a believer in philosophical thinking. And it’s one of the things that I think that AI can definitely help with, because it allows you to be cross-disciplinary, synthetic—to ask questions in different ways, to explore different channels.

REID:

Like if I go, “well, I’m really curious about what Wittgenstein would think of Jane Austen’s novels,” you know, it’s like, well, I could start talking to an AI agent about that and start going down that path. When it’s, you know, search- and RAG-enhanced in various ways, I can say, “well, is there any pretty good content on this that I could go find, or anyone who’s addressed this question before? And if it hasn’t been addressed, why not? And maybe there’s something I could do about it?” And so then I can move from there—again, by the way, if you wanted to bring STEM into this, you could say, “well, you know, how would Wittgenstein’s theory of following a rule apply to writing Objective-C programs? Let’s talk about that.” Right? And so, you know, it gives the channels and ease to do that—and can even give some of the prompts in different circumstances.

REID:

Like you could see teachers or classrooms or projects going—and the extra-credit one is articulating the theory of programming that these three philosophers would advocate. You know, what would Sappho advocate? What would Nietzsche advocate? And what would Derrida advocate? Right? What theory of programming would each of them offer? And you’re like, well, okay, all of a sudden you have to think about that in kind of interesting ways. And this is actually, I think, part of the general opening: when you get a lot of this mechanical work handled, what we as human beings are adding is thinking of interesting questions, thinking of framing questions, thinking of areas to explore. And that becomes the enhanced part of our cognitive toolset. It’s part of why we’re in this cognitive industrial revolution. Part of what enables us to be, you know, amplified, and to have—you know, to pun on the title of the book—our own impromptu.

ARIA:

Awesome. Reid, thank you so much.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Parth Patil, and Little Monster Media Company.