This transcript is generated with the help of AI and is lightly edited for clarity.
ARIA:
Reid, we have been hearing for a long time that OpenAI was going to convert to a for-profit as a way to get the capital it needs to fulfill its mission. They have confirmed that their nonprofit parent will continue to hold majority control over the for-profit subsidiary, and that the for-profit subsidiary will be a PBC—a public benefit corporation. And Sam Altman said specifically in his letter that this keeps mission ahead of margin, even as the company scales enormously and takes in an enormous amount of capital. So my question for you is: is this a good outcome vis-à-vis the alternative of just becoming a traditional for-profit? And why do you think a PBC in particular was the right move? Or was it the right move?
REID:
It’s actually not just good—it’s essential for getting the humanity benefit that is the mission of OpenAI. In case people don’t know what a PBC is, it’s a fairly simple thing: there is a mission, a statement of what the company is about, that the board of directors should prioritize ahead of revenue, ahead of profit. Now, there’s a reason why it’s in a company, right? Which is, hey, you need the revenue to scale. You have a customer focus in terms of how you’re operating. There are shareholders, whose invested capital makes the mission possible. And then you have stock benefits for employees, who also make the mission possible. And so I tend to think that a lot of the people attacking this, whether for their own political reasons or out of ignorance, don’t understand that this is fundamentally just an elaboration of the mission that OpenAI has been on.
REID:
It could be people who are academics, or even anti-capitalists, who go, “Oh, companies: bad.” You’re like, “Well, actually, in fact, the only thing building these kinds of things at scale is companies.” And that’s the thing that allowed OpenAI to get where it is, because it had this as a subsidiary corporation. Then there’s competitors: people who say, “Oh, no, no, that shouldn’t be a company,” even as they’ve built companies themselves, because they’d want it to not raise the capital, to not be able to compensate talent. “I’d like to be able to hire all the OpenAI people away with the stock option packages that I’ll be giving them in my non-public-benefit corporation.” And we’ve seen obviously a ton of efforts on that too. And so it’s part of the reason why I think it is an absolute mandate for keeping the mission, keeping AGI for humanity, at the center of this organization, more than at any other organization of this scale.
ARIA:
And so do you think there’s something specific or special about LLMs, frontier models? Like you said, okay, Inflection, Anthropic, now OpenAI, are all PBCs. I think people are sort of generally familiar with B Corps, and you have companies like Patagonia and Danone. Should all companies be PBCs or B Corps? Or is there something special about these frontier models because they are going to affect so much of humanity, and we hope for the better?
REID:
I don’t know what percentage of companies ultimately should be PBCs. To some degree it’s a choice about what you’re shaping and building, which risks you’re going to take, and all the rest. Now, when you have companies that have a, call it, humanity-wide impact, even broader than society-wide, how are you maximizing benefit for humanity beyond the normal commercial contributions? New products and services, new contributions to the economy, new jobs, you know, da, da, da. All of that’s really great, but what are you doing beyond that? That also, of course, brings up the question of societies. It’s part of the thing that I have been advocating for maybe a decade now: that as technology companies get to a certain size, they have to think of society as a customer as well, in terms of how they operate. Now, you don’t necessarily need to be a PBC in order to navigate that; it kind of depends on how urgent that topic is. And so all of that aligns with why you might have a public benefit corp and a mission. It’s obviously part of the reason why Mustafa Suleyman and I did that with Inflection, which continues as a PBC. It’s part of the reason why Dario and the crew at Anthropic did that. And it’s part of the reason why OpenAI is setting that up. And actually, in fact, I would like to see every major new AI company that’s seeking to build a frontier model be similarly clear about how this enters into the mix with humanity: what are the specific things they’re steering towards, and what are the specific things they’re trying to steer away from.
ARIA:
Right. You have to have humanity as a stakeholder. And so, switching gears: one of the places that I think both you and I are most excited about AI is in the education sphere, the idea that you can have a personalized tutor. This could be totally transformative and positive, yet there might be sort of a messy middle time. So Google just announced that they will soon allow children under 13 who are supervised, through Family Link, to chat with Gemini for homework help, storytelling, et cetera. That’s at the young end. But then at the higher end, there was just an article published called, appropriately, “Everyone Is Cheating Their Way Through College,” about how AI-assisted cheating, essentially, is rampant. You have quotes from students saying, “All my college degree is worth is that I’m good at using AI.” You know, professors can’t wait to retire because they can’t figure out this AI thing. And so I sort of agree: this is a messy time, and it’s not so easy to figure out how to do this. It doesn’t mean we should throw out AI. But if you were talking—and I know you are talking to educators and administrators who are reacting in this moment—where should they begin? How should they think about this time?
REID:
Wishing for the 1950s past is a bad mistake. Universities have not changed, and it’s like, “Well, but I already have my curriculum, and this is the way I’ve been teaching it for the last X decades,” et cetera. Well, it’s exactly as you say: obviously the interim is messy, and likely there will be a bunch of things that are broken. Part of what will happen is that technology tends to get adopted by the people who have the most intense need and use for it. And obviously a student goes, “Huh, I could spend 30 hours writing an essay, or I could spend 90 minutes with my ChatGPT, Claude, Pi—whatever—prompting and generating something for that.” And obviously, to some degree, they’re underserving what they actually really need, which is the whole point of this stuff: education and learning.
REID:
There’s also a point about having accurate assessment of how you’re learning, and so forth, because that’s part of how we do things. And so all of that being disrupted and in turmoil right now is not great. Now, a university professor, or many universities, would say, “Well, we should slow it all down until we figure it out.” I’ve talked to a number of university professors who had exactly that point of view. And part of my response is to say, “Look, I get it. You’re in the same kind of disruptive circumstance that other people are in when they’re encountering this, whether they’re coders, or lawyers, or doctors, or analysts, or financial people, et cetera, et cetera. Which is, hey, you can’t just say, ‘I’m going to ignore the new tool.’”
REID:
And so there’s a whole bunch of ways that teachers and professors can be using it today, with no new AI development needed. They just have to stir themselves to do so. Here’s something a professor could do today, a teacher could do today. Alright, so you’re teaching a class on Jane Austen and her relevance to, call it, early literary criticism, or something like that. And you say, “Okay, well, I went to ChatGPT and I generated 10 essays, and here’s the 10. These are D-minuses. Do better.” And yes, use the tool for doing it. But if a student could essentially say, “Hey, as opposed to 90 minutes, I was actually spending 20 hours with it, refining it, understanding essays better, doing that kind of thing,” then they could actually say, “Well, actually, in fact, I’m probably learning more than I was learning before, when I had to type the whole thing.”
REID:
“I’m not learning some things, and I’m learning new things, but it’s probably ultimately a transformative shift.” And so that’s where you need to be going. Now, part of the reason why I’m absolutely confident of this educational approach in the long term is that I think it is practically guaranteed that the way assessment is going to change is, essentially, the AI booth, right? Whether it’s an essay, or an oral exam, or anything else, you’re going to go in and the AI examiner is going to be with you doing that. And actually, in fact, that will be harder to fake than in the pre-AI times. Because in the pre-AI times, most people—including myself—who had some moments of being great, of getting great grades, actually figured out how to hack it.
REID:
Like, what’s the simplest way to study? When you’re in that sit-in-the-classroom-and-write-the-essay exam, what’s the way you could produce something that isn’t really that grand, but works within the 30 minutes you’re supposed to write it in, et cetera? There’s a whole bunch of techniques for that. And so you could, actually in fact, hack that and know less about the overall subject. Part of the reason why oral exams are hard—generally reserved for PhD students, sometimes master’s students, et cetera—is because, actually in fact, to be prepared for an oral exam, you’ve got to be across the whole subject. Now, imagine if every class essentially had an oral exam on it. Ooh, you’re going to have to learn a whole lot more in order to do this. And I think that’s ultimately how this stuff will be. Now, as per your question, again, look, we’re in a disruptive moment. We have a bunch of professors who, just like classic, established professionals, go, “I don’t want to be disrupted. I want to keep my curriculum the way it is. I want to keep doing the thing that I’m doing.” And it’s like, “Well, no, you can’t,” right? And so you need to be learning this. And that’s part of the reason why, with LSE and others, I’m asking, “Okay, what does this mean for thinking about new curricula?” What does this mean for new education, new learning, new teaching, new assessment, et cetera? So, to put a bow on it, here’s something that I know you also agree with, because we’ve talked about this a bunch.
REID:
The most central thing is preparing students to be capable, healthy, happy participants in the new world. And obviously your ability to engage with, deploy, leverage, and utilize AI—AI agents, et cetera—is going to be absolutely essential. And part of the advice that I give young people is to say, “Look, one of your advantages is you’re much more deeply and much more naturally AI-native. You can bring that to the workplace.” Because just like professors who say, “Hey, I’d like to just keep the same kind of take-home essay that I’ve been doing before, exactly the same,” have to change, workplaces have to change too. And the question is, how do they find their way there? Well, the new blood gives them really, really great opportunities.
ARIA:
So that fits perfectly into my next question. But I have to say, I thought it was pretty hilarious: my good friend was applying for a job at Anthropic, and in the Anthropic application—which she screenshotted and sent to me—it said, “Hey, we love AI…” Okay, good, check, Anthropic. “…But please do not use AI for any aspect of this application.” Which I just thought was a little absurd coming from one of the frontier labs. So I feel like that will change over time as well.
REID:
Absolutely. By the way, look, the question should be—and maybe this also goes back to education, before we move on—something like, “How did you use AI to do this? Where were you uniquely differentiated? What was your theory of it? What did you see that other people don’t see?” That’s obviously what should be on the application.
ARIA:
Yeah, it was one of those “Huh?” moments. But I feel like a lot of people have been asking this for a while, and especially recently: we had Derek Thompson come out with an article about how the wage premium of a college degree is decreasing. Is that because of AI? Is that because of a million other factors? We’ve also had all of the memos. We talked about Shopify’s memo, about “This is an AI-first company.” We saw the same thing coming from Duolingo. And so I think I know the answer to this question from you. But everyone’s talking about, “You know what they’re always going to need? They’re always going to need electricians. They’re always going to need certain healthcare things. They’re always going to need that nurse.” There are certain things that lag in automation, and so people think they’re AI-proof. If you were entering today’s labor market, is your advice to double down on sectors that lag in that automation, like the skilled trades? Or do you think people should lean into AI-intensive fields, where the tools are table stakes?
REID:
So generally speaking, I think everyone should be learning and using AI. If you haven’t found something where AI today could be seriously helpful to you (and, in the words of Ethan Mollick, the worst AI you’re ever going to use is the AI you’re using today), then you haven’t tried hard enough, you haven’t been creative enough, you haven’t been studious enough, you haven’t been asking enough questions, et cetera. So everyone should be doing that. Then it probably forks into how comfortable you are with using this AI tool set, which is going to be evolving very quickly, and therefore changing what your interface point with it could be. For example, you could easily see getting to, “Hey, I’m deploying a set of AIs on a problem,” and, at the most extreme, “I’m actually in fact just trying to keep up and help and make judgment calls, because the AIs get so good at doing that kind of thing.” You go, “Okay, well, that would be an AI-intensive task. Am I comfortable with that being a possible outcome of going there?” Or you’re like, “No, no, no. I need to know that this is my unique value, this is the thing I’m doing. And so I’ll be a nurse. I’ll do stuff that AI, embedding into the world or other things, is much slower to do.” And I think that gets down to individual preference. Now, both paths should be engaged with AI seriously.
ARIA:
Yeah, absolutely. I mean, it was interesting when I saw the news last week: of course you don’t want the returns to a college degree to lessen, and you don’t want new college grads to be unemployed. But one of the things they cited in the research was that more job descriptions are not requiring college degrees. And that’s obviously a lot of the work that you and I have done with Byron at Opportunity@Work: making it so that a degree isn’t actually a barrier to the job market. So it’s also, no, we want to make sure that people without college degrees also have access. So, awesome. Reid, thank you so much.
REID:
Pleasure.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.