GREG:
It’s worth really considering, like, why do we build technology in the first place? And fundamentally, it’s to improve humanity, to improve human lives, to be able to achieve more. And I think that we’re really seeing that unfolding right now.

REID:
Hi, I’m Reid Hoffman.

ARIA:
And I’m Aria Finger.

REID:
We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:
We’re speaking with visionaries in every field, from climate science to criminal justice, and from entertainment to education.

REID:
These conversations also feature another kind of guest, GPT-4, OpenAI’s latest and most powerful language model to date. Each episode will have a companion story, which we’ve generated with GPT-4 to spark discussion. You can find these stories down in the show notes.

ARIA:
In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:
This is Possible.

So, Aria, this is obviously one of the things we’ve been looking forward to for quite some time. We both know that this year, 2023, is going to be, in many ways, a year of amplification intelligence through AI, and that one of the driving drumbeats is what’s going on with OpenAI, with ChatGPT, with GPT-4. And I couldn’t be more excited to talk to Sam Altman and Greg Brockman because they are two of the folks who, in the very earliest days, started thinking about, like, “okay, what are going to be the implications of this? How do we elevate humanity? How do we shape what’s possible? How do we avoid possible highly negative impacts?”

This conversation is just going to be super interesting.

ARIA:
I mean, it feels like the most natural episode to have. When we launched Possible with Trevor Noah, we said this was going to be a conversation with him where we talked about GPT-4. And in each of our episodes, whether it was with Dr. Kim Budil or Saul Griffith, we always had GPT-4 weighing in and creating these stories and positing what the future would look like. And so it feels so natural to bring it home with Sam and Greg, who are part of the team at OpenAI that, of course, created ChatGPT and GPT-4.

And I’m particularly excited to talk to them because all of our episodes have been about how to improve humanity, whether it’s how to create a sustainable energy future or how to create the future of cities. And when Sam and Greg talk about AI, they’re not doing it for technology’s sake. They say, “how can AI improve humanity and improve the world?” And so it’s lovely to hear their point of view on how this incredible technology can be used in such a human way, as you mentioned.

REID:
And, obviously, with all of the worldwide excitement about ChatGPT and everything else, everyone may know this, but for our listeners who might not, Sam Altman is the CEO of OpenAI and chairman of Helion and was formerly the president of Y Combinator, an American technology startup accelerator. Greg Brockman is the President and Co-founder of OpenAI. He co-led the development of OpenAI’s Dota bot, and actually co-runs the company day-to-day. He’s also a board member of Stellar, a nonprofit foundation making a blockchain for moving money across borders for a fraction of a penny.

ARIA:
Here’s our conversation with Sam Altman and Greg Brockman.

REID:
Sam and Greg, I’ve been looking forward to this for a long time. Not just because we’ve done so much work together, but also because, in terms of people who have most informed my own perspectives on AI in the future, you are both vying for number one on that list. So this is awesome. Welcome to our podcast, Possible.

GREG:
Thank you.

SAM:
Appreciate it. Good to be here.

REID:
So why don’t we start with a baseline. Say a little bit about what your OpenAI mission is, like, what is the true north that informs all of the decisions that you guys drive to? And Sam, why don’t we start with you, but I want to hear from Greg on this too.

SAM:
We are trying to develop and deploy beneficial, safe AGI for all of humanity. And that is an unprecedented project. It is difficult to always know that we’re doing the right thing. We’ll make many missteps along the way, but that’s what we’re guided towards. I very deeply believe this will be the most positively transformative technology that humanity has yet developed. And it will, you know, to the degree that this is the technology that helps us invent all future technologies, I think it’ll be a super bright future.

GREG:
You know, we started OpenAI, Reid, as you remember, almost eight years ago because I think we all had the sense that AI was really going to happen. And not immediately. It’s been a long process, and I think there’s a long process in front of us still, but I think that this technology is something that we have the opportunity to influence how it plays out. I think that this can be the most positive force and that’s something that we want to contribute to. And so our vision is that AGI, which I think is also kind of going to be a spectrum, is beneficial to all of humanity, and that we operationalize that in a lot of ways. And I think we’ve learned so much from the very early days where we kind of had this grand mission but didn’t know how to connect it to actual tactical execution. And over the years, we figured out how to build a structure, how to kind of bake our values into that structure, how to actually build systems that are useful.

It’s been really wonderful to see how over the past just couple months that, kind of, a lot of our technology has gone mainstream and people are starting to get the kinds of benefits we’ve always hoped for. But there’s still a lot left to be done. Like, we’re still sort of in the very early days of this technology. But I think it’s also one of the things that we really believe is, like, everyone needs to have a stake in it, everyone needs to have a say. And figuring out how to really get to global governance and have sort of representative input into what these systems do, we think that’s just as important as any of the technical pieces.

REID:
And having been there in the early days with you guys, I know this is something you held as fiercely and boldly eight years ago as today.

Walk us through a little bit about how – you know, there’s 8 billion people in the world, and you might say there is one plus billion in the economically advantaged middle class – how is the AGI mission that OpenAI is working toward going to benefit the other 7 billion?

GREG:
Yeah, so I think that one important force in AI is going to be about access. Like, we really think that giving everyone access to this technology to be able to better their lives, use it for their purpose, to be able to kind of get their preferences and their feedback into that system to represent them, I think that’s going to be a baseline. And, you know, there’s a lot of questions about how exactly this technology will play out. Like, I think we’ve seen other technologies that act as this sort of centralizing force of, you know, that it does kind of raise all the tide, but somehow the outliers really end up more centralized. And so that’s part of how we’ve structured our company, that we’re a capped profit, so that if there is this great centralization of capital into OpenAI, that actually it’s not owned by the shareholders, it ends up being owned by a nonprofit for distribution to the world. And so I think that there are exotic outcomes here where you can think about things like UBI and sort of distributions that way.

But I think, fundamentally, that the core of giving everyone this enabling technology that lowers the barriers to creating, to expressing your creativity, to accomplishing new tasks and pushing forward humanity on whatever problems you care about and are passionate about – like, that I think is the real key.

SAM:
A thing that I really, deeply believe is that maybe all real sustainable economic growth comes from technological progress – and maybe from the sort of social, sociopolitical institutions and culture that help us get that technological progress. And so, the way that I think what we’re doing helps the other 7 billion people is not that different – I mean, hopefully greater in magnitude than previous technologies – but it’s that technology is kind of how we lift everybody up.

REID:
Talk a little bit about education, medicine. You know, like, what are some of the advances there that you could see across the entire world? Because I think we see line of sight to some of those right now.

GREG:
Yeah, I think education, for me, is one that I’m extremely interested in. Actually, if we weren’t going to successfully start an AI company, one of my backups was to do a programming education company, because I think the way that you teach people today – like, everyone has a story about that one teacher who really understood them, who took the time to get to know them, learn what motivated them, and just really inspired them to do more. And imagine if you could give that kind of teacher to every student 24/7 whenever they want for free. It’s still a little bit science fiction, but it’s much less science fiction than it used to be. And you can look at things like Khan Academy, who are really starting to take GPT-4 and deploy it in the classroom, and really figure out how to steer this technology so that it’s a helpful tutor that if a kid asks for, “oh, just do my homework for me,” it’ll say, “no, no, no, I don’t do that,” but tries to probe to figure out what they’re excited about and how to really motivate them. And so, I think that this kind of technology of just reaching global scale and figuring out how do you get the best out of people, like, that is the realm that we’re starting to enter now.

ARIA:
One question I’d ask about education, because I think that’s, like, I love that what you all are dreaming of is bringing education to everyone and bringing everyone to economic opportunity. I think a lot of people might say, “well, we’ve had that already, we have Khan Academy, we have Coursera, we have all these online classes that were supposed to be this promise, and we didn’t see the realization.” Like, why do you both – and Greg, I’m happy to go to you since you brought it up – why do you think this actually will be the step change for the millions and billions of people who don’t have the education that they need right now?

GREG:
Yeah, I think that AI, in general, is this field of broken promises, right? If you look over the course of 70 years, everyone feels the potential, right? Because how is it that we get education at all, right? It’s through people who are smart, who, you know, are able to actually help break down problems for us and teach us things. And imagine if you could have a machine that could do that, that could help with that. And I think that we’ve been sort of building a lot of the body – we’ve built these virtual assistants, the Alexa and the Siri. We’ve built a lot of these applications like the Khan Academies who have the reach that they’re talking to the students.

But the question of the brain, like, how do you actually have a machine that’s able to get this amplification, that’s able to be the technology, that can be this force multiplier on what people can do? And that’s what we’ve been missing. And so the real question, I guess the test of what we’ve been building, is will it cross that chasm? Will it achieve it? And I think that we clearly have gone beyond what many people thought possible. There’s no question that the capabilities of something like GPT-4 are just – like, I think everyone who sees it, you really see that, “I did not think computers could do this, but now they can.” And I think that there’s a second step that’s also important, which is not just having the raw knowledge, but can we really steer the machine to do the tasks that we want to reflect our intent and to sort of operate according to the values that society chooses to put into that machine? And I think all of that technology, that’s what we’re working on, what we’re building. But I think that also the fact that there’s this whole deployment apparatus that’s almost just waiting for the right brain to appear, that’s going to be equally important. So, I think we have a shot. It’s not guaranteed, but I think that the test will be the next couple years.

REID:
What have been some of the surprises that have come with scale? Some of them you already just mentioned, Greg, which was the fact that it got much more capable than maybe many people would have expected, although it was part of the kind of the R&D thesis of a number of smart people, the whole team you guys have assembled at OpenAI, to say scale really matters here and can generate a bunch of capabilities. But what have been some of the surprises? And maybe Greg, we’ll start with you, and then Sam, we’ll kick it over to you.

GREG:
Well, I’ll tell you the very first surprise. This was all the way back in 2017 when we created this paper called the “Unsupervised Sentiment Neuron.” And it didn’t get much attention at the time, but for us it was the real wake-up call that you really need to pursue this paradigm. You know, we trained a model to predict just the next character in Amazon reviews. And you expect it’s going to learn where the commas go, where the nouns are, where the verbs are. But the amazing thing was it learned a state-of-the-art sentiment analysis classifier. It could tell you if a review was positive or negative. And so you see that semantics emerged from a syntactic process. Like, where did the meanings come from? We never told the machine the meanings, it just somehow figured it out by pursuing this task. And so, I think that this story of, “oh, the machine will never learn x, it can never learn to do mathematics, it can never learn algorithms.” Each one of these things we’ve really seen fall.

And so, it’s not perfect. We still have a long way to go, it’s still early days, but I think that the fact that we’re able to start solving – like, I think programming is maybe, for me, the – it’s so interesting to see how many people have gotten this giant boost. People who have never programmed before are able to start getting into the field, but people who are excellent programmers can do more, can accomplish more. And reaching this level of capability where it actually accelerates my coding workflow – that didn’t happen with the initial Copilot, but with GPT-4, it absolutely happens.
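
A minimal sketch of the next-character objective Greg describes, just to make the training signal concrete. Everything here is an illustrative assumption: a toy corpus and a small LSTM standing in for the multiplicative LSTM trained on millions of Amazon reviews in the actual “Unsupervised Sentiment Neuron” work. The point is that the model is only ever graded on guessing the next character; sentiment is never a label.

```python
import torch
import torch.nn as nn

text = "this product was great. i loved it. "  # stand-in for review text
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # logits over the next character at each position

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = data[:-1].unsqueeze(0)  # characters 1..T-1
y = data[1:].unsqueeze(0)   # the same sequence shifted left by one
for _ in range(200):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The surprise in the 2017 result: trained at sufficient scale, a single
# hidden unit ends up tracking review sentiment on its own.
```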

REID:
Could you say a little bit, before we hop over to Sam on this, around what the acceleration of your coding capabilities look like? Because I think most people kind of go, “well, is it like, you know, Greg’s now in the backseat?” Like what, what is the shape of that co-piloting?

GREG:
It’s actually very interesting because you don’t normally think about your tasks in terms of all the mechanical pieces, right? You just sort of think, like, “okay, I’m going to write this program.” And then you don’t really think about the fact that you’re decomposing into all these pieces, you’re actually typing a bunch of things. A bunch of your time is spent remembering exactly which libraries have the functions you’re looking for, how to import them, exactly what the arguments are. And you start to realize that this high level task of “I want to build a system,” or an app, or whatever it is you want to do, is suddenly turning into a sequence of keystrokes, and many of those keystrokes, they’re actually just boilerplate. They’re just rote. So the early Copilot kinds of applications really were: if there’s boilerplate, I’ll take care of it for you.

And, you know, the models start to get a little bit better. If you don’t quite know a specific programming language or you don’t know a specific set of libraries, the model will happily supply those because it has almost this like – it’s just experienced so much, it’s seen so many different varieties of things. So it kind of knows, “oh yeah, in this context, this plugs in.” But what’s so interesting with GPT-4 is that it starts to even move higher up where it’s like, “okay, there’s an error message,” and these error messages, they’re super obscure, right? Sometimes you search around for them, you look on various websites, you try to piece together from what other people have said, and no one gets exactly the same error messages that you have. You just have to kind of, like, pattern match against what’s there. And GPT-4 just kind of knows. It’s like, “oh yeah, you know, you forgot to use the nest-asyncio library in Jupyter.” That level of sophistication is now starting to be possible. And so there’s just a lot of just common mistakes, but also now common patterns.

And, you know, sometimes even just creative ideas, like my favorite application, actually, of GPT-4, mostly for fun, but sometimes can be quite useful, is to summarize code as a poem. And you actually get real insights about what that code is doing, and it’s really fun to read it.

AI POEM:
Here’s a poem from your files, Greg. I’ll link to the code that inspired it in the episode’s show notes.
In a world where code intertwines,
APIs dance, and data shines,
a request is made, a response appears.
In the language of Python, our logic steers.
Through headers and methods, we pave our way,
asynchronous calls in asynchronous days.
With sessions and streams, the code unwinds
a masterpiece of tech where brilliance binds.

GREG:
And that’s just something that – I was never going to write a poem, I’ve never read a poem about code that was written by a human, but somehow the AI-written ones are just quite compelling.
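
A concrete footnote to the error-message example Greg gives above. The fix he quotes is a real, two-line one: Jupyter already runs an asyncio event loop, so asyncio.run() fails until the nest-asyncio package patches it. This is a hedged sketch of the situation, not a transcript of an actual GPT-4 answer.

```python
import asyncio

async def fetch():
    await asyncio.sleep(0.1)
    return "done"

# Inside a Jupyter notebook, calling asyncio.run(fetch()) at this point raises:
#   RuntimeError: asyncio.run() cannot be called from a running event loop

# The fix GPT-4 points to in Greg's example (pip install nest-asyncio):
import nest_asyncio
nest_asyncio.apply()

print(asyncio.run(fetch()))  # now works inside the notebook's running loop
```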

REID:
Sam? Surprises of scale?

SAM:
I just looked up a quote to read. It’s from Noam Shazeer, and the quote is, “we offer no explanation as to why these architectures seem to work. We attribute their success, as all else, to divine benevolence.”

And that has been somewhat of a recurring theme for me. I very much think of myself as an empiricist, so if something works, and it predictably works better if you do more of it, even if you can’t have a perfect explanation, I feel very confident in trusting the curve. But when you step back and look at the whole thing – from that “Unsupervised Sentiment Neuron” that Greg mentioned, that’s, like, quite mysterious why that should work, you know, it’s hard, I challenge anyone to give a very rigorous, down to the metal explanation of what’s happening all the way through there – to how some of the stuff that GPT-4 does starts to emerge, to really why gradient descent should work as well as it does at all.

I have made peace with it. I totally believe in it. I think it’s going to go super far, but it is, you know, it was surprising when it started to work, I’ll say that. Even before we started OpenAI, when we just, you know, observed the AlexNet result – that felt like magic to me. I went to school at a time when they told you if you wanted to study machine learning as a student, that the only way to be guaranteed to have a dead-end career was to work on neural networks.

ARIA:
I wanted to shift gears a little bit. One of the things I really appreciate about both of you is that you’re so open to being wrong: “I don’t know what’s going to happen, we need to ask other people, we are not the only people who have the answers.” And Sam, a few weeks ago on Twitter, I think someone snarkily tweeted at you like, “well, next thing you’re going to say is we should regulate AI!” And you were like, “yeah, we should regulate AI!” [laugh] And so, my question for you is, how do you see that happening? How do you see – a lot of people are saying, “oh, you’re moving too quickly” – what would you call for in terms of either regulation or global governance or bringing people in?

SAM:
I think there’s a lot of anxiety and fear right now, and I always believe people when they’re afraid or mad or whatever, even though I don’t think people can always explain the reason. I think people feel afraid of the rate of change right now. A lot of the updates that people who work at OpenAI have been grappling with for many years, the rest of the world is going through in a few months. And it’s very understandable to feel a lot of anxiety in that moment.

Look, we think that moving with great caution is super important, and I think there’s a big regulatory role there. I don’t think a pause in the naive sense is likely to help that much. You know, we spent way more than six months, by the way, not way more, somewhat more than six months aligning GPT-4 and safety testing it since we finished training. So, like, taking the time on that stuff is important. But really I think what we need to do is figure out what regulatory approach, what set of rules, what safety standards will actually work, will actually, in the messy context of reality, work. And then figure out how to get that to be the sort of regulatory posture of the world.

REID:
You know, when people always focus on their fears a little bit, like Sam, you were saying earlier, they tend to say, “slow down, stop,” et cetera. And that tends to, I think, make a bunch of mistakes. One mistake is we’re kind of supercharging a bunch of industries and, you know, you want that, you want the benefit of that supercharging of industry. I think another thing is that one of the things we’ve learned with larger scale models is we get alignment better. So the questions around safety and safety precautions are better in the future, in some very arguable sense, than now. And so with care, with voices, with governance, with spending months, you know, safety testing, I think the ultimate regulatory thing that I’ve been suggesting has been something along the lines of being able to remediate the harms from your models. So if something shows up that’s particularly bad, or can be closely anticipated, you can change it. That’s something I’ve already seen you guys doing pre-regulatory framework, but obviously getting that into a more collective regulatory framework, so that preferably everyone in the world can sign on with it, is the kind of thing that I think is a vision. Do you have anything you guys would add to that, for when people think about how they should be participating?

SAM:
You touched on this, but to really echo it, I think what we believe in very strongly is that keeping the rate of change in the world relatively constant, rather than, say, go build AGI in secret and then deploy it all at once when you’re done, is much better. This idea that people relatively gradually have time to get used to this incredible new thing that is going to transform so much of the world, get a feel for it, have time to update – you know, institutions and people do not update very well overnight – to be part of its evolution, to provide critical feedback, to tell us when we’re making dumb mistakes, to find the areas of great benefit and potential harm, to make our mistakes and learn our lessons when the stakes are lower than they will be in the future. Although we still would like to avoid them as much as we can, of course. And I don’t just mean we, I mean the field as a whole, sort of understanding, as with any new technology, where the tricky parts are going to be.

I give Greg a lot of credit for pushing on this, especially when it’s been hard. But it is, I think, the way to make a new technology like this safe. And it is messy, it is difficult, it means we have to say a lot of times, “hey, we don’t know the answer,” or, “hey, we were wrong there.” But relative to any alternative, I think this is the best way for society to not only get to the safest outcome, but for the voices of all of society to have a chance to shape us all rather than just being the people that, you know, would work in a secret lab.

GREG:
And we’ve really grappled with this question over time. Like, when we started OpenAI, really thinking about how to get from where we were starting, which was kind of nothing in a lot of ways, to a safe AGI that’s deployed, that actually benefits all of humanity. How do you connect those two? How do you actually get there? And I think that the sort of plan, like, the plan that Sam kind of alludes to of just being like, hey, you just kind of build in secret and then you deploy it one day, there’s a lot of people who really advocate for it and it has some nice properties. That means that – I think a lot of people look at it and say, “hey, there’s a technical safety problem of making sure the AI can even be steered, and there’s a society problem. And that second one sounds really hard, but you know, I know technology, so I’ll just focus on this first one.” And that original plan has the property that you can do that. But that never really sat well with me because I think that you need to solve both of these problems for real, right? How do you even know that your safety process actually worked? You don’t want it to be that you get one shot to get this thing right. And so I think that – look, there’s a lot to still learn. We’re still in very much the early days here, but I think that this process that we’ve gone through over the past four or five years now of starting to deploy this technology and learn has taught us so much.

And we really weren’t in a position three, four years ago, to patch issues. You know, when there was an issue with GPT-3, we would sort of patch it in the way that GPT-3 was deployed, with filters, with non-model level interventions. And now we’re starting to mature from that, we’re actually able to do model level interventions, and it is definitely the case that GPT-4 itself is really critical in all of our safety pipelines. And being able to actually sort of even understand what’s coming out of the model in an automated fashion, GPT-4 does an excellent job of this kind of thing. And so I think that there’s a lot that we are learning and that this process of doing iterative deployment has been really critical to that.
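
Greg doesn’t spell out what a model-level review step looks like, but the general pattern he’s gesturing at – one model grading another model’s output in an automated fashion – is easy to sketch. The following is a hypothetical illustration only: the prompt and the flag_output function are invented for this example, it uses the openai Python library’s pre-1.0 interface from around the time of this conversation, and it is not OpenAI’s internal pipeline.

```python
import openai  # pre-1.0 interface; assumes OPENAI_API_KEY is set in the environment

REVIEW_PROMPT = (
    "You are a content reviewer. Reply with exactly FLAG or OK: "
    "should the following assistant output be held for human review?"
)

def flag_output(candidate: str) -> bool:
    """Ask a stronger model whether a candidate output needs human review."""
    review = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": candidate},
        ],
        temperature=0,  # make the review as deterministic as possible
    )
    return review.choices[0].message.content.strip().upper().startswith("FLAG")
```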

ARIA:
That tees me up really well for my next question. So you said just a few years ago you had nothing, it was just the beginning. And so, it takes a lot to go right, and maybe some luck, to, with just a few hundred people, outpace the biggest tech companies in the world to get where you are today. I would love to hear about, like, what is the magic that has led to that, Sam? Are there key leaders within OpenAI? How did you get to this incredible place, knowing that it is just the beginning and there’s so much further to go?

SAM:
I think there’s a bunch of things that we have done, some intentionally and some that we got lucky on. One big one is, I think we built a culture of – and this is another thing I’ll credit Greg for – of really sweating the details and trying to get the details right and bringing people in who want to work that way. And doing, you know, very careful engineering, very careful science, and letting that compound over a long period of time. Another is, I think we’re a pretty truth-seeking org. We just want to find what works and do more of that. And that is different than most of the other AI research efforts that existed in the field when we started that had different incentive systems, different things they prioritized. And then another thing is we make high-conviction, concentrated bets, and so rather than spread out onto lots of things, we did stuff at the time which was considered, like, unimaginable by many other AI labs. We were like, “we’re going to put most of our resources into this one project, but we’ve studied it carefully and we think we can predict how it’s going to perform.” So those are some things I would say. Greg, anything to add?

GREG:
Yeah, I mean, just to give some color on this, sort of, leaning into the cold, hard reality and thinking from first principles. I’d say in the very early days, I think Ilya and I would spend like an hour a day just, we didn’t really have an extra conference room, so we would go into the back server closet and just sort of talk about everything and just debate everything and ask lots of questions about who we should be hiring. Like, do you hire the traditional machine learning PhDs? Do you hire software engineers who have never done this before? Do you hire people who are somewhere in between? And then, you know, I think for each of these decisions, we got to see the consequences of them and then we would update, we would learn. And so I think this sort of iteration, this idea of we can’t know everything in advance, but we can learn from contact with reality, I think was there in our DNA from the very beginning. And even with this point of scale, like, where did this hypothesis of scale come from? I mean, you can look at Dota, which was this competitive video game that we set out to solve at the beginning of 2017.

NEWS REEL:
OpenAI Five, an AI that plays Dota 2, a multiplayer online battle arena game with a huge cult following. In 2019, they flat-out challenged OG, the reigning world champion team.

NEWS REEL:
And it is GG, game over, OpenAI taking game two, taking the series 2-0 as a player of Dota…

GREG:
It ended up being that this was our first real scaled system and that the system got better each week as we put more compute into it. And a lot of people think, “ah, they were out to prove the scaling hypothesis,” but actually it was the other way around: our goal was actually just to run out of room on the existing algorithms so we could do new algorithm development. Like, that’s what we actually wanted to do. And so I think it was just really seeing, as you’re trying to pursue this direction, this other direction’s pulling you along and saying, “hey, hey, this is really working, there’s something here.” And I think that willingness to say we will update, we will pivot, we will change, we will react to what we’re seeing in front of us – that was very important from a technology perspective.

SAM:
Yeah, this has basically been lost to history, so I think it’s a fun example to touch on for a minute. When we were scaling up the Dota project, there was basically no one who was like, “it’s just going to work.” It was like, “eh, we don’t really know what to work on next, so we need to see where it breaks.”

GREG:
100%. Yeah, we had a list of three big ideas. We started on idea number one, we were so excited to work on idea number two. We really wanted to do what’s called hierarchical RL, where you would have some sort of hierarchy that, you know, you’d have long term planning and then you’d have some shorter term muscle movements kind of stuff. None of that ended up being necessary. The really funny thing is that we actually had just a lot of spare compute at the time, like, we had just all these CPUs sitting idle and no one had any use for them. And so it was Jakub and Szymon, two of our researchers, who just would constantly be like, “all right, two x scale this week.” And then you would just see the curve, it would just get better.

And I think that there’s other things we’ve also sort of discovered, right? That there’s all these scaling laws, you can find a lot of these now, these very smooth plots as you, you know, increase the amount of compute in a model or the amount of data, and there’s so many different axes you can vary on, and they all give you these incredibly smooth exponentials. And I think there’s something really deep going on that we sort of, from a scientific perspective, have been uncovering about, I don’t know if it’s the nature of our intelligence, but certainly the nature of this artificial intelligence that we are creating.

SAM:
I think we don’t marvel at this enough because we’ve gotten so used to it, and maybe other people don’t either, but the fact that you can put in an arbitrary amount of compute for a desired level of intelligence, and that seems to span an incredible range of compute and degrees of intelligence, that’s, like, an amazing scientific discovery.

GREG:
It is pretty amazing. And I do want to say that there’s something that also sometimes gets lost in the narrative here, too, which is, it’s not literally just you have a bigger computer and now everything’s better, right? The sources of progress – compute, data, algorithms – those have been constant over many years and they remain so. We’ve also done studies on how much algorithmic progress there’s been: it’s a smooth exponential. And so, I think that there’s these inputs that you actually develop almost at small scale, or across the whole industry, that together, combined, you put them together – the best engineering, the best systems, the best algorithmic ideas – and then that is what yields systems like GPT-4.

And so I think that the one thing we should be cognizant of as we think about this, you know, these rates of progress questions, is really looking at where the progress comes from. And it’s not any one company, it’s really this, like, if you look at the supply chain of all of the inputs, of the GPUs to all of the new algorithmic ideas, to even the large scale data processing systems, like all those things together, it’s pretty massive. It’s lots of people involved with lots of different companies, and it’s really a project of, like, all of humanity, at some core – this, you know, technological progress that’s driving towards being able to deliver the systems that we’re creating.
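
The “incredibly smooth” curves Greg and Sam are marveling at are usually reported as power laws: straight lines on a log-log plot, which is presumably what Greg means by smooth exponentials. As an illustrative sketch, here is the canonical form from the published scaling-laws literature (e.g., Kaplan et al., 2020), not a formula quoted in this conversation:

```latex
% Loss falls predictably as compute C, dataset size D, or parameter
% count N grows, each axis with its own fitted constants X_0, \alpha_X:
L(X) \approx \left(\frac{X_0}{X}\right)^{\alpha_X}, \qquad X \in \{C, D, N\}
% On log-log axes this is a straight line, hence the smooth plots:
\log L \approx \alpha_X \log X_0 - \alpha_X \log X
```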

REID:
So what do you think – I mean, obviously, you know, I think you’re both wise to avoid predictions on future questions because it’s always sooner, stranger, and different than you think. And so, myself having made foolish predictions and then a couple years later going, “yeah, that was a foolish prediction, I knew it when I was making it.” Which industries and which kind of transformations of the world do you think we’ll be first seeing over the next 3-10 years? We’ve talked about education, we haven’t talked about medicine as much, which I think is another one that will have some obvious impact. What are you guys seeing?

SAM:
I think that applying these services to law, I think that there’s a lot of benefit to be had there, with giving access to legal services. And my favorite GPT-3 service was a tool called Augrented, which would help tenants who received eviction notices understand what was in there, right? Because, like, many people in that demographic wouldn’t necessarily have access to a lawyer, and you can actually help people do things that they wouldn’t be able to otherwise. So that’s one example of the kinds of use cases that we’re seeing work really well.

ARIA:
Do you think there’s spaces that are sort of more resistant to AI, that we actually won’t be seeing changes or will be sort of slower to change on that basis?

GREG:
Well, I think the physical world is maybe the most resistant right now. We ourselves had a robotics effort, and a couple years ago we actually shut it down because, you know, a lot of people at the time were like – we had some great field-leading results, we had this cool robotics hand that was able to manipulate a Rubik’s Cube, all that good stuff. But we realized that the digital world was moving so much faster. And so that team actually became our Copilot team. And so, you can kind of see, like, that was the trade, was that you could actually build Copilot and get that out to so many developers.

And so I think that really figuring out how can AI help us in our physical lives? And there’s many, sort of, crossover points, right? You think about just having the ability to, if you have a pet and the pet, you know, has a cone, you want to take the cone off but you want to monitor for licking – like, can you have a webcam that is watching your pet and an AI that notifies you whenever that happens? So I think that there will be these points where we’re going to see the digital world really help in the physical. But I think that that is actually the longer pole than many of these purely intellectual applications.

SAM:
I think it’s going to just be sort of strange. To pick one example, when Deep Blue beat [Garry] Kasparov, which some people talk about as the start of the whole AI revolution, there were a lot of predictions that chess was totally done, that no one was going to bother to play chess anymore, it was just no longer interesting. And it did affect some things and change things, but I believe chess has never been more popular than it is today. And we don’t watch AIs play each other, which would be like far more interesting games or far more complex, better games, whatever you want to call it. We seem to be really interested in what other humans can do in this case. And if you’re cheating with the help of an AI and whatever else, then it’s, you know, a big scandal. But, you know, we see – like, there seems to be something deep there, and a prediction that was not only wrong, but totally the opposite of what happened and what many people would’ve predicted. And I think that’ll be true in many other cases also.

REID:
Before we move on to some other subjects around technology and the future possibility – because you guys, in addition to leading the charge on AI, are doing things also around other areas – I’m curious to get your reflection on the, kind of, the theme of the book Impromptu. Last year, when we were all playing with GPT-4, I said, “okay, how do I try to show, not just tell, the theory of human amplification?” It’s like, “well, why don’t I do a book with GPT-4 as a way of showing directly, here is an amplification moment.” How much do you think I’m right on the human amplification, the amplification intelligence? And how much do you think I’m just kind of being a little too quick on it? What’s your reflex on the tool amplifying humanity?

SAM:
Strongly agree, like, that’s where we would like to push things as much as we can to go. You know, you don’t get to push technology that much, but you get to do it a little bit around the edges, and, thankfully, in this case, it seems like that’s where the technology organically wants to go. And I saw a tweet that stuck with me, just someone who said they never thought they would get to coexist with an intelligence as powerful as GPT-4. You know, by the time you got here, you were deep into the AGI land, and it turns out that it has been more possible than many people thought to build a version of AI that is really good at amplifying individual human will and making us all more productive and better at what we do. And more than that, it turns out that people really love it. One of the most gratifying parts for me of OpenAI is how much people love the product and get all kinds of incredible benefits from it.

GREG:
And I think it’s worth really considering, like, why do we build technology in the first place? And fundamentally, it’s to improve humanity, to improve human lives, to be able to achieve more. And I think that we’re really seeing that unfolding right now, even in the current phase, and I think that even as you move to more capable systems, it’s going to be extremely important to make sure that we’re really architecting them for that purpose, right? That you have an answer for the human as the manager, the human as the end recipient, humanity as the beneficiary of this technology. All of those things, that’s part of our mission is to really make sure that we have an answer for, like, how humanity continues to fit in and continues to be the end beneficiary of all of these systems, no matter how smart they get.

ARIA:
One of the critiques of AI is the energy usage, and Sam, you have said that you think in the future there will not only be unlimited intelligence, but unlimited energy, and you’ve made significant investments in Helion.

We were actually super lucky to have Dr. Kim Budil from the Lawrence Livermore Lab on an earlier episode, and she was, like – we’re all obsessed with her. I would love you to talk about why you invested in fusion and how you see that playing out.

SAM:
Yeah, I don’t think it’s unlimited intelligence or energy, I think it is ever decreasing price and ever increasing abundance, but, you know, we never get to the infinite there.

I think the energy critique of these AI systems is an incredibly lowbrow critique and it usually comes up by the time people are trying to throw everything they can on a laundry list. But what I do think is abundant energy – like, truly global scale, abundant, cheap clean energy – would not only have all of the obvious benefits of addressing the climate crisis and everything else in that vein, but the cost of energy is so correlated to quality of life, throughout history, probably more than any other single input I could think of off the top of my head, that it seems like a great thing to fund efforts trying to radically change that cost, which Helion is trying to do, and I am hopeful we’ll have great news next year.

REID:
And Greg, one of the things that – I know we’ve talked education a lot, but we also talked medical – and I think there are some investments you’ve made also in your 1% or 0.5% side job as an angel investor, since I know how hard you work on the OpenAI stuff. What do you see coming in healthcare, drug discovery, development, primary care, et cetera, and what are some of the things that you’ve been doing in order to enable that?

GREG:
Yeah, and we also have a fund within OpenAI that invests in startups building on top of our technology, and so there’s startups like Ambience [Healthcare] who are actually trying to really operationalize this. And actually even Microsoft is starting to deploy some of our technology through Nuance in Epic and in many hospitals.

NEWS REEL:
Nuance and Microsoft working together, that partnership builds my confidence that we’ll really be able to meet the needs of our patients. It’s going to advance this technology at a speed I don’t think we would’ve been able to accomplish…

NEWS REEL:
Everything that I did here today would take me so much longer to do. This will help me be more efficient so I can spend more time actually talking to patients, looking them in the eye…

NEWS REEL:
To be able to gather and garner insights into the patient that we may not have been able to before is very exciting to me. I think that it’s going to transform healthcare.

GREG:
And so I actually think that if you think about the set of problems that a doctor has, so many of them are administrative. My parents are physicians, like, I hear all the stories about how they were forced to move to a world where they were sitting with an iPad as they were talking to a patient and like filling in boxes in Epic. That is not how patient care was meant to be. And so I think that we are going to move to a world, even looking at the Nuance instant transcription and getting doctor notes afterwards, where the doctor’s able to actually focus on patient care and actually focus on strategy.

And, you know, there’s some use cases that people are already using ChatGPT for that I think are very interesting, right? So if you look on Twitter, there’s someone who saved his dog’s life with ChatGPT, and the story there was that he went to a vet and the vet really didn’t know what to do and said, let’s just observe this dog. And it just kept going downhill and still the vet didn’t want to do anything different, so he presented the medical records to ChatGPT, which very correctly said, “I’m not a veterinarian, you really need to talk to a vet,” but was willing to give him some suggestions, some hypotheses, some interpretations, some brainstorm. And with that he got the confidence to go to a second vet who was then able to run the test to save the dog’s life. And I think that this is a really interesting parable because it really points to this question of how do we want the AI to slot in? Like, you have to be so careful about overreliance in areas like medicine, but humans aren’t perfect either, right? That first doctor made a terrible call, it could have had really fatal consequences, and so figuring out how to have the right humans have the right oversight and ultimately, you know, as a patient you actually own the outcome, right? That you have to be a medical professional even though you’re not, right? So it’s like somehow our systems are not quite serving the outcomes that are required. But with AI, I think that we can provide amplification in all of these places if we get it right, if we put the right guardrails in place.

ARIA:
How do you guys see that working? You said, Greg, for instance, that you are investing through OpenAI in companies that are using your platform. Like, is OpenAI going to be creating the change? Is it other companies that are going to be using your APIs? What is the structure for all of this change that is going to happen? I know we’re not allowed to predict the future, but would love to see what you think is the optimal way for everyone to be using AI to make all these industries better.

SAM:
What we see now with people integrating the OpenAI API in amazing ways everywhere, and as the models get smarter and smarter, having that just continually lift up what the products and services are capable of – that’s just going to keep going. So this one I think we can answer with some confidence.
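
For readers who haven’t seen one, an integration of the kind Sam describes looked roughly like the sketch below at the time of this conversation, using the openai Python library’s pre-1.0 interface. The prompt is a placeholder, echoing the tenant-rights use case mentioned earlier; the “continual lift” Sam points to is that upgrading the one model string improves every product built this way.

```python
import openai  # pre-1.0 interface; assumes OPENAI_API_KEY is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-4",  # swapping in a newer model lifts the whole product
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain this eviction notice in plain English: ..."},
    ],
)
print(response.choices[0].message.content)
```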

REID:
What have been some of the unexpected surprises so far, either within the use of the API or investments, of the various directions of enablement, amplification or other? I know that one of the things that comes from networks and platforms is some things that you’re expecting, some things you expect don’t happen, and then other things you just completely like, “I never thought of that!” What are some of the “I’ve never thought of that,” or, “that was a surprise, that’s so soon.”

GREG:
Well, I’ll tell you on the platform abuse side with GPT-3, the thing we expected to be the most desired abuse vector was misinformation. We thought that’s what everyone was going to do, and so we put all of our effort into really making sure that we could monitor for it, that we could see what was happening. And in reality, the single most common abuse vector was medical spam, making advertisements for various drugs. And so there’s something about this sort of the shape of the technology, how it fits in how people want to use it, it is very different from what you would expect. And we’ve actually seen this even in terms of our product development, you know, like where the API came from, where ChatGPT came from. Both came from this place of literally – Sam remembers this well – we spent, like, I don’t know, a couple months just writing down all the different ideas that we could work on for both GPT-3 and for GPT-4 of, like, maybe we could do a medical thing or a legal thing. And for each of these, it just felt like we’d have to give up on the AGI dream, right? We could become a company selling to hospitals, but you really got to get serious about being a company selling to hospitals and that’s what you are. And so we were like, “you know what, maybe other people can figure out how to use this technology.” And this is totally backwards from how you’re supposed to do it. As a startup, you’re supposed to have a problem to solve, not a technology in search of a solution. And I think my conclusion from the fact that it seems to be working is that I think AI might just be different from that, because it’s like every company, every individual, every business is a language business. It has language flows deeply baked in. So if you can add a little bit of value in existing language workflows, then it will just be able to be adopted so broadly.

REID:
What do you think will be some of the areas of scientific development? We’ve obviously seen various protein folding and other things, but what will be some of the 3-10 years of science acceleration from using AI or the modern AI techniques?

GREG:
Well, I have a pithy answer for you, which is that Terence Tao recently said that GPT-4 has sped him up. You know, he’s a famous mathematician, and it takes away all the tedium of writing grants and, like, a lot of the things that people actually spend their time on. And so, I actually think that there’s going to be a lot of this mundane, like, you just realize that the world’s brightest minds are spending all their time wading through this not very desirable, not very intellectually stimulating work, and I think that we’re just going to see people achieving more across the board.

SAM:
Yeah, that’s definitely my strongest answer right now. We may find that it’s really good at other things, but the fact that it is this productivity multiplier for basically everyone will make our best scientists much better, and that will be the way that science speeds up a lot.

Now, eventually, probably, these systems can help us hold more knowledge in one brain, for lack of a better word, than a human can and discover new connections, new ideas, whatever. But what we’re seeing right now is already so impressive.

GREG:
And just to add to that, I think that one thing we have not seen yet from this technology is really coming up with new ideas. And we’ve seen a little hint of it, right? You go look at AlphaGo, you know, there’s a famous move that no human would ever play. And that was something that gave humans a bunch of insight into how to change the game. We saw the same thing with Dota, where we beat the world champions. They actually had been having a poor season that year, and afterwards they went on to win the World Championship again, the first time ever in that game, using strategies that looked a lot like the OpenAI strategies. And so I think this idea of being able to learn from the machine, that’s something that we haven’t really seen yet from the GPTs and could be something that will unlock a lot of potential benefits.

ARIA:
All right, let’s move to rapid fire. Is there a movie, song or book that fills you with optimism for the future? Greg?

GREG:
I mean, look, I thought that Her, the movie, was very interesting. It’s very hard to find positive depictions of AI in Hollywood. And I think Her is, like, as good as it gets from the Hollywood perspective, but I think we can do better. I think that we are really going to be able to do something that’s just an amazing world.

SAM:
I didn’t want to give the same answer I always give to this question, but I thought about it more and I think it is just, like, the right answer. The Beginning of Infinity. Super cliche, but hard to be more optimistic, for me, than that book.

ARIA:
Awesome. And Greg, I just re-watched Her. It’s so great! [laugh] I’m with you.

REID:
In this case, we’ll start with Greg. Where do you see progress or momentum outside of your industry that inspires you?

GREG:
Energy. I love watching Helion, honestly. Like, I think that anyone who has a shot on goal of just delivering a super hard technological breakthrough that can unlock a better future, like, that has my support.

SAM:
That’s what I would say, too. I think that is the second most exciting thing in the world right now by far.

REID:
I do think also, obviously, there’s a long list here, like mRNA and, you know, a whole bunch of other things, but I agree with energy and Helion and a number of others as well.

ARIA:
Well, now I feel like I know your answers to the last question, but I was going to ask what technology you are excited about for its ability to transform your field. But I feel like AI is actually transforming all the other fields. But Sam, is there a technology that you’re watching that you’re like, “this is going to do it all for AI?”

SAM:
Man, I hate to be so boring, but if Helion really works and we can power large data centers, that would be cool.

ARIA:
I mean, I can’t wait till next year. You told us there’s a surprise coming and I’m excited about it.

SAM:
There’s a long gap between a scientific demo and something that is reliable.

REID:
So, last rapid fire. I think tomorrow marks one month since GPT-4 launched. For each of you, starting with Sam, fondest memory of this last month?

SAM:
Actually, the moment that we launched. A bunch of us were in the OpenAI cafeteria together, there was like a little countdown clock. We had been working on this for, like, more than a year, and there were all these last minute little things that came up. But it was just like an extremely fun team spirit moment.

GREG:
Yeah, the funny thing for me is I actually love the launch process. Like, at this point – you know, one thing I reflected on at some point is I’ve done so many launches across the years, between Stripe, OpenAI, Dota, where you’re in an arena full of 20,000 fans that are cheering or booing for your AI. And I think that each launch you do is just so different, right? There’s just something new to learn. And so, for GPT-4, I just really enjoyed the process of producing that blog post with the team. Like, there’s just so much that you just think really hard about. What is it that we did? Why did we do it this way? Wait, does this actually make sense? And really figuring out that story and how it all fits together in a package that is understandable and really conveys both the strengths and the weaknesses and, you know, your hopes and your dreams and any places that it didn’t quite pan out. And so I just really enjoyed that process of just really thinking hard about reality and writing it down in a way that’s digestible.

REID:
Before we might close, anything in particular that you think that the general media discourse needs to be corrected about when it comes to AI? I mean, you know, some of our questions are obviously on that, which is, think about the amazing future you can build towards and don’t just linger on potential science fiction badnesses, but anything that, generally speaking, it’s like, “look, pay attention here and don’t get overly distracted by this.”

GREG:
So, I have two points. One is this question of where the progress comes from. A lot of the focus has been on these large scale-ups, but I actually think that they’re more of a sign of progress than a driver of progress. And they are useful artifacts, so it is something that is worth paying attention to, but really, compute, data, algorithms – those have been progressing and continue to progress, sort of, across the industry. And I think one source of potential risk is overhangs – like, the more that it’s possible, if you were to put these things together but you haven’t actually done it, to produce something that’s just, like, totally going to be a step function for society.

That’s where I start to get nervous. You know, for me, even the fact of seeing the reaction to ChatGPT and how much it felt to people like it came out of nowhere, whereas we’ve had years of seeing this technology get a little bit better, a little bit better, a little bit better. The model on ChatGPT, that wasn’t new, that had been out for almost a year at that point. And so I think that there’s something here about this continuity of really figuring out how we as a society, as an industry, as a world, as a species can coordinate. But you’ve got to really delve into the details of the technology to really figure out where the right places are to pay attention to say, “hey, we should all work together in this way.” And I think that if you’re naive about it, if you don’t quite get that right, you can actually have more negative impact than positive.

And I think a second piece that people just don’t talk about enough, I think, is really thinking about: where does humanity need help? One example to think about is specialization, right? That if you go to a doctor – like, I remember I had a wrist injury at one point and I wasn’t using my wrist and I started to have neck pain and I asked the doctor, “hey, any idea what this might be?” And he looked at me, he’s like, “I’m a wrist doctor.” [laugh] He was not going to answer my neck question. And we need to have the way out, we need to have a way to cross-specialize, to actually pool knowledge across these disciplines. And that is the essence of an AGI: the ability to do that and to learn new things and to go into new areas and figure things out. And so I think that there’s a tool that we are missing, sort of a capability that society needs. And I think really leaning into, “where are the gaps? And how can we fill them?” is as important as saying, “where are we accidentally filling things that we don’t want to right now?”

REID:
I loved talking with both Greg and Sam, and even though I’ve obviously been working with them for years and know a bunch of how they think, they invent, they create the future, they create what’s possible, it’s still really interesting and useful to hear how they’re thinking about, you know, anything that runs from how you’re improving education to the questions about, like, how do we do science acceleration and how do we navigate this future?

ARIA:
It’s just always such a good reminder. You think OpenAI launches, seven years later, GPT-4 comes out. Like, such a straight line makes so much sense. And so it’s so great to hear from Sam and Greg, “oh, in the beginning we didn’t know what the solution would be. We didn’t know if it was going to be LLMs. We were trying things out. We had spare compute back then that we just threw problems at.” And so, like, thinking about all of those different iterations that got them where they are today, it’s no surprise that there’s ups and downs on a startup journey, but it’s still surprising to hear from the founders and have them talk about the different things they tried, and they had three things on the list and, oh, the first one worked, and I’m still ready for two and three. I still want to know what else they got in the back ready to deploy.

REID:
And I think another thing that was really helpful about the conversation was sharing, in very brass tacks, how this innovation journey works. It’s like, “look, we were doing robotics, we were doing, you know, the hand and how to make them, but then it was just moving too slow. It just wasn’t working, that function, and we had to refocus our efforts on the things that would really make a difference.” And that trying a bunch, some of it works, you double down on it, some of it doesn’t work, you refactor, is a part of the innovation story and it’s what everyone needs to learn.

So often the mistake of innovation is thinking you can plan it and it works exactly to plan, like a construction project, like building a house or something else. And actually, in fact, it’s moving fast and experimenting and trying things. And, by the way, occasionally, “whoop, that didn’t work,” and, “oh, wow, we had all this spare compute and we tried it and it really worked. Let’s do more.”

ARIA:
I loved the way they thought about safety being so important, but also getting it into the hands of people. They wanted millions of people to be kicking the tires with ChatGPT so they could make it better, so they could make it more safe, so that people could be using it in innovative and incredible ways that they couldn’t even dream of. It’s like, “let’s let other folks decide how we can use this incredible tool for all the positive things that are going to happen.”

One of the things I loved about hearing about the future use cases for AI is that there’s so many things that middle class people or upper class people in our country, and around the world, have access to that folks who don’t have as much money don’t have access to. And so when Greg talked about, “oh, think about people facing eviction, they don’t have access to the legal services that other folks might have, and with ChatGPT or GPT-4, we can actually expand justice, we can expand legal services to hundreds of millions of people.” And I think that is the true wish here.

You know, as Reid mentioned in the episode, how can we bring the billions of people around the world into the middle class and have the same middle class access to food, housing, justice, legal representation as other folks have. And so that, to me, is the real dream of AI, and it was really exciting to see that Greg and Sam are thinking about it.

REID:
Possible is produced by Wonder Media Network, hosted by me, Reid Hoffman, and Aria Finger. Our showrunner is Shaun Young. Possible is produced by Edie Allard and Sara Schleede. Jenny Kaplan is our executive producer and editor.

ARIA:
Special thanks to Theresa Lopez, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles.