This transcript is generated with the help of AI and is lightly edited for clarity.


REID:

I am Reid Hoffman. 

ARIA:

And I’m Aria Finger.

REID:

Too often when we talk about the future, it’s all doom and gloom.

ARIA:

Instead, we want to sketch out the brightest version of the future—and what it will take to get there. We’re talking about how technology and humanity can come together to create a better future.

REID:

This is Possible.

ARIA:

We love the community that we’re building with this show. And we hear you: you’ve asked for more of Reid’s takes. So, starting this season, every other week, Reid will be in the hot seat. I get to ask him a few questions in the spirit of the previous week’s episodes and get his thoughts on the latest on AI, technology, and the future.

ARIA:

Reid, I am so excited for our new segment where we get to turn the tables, and I get to ask you some questions related to the topic of the day. So last week we spoke with Kevin Scott—such a brilliant technologist, humanist—and we got to talk to him about both the sort of geography of jobs and AI—and how do we make sure that tech is spread out equitably?—but also the origins of AI and how people are adopting this at different speeds. You know, you and I have been talking to experts in AI, and we’re so excited about what’s to come, but a lot of people are just learning about it, and there’s a lot of skepticism. So I wanted to ask you a question about another technologist who you obviously work closely with: Bill Gates. He was very excited about AI, but potentially skeptical about this approach: would this approach be able to give you the gains in intelligence that we were looking for? And one of the things he said was, “Once we have a tool that could pass an AP Bio exam, you know, that’s interesting. That’s a level of intelligence that I’m excited about.” And so I think you, Sam Altman, and Kevin Scott did just that; you went by his house and showed him this latest technology that could pass an AP Bio exam. So tell us about—what was that like?

REID:

So Bill obviously has been a great advocate of the importance of AI for years in advance of this current revolution. And part of it, you know, super smart guy that he is, was saying, “Look, one of the things that’s really important is to be able to have a knowledge representation of the world,” and so forth. And so he was initially kind of throwing out some interesting challenges to the large-scale language model approach. But, you know, one of the things that’s great about Bill: he learns and updates intensely. And so part of the dialogue was to say, “Look, if you could show that it could read a set of biology textbooks and pass an AP Bio exam, then that would show that it has sufficient knowledge representation—even if you can’t point to the symbols in the computer that do it, that would reflect that.”

REID:

And we’re like, “Oh, well, we think we can do that [laugh].” So we went off, and as part of training GPT-4, we did not train it specifically on the AP Bio exam. We just trained it on a wide range of all textbooks and a bunch of other things. And so we arranged a dinner at Bill’s house in Seattle. Had, you know, a stack of OpenAI people. The person presenting was Greg Brockman. And a set of folks from Microsoft: Satya was obviously there, but some other executives too, like, you know, Rajesh [Jha] and Charlie Bell and others. And, you know, Kevin [Scott] obviously. And so we went through it, and we actually even had this woman who was one of the Biology Olympiad competitors there to help, you know, kind of ask the questions and parse it and kind of evaluate what we were doing.

REID:

And we started going into the demo. And when I asked Bill at the end of that, I said, “So where does this rank in tech demos that you’ve seen?” Bill said to me, “Well, there’s only been one other that might be as good as this one. And that was when I was shown the graphical user interface at [laugh], you know, at Xerox PARC, right? And so it’s at least that, if not even better.” It was an epic moment; I think we all felt privileged to be in the room for it.

ARIA:

And do you have a similar sort of test now? Like, what is your AP Bio exam? Is there something that you’re like, “Man, if AI could do X, Y, Z, you know, today or by the end of the year…” that you would be similarly excited about? Like, we’re passing by these milestones; we’re blowing past them. Is there something you’re excited about?

REID:

There’s a whole stack, and they range from some things that are kind of more pedestrian, which is thinking about things like Inflection and Pi. Which is: can it remember a sequence of actions and execute a plan as your agent out in the world? Like, “Hey, you know, I’m going to Rome, book me a good tour of the Vatican Museum.” And, you know, that kind of stuff as a way of operating. And there’s a whole stack of stuff in that. There’s also memory and personalization: remembering you and what matters to you. I think all this stuff will happen, but there’s a bunch of things that will be important milestones as we get there in the development of it. Then there’s a stack of things where we think, okay, you know, high probability of accomplishing, just not exactly when. So for example, a lot of what people are working on right now is reasoning and general reasoning capabilities.

REID:

Because, you know, part of what you see is you can break these things because they don’t understand when they’re making foolish mistakes, like around prime numbers or other kinds of things. And an ability to kind of navigate that, and I think some more general reasoning capabilities, will improve capabilities for a cognitive industrial revolution. Then you get to the next level up, which is things that you have a good possibility of doing, but they are hard and will take specific work. Drug discovery, you know, and other kinds of, you know, biological sciences and so forth, where, you know, there’s obviously good work going on. Protein folding, you know, with Isomorphic and Baker fold and other kinds of things. But there’s also going to be some very specific work that will make some amazing discoveries. And then the next level beyond that is: “Well, could it start doing things that we currently don’t see a line of sight for, but that could be amazing?” Like, for example, could it help us with the invention and creation of fusion power?

REID:

Or could it discover, you know, new branches of mathematics? Or make some intersections between different scientific fields, because there’s so much density of information that it goes well outside even, you know, any one genius’s head, into multiple people’s heads, and then pulling that together. And the probability of that is unclear. Obviously you go, “Well, but we’re still increasing cognitive capabilities. It’s only a matter of time.” And it’s like, well, not clear, because you could increase cognitive capabilities infinitely for the next hundred years and still not get that, right? It’s how you are increasing the cognitive capabilities. And that’s one of the reasons why people frequently, both the proponents and the critics, can be a little hyperbolic and histrionic in either direction, because they just go, “It’s increasing IQ.” And it’s like, “Well, no, it’s increasing a set of cognitive capabilities, some of which already today are superhuman and amazing, and we will continue those. But, you know, what set of it really depends on how—like, will it be creating new science or not, new physics or not, or other.”

ARIA:

Mm-hmm. Well, one of the things we talked a lot about with Kevin last week was geography. You know, he grew up in a rural area and obviously made his way to Silicon Valley. And I have to admit, before I started working together with you, I don’t think I really understood the magic of Silicon Valley. And I will admit that there is so much magic there, in the network and in the helping each other and sort of the deep concentration of talent. But I also know that you, you know, care deeply about equity and making sure that these new technologies are sort of spread out evenly to everyone. So are there certain geographies, you know, city, state, county, international, where you would like to see more investment made when it comes to technology and AI? Like, how can we spread this out? Is it geographic investment that’s needed, or something else?

REID:

Well, there’s a stack of things. I mean, you know, people in the industry obviously like to talk about network effects — and regions have network effects too, like Hollywood or New York for media. And there’s these network effects because it brings in talent, it brings in all the necessary resources for creating, you know, the next level, the next evolution of projects in this. And Silicon Valley is obviously one of the, you know, great lights in the entire world for what happens here technologically. But we’re better off the more locations and the more places we have for doing this. And the way to do it is a little challenging, because you do get, you know, these intense network effects. Like, if people say, “I want to, you know, move somewhere to maximally create an AI startup in the world,” Silicon Valley’s a good choice today for that.

REID:

But by the way, London is not a bad choice. Paris is not a bad choice, right? And so there’s a stack of things, and what you’re trying to do is build it up. So some of that’s investment in the area. Some of that’s government policy. Some of that’s, like, one of the things that Macron has done very smartly in Paris, which I think has helped with their AI thing, is saying, “Hey, if you bring your technology experience back — Silicon Valley, other places — and come here, you’ll have a tax-advantaged status for coming back and working here,” to try to bring talent back. And obviously, when talent’s there and can build amazing companies, global capital follows. And obviously also you need high expertise, and, you know, there’s a bunch of great technical schools in France. You know, and obviously also in London, Oxford and Cambridge and, you know, other places. And all of those things play in.

REID:

Now, the one kind of thing that I tend to always emphasize — and that’s part of the reason I talked about Macron’s genius gesture here — is you always want to be building off the network. But how do you extend the network? So it’s like, how do you make connections between Silicon Valley and Paris? How do you bring the talent that has learned a bunch of stuff in Silicon Valley and have company formation in Paris? And so yes, you want capital. Yes, you want investment. Yes, you want government policy. Yes, you want immigration stuff. Yes, you want to be startup-friendly — to be able to take bold steps and move things, and to make an effort at innovation without having to, you know, prove your possible innovation benefit, you know, 15 different ways before you do anything. You know, et cetera, et cetera. All of that’s important, but you need to be building on the network and leveraging as much of the network as you can. And, you know, I think probably the first time I had that observation and started acting on it was when I joined and helped a set of efforts in the UK. First Silicon Valley comes to Oxford, and then Silicon Valley comes to the UK — with Sherry Coutu — in order to be bringing that network building, which brings a proxy of the strengths we have in Silicon Valley to help elevate other geographies as well.

ARIA:

Yeah, no, I mean, I love it. So speaking of AI, you’re always going to get people being skeptical. They were skeptical in the early days. People are skeptical now. You are not a skeptic — so what do you think is the most misplaced skepticism of AI, and why?

REID:

Well, as you know, I beat the optimistic drum very loudly, because the vast majority of people think that they’re being helpful and clever by articulating their skepticism. And actually, in fact, I think most people’s articulation of skepticism is actually harmful for humanity and so forth. Not because they should be quiet, but it’s like: do the work to articulate your skepticism in a way that helps you build something that’s good. The whole point is we’re trying to get to a really amazing thing, and we’re trying to navigate our way there. And so the question is to say, “Well, what are the most important things that might go wrong? And what are the possibilities of how to navigate around those?” So the question is to talk to the people who are trying to figure out what to do about this, and help shift to “here are specific kinds of things that you need to be doing.”

REID:

So the kinds of things that I advocate are: we have many, many years of this being a human-amplifying technology. So the question is, how do we amplify essentially the right humans — like, say, doctors, educators, you know, entrepreneurs creating products and services, you know, et cetera, et cetera — to do great things for human beings, and less the human beings who are being destructive: criminals, cyber criminals, terrorists, rogue states. And obviously the engineer tends to be like, let’s try to create the tools so they can’t be used for bad. And obviously guardrails and safety on the tools are good, but ultimately it comes down to human beings. You know, it’s like we don’t say, “Hey, we’ll just make the nuclear bomb self-determining about when it’s going to go off or not.” We actually put it in selective hands, and that’s an extreme example. But it’s kind of like: okay, what are the things we do to make sure that the set of hands includes none of the critically bad, and as many of the good as possible?

REID:

Another one is you go, well, you know, we’re working very fast in building this technology. Are there any areas where, you know, we could possibly be kind of putting a runaway train out there? But you have to look at what the areas of those possibilities are. And so, for example, you know, when people say, “Well, I would like to set up AI without human beings in the loop in the following thing,” you go, “Well, go look at Dr. Strangelove or WarGames, or…” Let’s have relatively few, right, autonomous major systems completely controlled by AI, until we understand how the systems behave at a very high level of probability. That, I think, is a general goodness and kind of a principle. And so, like, what are those areas that you should be cautious about? And it’s one of the reasons why, for example, one of the things I’ve been doing over the last, you know, at least eight years has been, you know, arranging, you know, kind of 501(c)(3)s, universities, and, you know, a Vatican working group, and governments, and everyone else to pull together, you know, key leading developers, including a lot of commercial labs, to say, “Look, let’s share information on how to make this align well with very positive human outcomes. And how to avoid potential destructive elements, whether it’s humans using it or accidents or other things, by sharing that kind of information.” But, like, safety protocols and awareness of how each other are thinking about it, so they can challenge each other and say, “Hey, are you doing well enough here on what your potential social impact might be by releasing this technology?

REID:

Have you done some red teaming? Have you done some testing, et cetera?” And, you know, not surprising for you and other people who know me, it’s a classic network thing — to increase the probabilities of very good things and decrease the probabilities of bad things.

ARIA:

I love it. I mean, to your point about total network building, it sometimes feels like these camps are separate camps. And never the two shall meet: they don’t talk, they don’t speak the same language, and, you know, they both think they’re doing the right thing. So sometimes just getting them in the same room is so critical. Reid, thank you so much. Really appreciate it.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Parth Patil, and Little Monster Media Company.