PETER:

At some point, it just makes sense that, of course, you would never practice medicine without washing your hands first. And we will definitely get to a point—probably sooner than we think—where, of course, you would never, ever dream of practicing medicine without the assistance of AI.

REID:

Hi. I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:

We’re speaking with visionaries in many fields, from art to geopolitics, and from healthcare to education.

REID:

These conversations showcase another kind of guest. Whether it’s Inflection’s Pi or OpenAI’s GPT-4, each episode will use AI to enhance and advance our discussion.

ARIA:

In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:

This is Possible.

REID:

Last episode, we spoke with Siddhartha Mukherjee about the future of medicine, disease, cutting-edge therapies, and some philosophy, too.

ARIA:

It was a fascinating chat, and the topic of healthcare is so wide-ranging; we just had to double down. Today, we are diving deeper into how AI might revolutionize health and medicine in the future.

REID:

GPT-4 is already being used by doctors to great effect, and it has so much potential to bridge gaps in care for patients as well. That’s especially true for people who may not have easy or consistent access to medical care. Wouldn’t we say that some support in these cases—of course, with the right guidance and guardrails—is better than no support at all?

ARIA:

But as we realize the potential of a tool like GPT-4, we need to do so carefully. So how do we look at a technology like GPT-4, with all the possibilities there, and use it correctly? Our guest today is the one to ask.

REID:

Peter Lee is a renowned computer scientist currently serving as the corporate vice president of Research and Incubations at Microsoft. He coauthored the book The AI Revolution in Medicine: GPT-4 and Beyond. He also serves on the board of directors for the Allen Institute for Artificial Intelligence, the Brotman Baty Institute for Precision Medicine, and the Kaiser Permanente Bernard J. Tyson School of Medicine.

ARIA:

So we sat down and talked to Peter Lee about the future of medicine, but what I think was so great in talking to Peter today is that we actually don’t even need to look into the future. We can talk about right now. Peter talked about how he is using GPT-4 for his own personal medical issues, with his family, with himself, but also how doctors—again, today—are using GPT-4 and AI to make them better and smarter. And I think the other thing that Peter does a great job of illuminating for people is that we all need to understand how to use this tool. You’re not going to use a calculator for spellcheck. I think a lot of times people say, “Oh, well, GPT-4 can’t tell me exactly what the first paragraph of that book is.” Well, no, because that’s not how you use it. And so when the medical establishment learns how to use it better, there are going to be more positive outcomes for everyone. And so here’s our conversation with Peter Lee.

REID:

We’d like to trace your, you know, professional bio in a somewhat atypical way. You know, you’ve said that there’s a number of times in your career you’ve seen moonshot research jump to the consumer’s pockets. Can you recount the story of your career in those instances from, you know, where, where you count your early work to, to today?

PETER:

My career, and the actual technical contributions I’ve made in my career, have been more marked by what happens in the background, in the foundations. If I think about the major moments, it’s always been a matter of me being asked to do things, sometimes against my will, and me being frankly too spineless to say no. And luckily, it’s been smart people who’ve asked me to do things. So after the 2008 election, my friend Tom Kalil was appointed the science and technology policy director for the Obama White House. And he leaned on all of his friends—I happened to be one of his friends—to write two-page position papers. And so he asked me to write two pages on what to do with DARPA. Of course, as soon as everyone took office, I was asked to go abandon my professorship at Carnegie Mellon and serve at DARPA.

PETER:

That ended up being a transformational, serious growth experience for me. That first week, you know, I had to meet with the secretary of defense, and I had no idea what to tell him. I was driving down from Pittsburgh, down to DC, and I got nabbed for speeding. And then I thought, with these trips, I’m going to lose my license. And so I downloaded Trapster on my phone, and then I realized—I know what I’m going to talk about to the secretary of defense. I’m, I’m going to talk about, you know, how crowdsourcing and network effects and machine learning can really make a difference. And, and, you know, and it’s been a series of accidents like that. You know, I was never medically trained. And then, you know, I was on a special assignment working on a, sort of an internal incubator for Satya Nadella and Harry Shum at Microsoft.

PETER:

And then in 2016, I was supposed to come back to, to research. And then Satya Nadella says, “No, I need you to work on healthcare.” I thought I was being punished for something. I, I thought, well, what do I know? And furthermore, there are 12 other corporate vice presidents at Microsoft that are doing their own thing in healthcare. They’re not going to listen to Peter Lee. But, you know, ultimately you sort of overcome these things. You get immersed, and you start understanding, wow, there’s some really important stuff to do here. And you get a little bit lucky. I just had the good fortune that really smart people saw things in the future and asked me to climb on board. And, you know, whether I was smart enough to say yes or too spineless to say no, I ended up being in a good spot.

ARIA:

Well, I think that’s a lesson we all learn over and over again: so many people who are doing amazing things—it’s not as if you plotted out, “And this is how I will get to the top of Microsoft.” Like you said, you just were working with great people, doing incredible work. And, you know, so that, that brings us to today, to healthcare and AI. And so, you know, your relationship with AI, health, and medicine hasn’t just been academic. And I understand that you’ve used GPT-4 in your own life to better understand the medical options available to you and your family. How do you use GPT-4 as a tool in that role? Less about the medical establishment, and more about your own life?

PETER:

You know, you can receive, you know, you get a physical exam, and then maybe a week later in email you get a PDF file with all of your blood test results. And I don’t know about you, but it’s gobbledygook. Look, I’m even an elected member of the National Academy of Medicine, and I can’t decode these things. And you, you don’t feel good about calling your doctor to waste his or her time with this. It’s empowering to just ask GPT-4. I do this through Bing, but you can also use ChatGPT, to say, “Look at this and tell me, explain this to me. Is there anything I should be concerned about? It looks like my LDL is a little out of range. What does that mean?” And, and, and so on. Also, you get these—if you have health insurance—you get this weird thing called the explanation of benefits notice.

PETER:

I always felt bad that I could not make heads or tails of these things, but I’ve since learned that even executives, C-suite executives at insurance companies—they don’t understand these things either. And again, it’s just incredibly powerful and empowering to ask, you know, Bing or GPT-4, “Look at this. Explain this to me. Do I owe money? Why am I getting this notice? You know, what, what happened?” And having the AI, having GPT-4, you know, look at those CPT codes in those explanation-of-benefits notices and tell you, “Oh, someone in your family, you know, had this lab test done, and this much was covered by insurance, this much is being covered by your provider, and you don’t owe money.” Things like that are empowering. The thing that I’ve been less public about is that, about four months ago, my father passed away after a very long illness.

PETER:

He was ill for about 18 months, and it was his time to go, and he went very peacefully. But, but it was a struggle for over a year and a half. And part of the struggle is that my two sisters and I were all trying to provide for his care—support his care—but we all lived hundreds of miles away. You know, what happens is you have a primary care physician, you have two specialists, many months, lots of lab tests, lots of other information you collect. And the pressures on family members, on siblings, get to the point where relationships start to fray. And the typical flare-up point is, you know, we finally get that golden 15-minute phone-call appointment with specialist Dr. K, and an argument erupts between me and my two sisters on how to best use that 15 minutes.

PETER:

What I’ve learned is this is just such a common experience that people are going through. And so, to just give all the lab test results, all the other notes, to explain in writing the current situation, and then say to GPT-4, “We have a 15-minute phone call with Dr. K. What are the best three things to ask and talk about?” And it, it just brings the temperature down, makes us feel more certain and empowered in the conversation with Dr. K. It actually helps Dr. K, because she doesn’t feel like we’re wasting her time and being random. And, you know, it preserves a measure of family harmony in things. And so, you know, when you think about that, I can do that because, you know, I have access to technology, and I’m a techie, and I’m comfortable with using these things. Do I expect the normal consumer to just use ChatGPT in that way, even though ChatGPT is pretty easy to use? Well, maybe. But what I really hope is that there’ll be some great companies and ventures that will start to make customized experiences powered by technologies like GPT-4, you know, that can really help. You know, we have this silver tsunami going on in North America, in Europe, in Asia. This is just, I think, going to be a normal problem that everyone is going to have to cope with.

ARIA:

Now I would love to test out GPT-4 and how it can be used in your field. Can you give us a demo of how it can be used as a copilot for doctors?

PETER:

Yeah. So the thing that’s been discovered is that GPT-4, without any kind of specialization, is extremely good at answering these types of what are called “curbside consultation questions.” Curbside consults are the kinds of questions that doctors ask each other. And this happens all the time—multiple times a day—particularly in general practice. Let’s, let’s try one.

PETER:

Hi, GPT-4. I’m a doctor. I have a patient with some chest pain, elevated heart rate, but normal EKG. I’ve just measured, and he has elevated troponin. What else should I be looking for?

GPT-4:

Hey, Peter. Although I should emphasize that I’m not a healthcare professional, elevated troponin levels with chest pain and an elevated heart rate could potentially point to a cardiac event, like a heart attack. In this situation, you might want to consider additional tests like coronary angiography to check for arterial blockages or an echocardiogram to evaluate heart function. Now, you’re probably aware that other factors, like kidney issues or muscle damage, can also raise troponin levels. So having a complete clinical picture is crucial. Definitely consult with specialists to corroborate your findings. What’s your current game plan for diagnosis and treatment?

PETER:

Well, I’ll definitely order an angiogram. Please let me know the urgency of that. Also, one thing I forgot to mention before is: I did listen to his lungs, and I do hear some evidence of rales.

GPT-4:

Given the elevated troponin, chest pain, and elevated heart rate, I’d say the angiogram should be conducted as soon as possible to rule out or confirm coronary artery disease. Time is of the essence in potential cardiac events. The presence of rales in the lungs could suggest fluid accumulation, possibly indicative of heart failure or another condition like pneumonia. In light of this, you might also consider a BNP test to assess heart failure, or even a chest X-ray to get a better idea of what’s going on in the lungs. So all things considered, I’d say you’ve got multiple reasons to act urgently here. How are you planning to proceed?

PETER:

Thanks. In fact, I think we can get that X-ray ordered at the same time as the angiogram, and I think that’ll save a big step in the patient’s treatment. So, thank you.

PETER:

So that’s a simple type of example—and these are things that doctors consult with each other about all the time. But access to that kind of information is oftentimes not present. You know, when a doctor gets the initial presentation of a patient and some labs, initial lab test results, a doctor will develop what’s called a differential diagnosis. A differential diagnosis is the list of possible causes for the patient’s maladies, with probabilities assigned to each one. A great use of GPT-4 is to give all that information to GPT-4 and say, “Take a second look at this. Is there anything I’ve missed? Anything else I should consider?” We find that GPT-4 is almost superhuman at being able to critique the work that people do and be that second set of eyes.
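
A minimal sketch of that “second set of eyes” pattern, assuming the OpenAI Python client (openai >= 1.0); the model name, prompt wording, and case details below are illustrative, not taken from the episode:

```python
# Hypothetical "take a second look at my differential" helper.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

case_summary = """
Presentation: chest pain, elevated heart rate, normal EKG, elevated troponin, rales on auscultation.
Working differential: 1) acute coronary syndrome, 2) heart failure, 3) pneumonia.
Plan: urgent coronary angiography and chest X-ray ordered together.
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "You are assisting a physician. Critique the differential diagnosis and plan, "
                       "flag anything that may have been missed, and comment on urgency.",
        },
        {"role": "user", "content": "Take a second look at this. Anything I've missed?\n" + case_summary},
    ],
)

# Print the model's critique for the clinician to review; the clinician remains the decision-maker.
print(response.choices[0].message.content)
```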

REID:

Yeah, a thousand percent. You know, one of the funny things is, when I was rushing to get Impromptu published, you know, one of the things we talked about was whether or not we should put a medicine chapter in Impromptu—because I knew your AI Revolution in Medicine was coming and would be so much better than what I would do. I was like, “Well, let’s not have a medical, let’s not have a medicine chapter. There’s a much better book coming on this.” So, my team and I enjoyed the book, obviously. So, you use a fictional story to start. What’s your take on how fiction can be used to help or harm our understanding of the future of AI? And how would that affect, you know, consumers in the home, and the silver tsunami, and kids and families and mothers? What’s your, what’s your take on that? I thought the use of fiction was great.

PETER:

You know, maybe the story about why we wrote a book to begin with: This was, I think it was in August of 2022 when I was first exposed to GPT-4. And the assignment to understand the possible implications for healthcare started in November, when ChatGPT was released. And it sort of blew up. We started to get some emails. I remember one email that we got about three or four days later was from a friend who runs a health clinic. And he said, “Oh, this ChatGPT stuff is great. We’re using it now to write after-visit summary emails to our patients.” And so I thought, wow, that’s really cool—but wow, that’s really concerning also. By November, we were starting to understand the tremendous possibilities, incredible capabilities of GPT-4—but also we were starting to understand that there are some issues, like hallucination.

PETER:

If you think of a computer as a machine that does perfect calculation and perfect memory recall, ChatGPT and GPT-4 are not computers. Yeah, there’s something different. There’s some kind of reasoning engine. And the immediate worry—especially if you work at a company that’s very careful about these things, like Microsoft—is that, oh, if doctors are going to start assuming that this thing is a computer, those assumptions are going to get violated in ways that could be harmful to people. And so the question was: How do we best educate the medical community about what this technology is? After asking and consulting with a lot of people, we learned that a large part of the core of the medical community still reads books. So then the—you know, you have to decide, well, what are you going to try to teach people in that book?

PETER:

And there are several key lessons, but one key lesson is that this is probably unstoppable because it’s going to be a technology that’s in every person’s pocket. How do we, how do you tell that lesson? Since it was such a new technology, we felt that the only way to do it was to make a fictional account that was realistic. You know, something that would actually happen in an emergency department, and you might have an inexperienced medical resident who just reaches into her pocket to get a second opinion about something. Oh, and by the way, also becomes totally dependent on ChatGPT—even to monitor her own health and, you know, her own daily schedule. And it was that sort of idea. The other issue is that my writing style and Zak’s are hopelessly academic. So we were co-writing this book, but it was, it would be dry beyond belief. And so Zak had a colleague, Carey Goldberg, who is a much, much better writer. She joined the project in order to turn it into a kind of a page-turner. Carey, being a writer, also injected the idea of a fictional account, and it ended up being a really fun, fun thing to do.

ARIA:

I love it. And it’s also just so interesting with the, the different strengths that you bring together to create this book. And, you know, in the book, you make these sort of three major points. It’s like, of course GPT-4 is a gamechanger for medicine and health. That it poses risks if we don’t begin wide-scale testing and understand its limits. And that also, we should be working as quickly as possible to ensure broad access to the technology, which we have been talking about a lot. We need to get this in everyone’s hands as soon as possible. So, we’re going to get to all of these points. But I want to start with the actual technology first. You compare it to Wilbur from Charlotte’s Web. Can you tell us a little bit more about that comparison? 

PETER:

So, I’ve been on the road a lot for the last several months talking to leaders of healthcare organizations. So, those are leaders of hospitals and health systems, of health insurance companies, people at the National Academy of Medicine and major medical schools. And, you know, one of the first things—which I know, Aria, you and Reid have also run into—is just how to explain what the heck this technology is, anyway. And so one of the things I do, which has turned out to be effective but is a little silly, is I ask people: Have they read the book Charlotte’s Web? And if you ask a room full of people that—at least in the United States—everyone raises their hands. And so I say, “Okay, I’m going to ask you three questions for you to prove to me that you’re not lying.”

PETER:

And so the first question is a simple question, which is: Can you briefly describe one of the main characters of Charlotte’s Web? And so everyone can do that. You, you just mentioned Wilbur. There’s also Charlotte, of course, the spider. There’s Fern, the girl who saves Wilbur from getting slaughtered. So I make the point that some of you haven’t read the book in more than 20, 30, 40 years. And when you read it, you distilled into the neural circuitry of the flesh in your skull those facts about the book. And if you think about what your brain is doing, it’s almost unbelievable that that can happen. And now you can start to give a lesson about GPT-4 and say, “Well, GPT-4 has read Charlotte’s Web also. And GPT-4—in reading Charlotte’s Web—has distilled into its neural circuitry facts about Charlotte’s Web. And so let’s test it.”

PETER:

So I’m going to use GPT-4 on my phone here and ask this question: Hi GPT-4. Have you read the book Charlotte’s Web? And if so, can you describe one of the main characters? And please be very brief since we’re on a podcast.

GPT-4:

Hey, Peter. Yep. I’m familiar with Charlotte’s Web. One of the main characters is Charlotte, a wise and caring spider who helps save a pig named Wilbur from being slaughtered. She’s the brains behind the “SOME PIG” message in her web. How’s the podcast going?

PETER:

Alright, so now maybe it’s not so impressive that a computer can remember that. So let’s ask a second question, which requires much more intelligence. And, and that is the question: What do you think the book Charlotte’s Web tries to teach us about the value and nature of friendship? And that’s a really hard question because it’s not answered by any words in that book. You have to read between the lines. You have to make connections to personal experience, to social context. And so what about GPT-4? Let’s try it. So what do you think that book tries to teach us about the nature and value of friendship?

GPT-4:

The book emphasizes that friendship can be transformative. It’s not just about companionship. It’s about sacrifice, loyalty, and making each other’s lives better. Charlotte literally spins webs to save Wilbur, and in doing so, teaches us the depth that friendship can reach. What’s your take on it?

PETER:

Alright, so now I tell people, “You need to be skeptical of all of this.” For sure, thousands, maybe millions of children in school have written essays about the meaning of Charlotte’s Web. They’ve probably posted them on the internet. And during GPT-4’s training, GPT-4 probably read those essays and is just simply regurgitating those ideas from school children, from teachers, from other writers. So, of course, it’s not anything special. Those are not original thoughts. That’s the skepticism.

PETER:

What is really going on here? What the heck? Are these original thoughts? Is it really thinking or what? And one of the ways to test that is to ask a question that has never been asked before and to assess the originality of the responses. So, thinking about this, Aria—I looked at your bio, and I know you’re connected with, you led, an organization called DoSomething.org. And so, let’s see if GPT-4 can make connections with that. So GPT-4, I’m on a podcast with Reid Hoffman and Aria Finger. And Aria was formerly leading an organization called DoSomething.org. Do you see connections between the lessons about friendship in Charlotte’s Web and the goals and mission of DoSomething.org?

GPT-4:

Absolutely, there’s a connecting thread here. DoSomething.org is all about empowering young people to take action—often for the betterment of their communities. Similarly, the friendship between Charlotte and Wilbur brings about positive change, saving Wilbur’s life and inspiring the community around them. Both emphasize the idea that individuals, even if they’re just a spider or a young person, can make significant impacts through their actions and relationships. How do Reid and Aria view this connection?

PETER:

Or Aria specifically?

ARIA:

I don’t, I don’t think there’s many essays on the web about the connection between DoSomething.org and Charlotte’s Web. So, so, Peter, I think you got us. [laugh]

PETER:

Yeah. And so now your skepticism has to start getting shaken, and you know there’s something deeper than that. Now, the last question that I ask people is this: Please recite chapter one word for word.

REID:

Yes, no chance.

PETER:

And I haven’t found a single human being—I’m sure they exist out there, but I haven’t found anyone that’s able to do that. And that’s because the human brain, as amazing as it is, does have some significant limitations. One of its limitations is in rote memorization. And oddly, today’s GPT-4 has analogous limitations. And I won’t do it here, but if you ask GPT-4 to recite chapter one today, I think it’ll demur and say, “Well, I shouldn’t do that for copyrighted material—so no, I won’t do it.” And so, this is where I get to tell a leader of a healthcare organization, “So you see, what we have here is a brand new tool. It’s not a computer in the sense of being a perfect rote memorization machine or a perfect calculation machine. It’s a reasoning machine of the type we haven’t seen before. And if you, as a doctor, try to use the thing under the assumption that it is a computer, you can get into trouble. At the same time, it’s incredible the capabilities that it has.” And so the very first lesson that I try to give to clinicians, to doctors, nurses, and leaders, is based on this story, to start you down the path of understanding just how new a tool this is. It’s a new tool that we have to figure out how to use, for which we don’t yet have the user manual.

REID:

One of the challenges usually in regulation is that it tends to mammothly restrict the speed and delivery of certain kinds of innovation. And in some areas of medicine that’s really important, and in some areas that’s restrictive. If we could kind of, like, wave a wand and get the, the right balance between regulation and innovation here—the right kind of intelligent way of shaping regulation, shaping incentives, getting innovation, protecting patient safety—what would be your, your recommendation of the path forward there?

PETER:

Roughly speaking, people today, regulators, really don’t know what to do. There are some very significant efforts. The National Academy of Medicine has a major effort to define the code of conduct for the use of AI in medicine. But the thing that’s interesting about this kind of technology is it doesn’t follow the pattern of any medical technology today. So the standard framework of a software medical device doesn’t really hold up here. And on top of that, the whole world understands that this is incredibly empowering—and it’s also a matter of national competitiveness. So, there is no regulator anywhere in the world that wants to impose regulations that might stand in the way of, or impede, the overall competitiveness of their, of their nation. And so, as we think about these things, my own conclusion is that the medical community itself really needs to assertively take control of the questions of whether, when, and how this technology should be adopted—and define the foundations on which future regulations would be built.

PETER:

And for that to happen, it’s so important for us as a technology community to do everything we can to educate and get the medical world up to speed. What’s going to happen—I, I’m certain—like, today, when we think about disclosures, there is a major question, you know: Should there be a disclosure to patients if a doctor uses AI to assist, let’s say, in a diagnosis or developing a treatment option? I’m certain in a very small number of years, the disclosure will work the other way: A patient will demand to know if a doctor doesn’t use AI to assist and double-check something and will want to know why. And so there’s a difference between what makes sense today, in 2023, and what will make sense in the future. And when I say future, the distant future in AI is, like, 2025.

PETER:

Okay? And, and so that, to me, means we probably, just to be practical, have to think about some interim code of conduct or interim set of guidelines or regulations that, that have a very set timeline. The analogy that I give sometimes to people is: It’s like handwashing. At some point, it just makes sense that, of course, you would never practice medicine without washing your hands first. And we will definitely get to a point, probably sooner than we think, where, of course, you would never, ever dream of practicing medicine without the assistance of AI. We do need something, because there is a fundamental lack of understanding of the capability of these systems and also their failure modes. I think there are emerging regulatory requirements: that there should be humans in the loop, you know, to finalize or make final decisions. That the humans are still accountable if those decisions are wrong or negligent in any way.

PETER:

And that there would be the development of codes of conduct and guidelines, you know, by organizations like the National Academy, the American Medical Association, and so on. One more thing that I’ve used with doctors and regulators is this analogy. It’s a little bit of a silly analogy, but imagine we’ve just invented copper wire. And we’ve discovered that copper wire can efficiently carry electricity from point A to point B. And we just know in our gut, this is going to change everything for the better. We can’t even imagine, you know, carrying digital signals or winding it around pieces of metal and making electric motors. You know, we just know, just carrying electricity is going to be big. But we haven’t yet invented the light bulb, so we don’t know why. And meanwhile, people are getting electrocuted left and right, and crazy people are putting their tongues on this thing and so on. But the point here is that, you know, it’s up to the medical community to invent those light bulbs. It’s up to us as technologists, as platform providers, to help them do that. And as we do that, we’ll start to understand the importance of this more broadly and the need for electrical codes and other regulations—the ideas of insulation. There’s all sorts of things that get developed. And so we’re just in that early stage.

REID:

I think we should probably address one of the, you know, natural questions, which is: if a skeptic were listening to our conversation, they’d say, “You know, look, I’ve heard about this AI stuff doing hallucinations. What does that mean? Like if I’m, if I am going to—even if I don’t have access to a doctor—what happens if I get a hallucination? And what does that mean for, like, bioethics?” And, you know, kind of, you know, I think you’ve talked about the nine steps of enlightenment as it comes to AI and medicine. What’s a way that we navigate that? Because obviously there’s both improvements in the technology, there’s humans, there’s regulation—what are, what are our steps forward, and how do we think about that ethically? How do we think about hallucinations?

PETER:

Yeah, I know, I listened to your conversation that you had on this podcast with Ethan Mollick, and I like the way that Ethan describes GPT-4 as, the right mindset is to view it as your intern—your personal intern. And I think that’s really apt because—and I use the term reasoning engine, but who the heck knows what that means?—thinking about having your own intern is just so much more relatable. And it’s this thing where, you know, if you’re going to have your personal intern and you’re going to have a conversation about your medical issues, you’re going to have to make your own assessments about the correctness and veracity of these things. You’re going to have to probe. You might have to ask for further consultations and so on. And yet, having your personal intern is an undeniably useful thing.

PETER:

It’s something that really amplifies what you’re able to do in life. And, and I think that that sort of mindset, that kind of understanding, is really fundamental. One last thing to say: Companies like OpenAI have been exceptionally cautious about the safety and responsible AI issues of this. And so for the bulk of GPT-4’s public life, it’s been put in a glass box—not allowed to touch anything, not allowed even to read the internet. It’s just, you know, it’s only, what, in the last six months that, you know, it’s been given permission to do things like use the Bing search engine, or, you know, use your camera to see images in the world, or very recently—with advanced data analytics—been given the permission to write small amounts of Python code and execute it as it sees fit. Those kinds of things relate to the AI concept of grounding—of allowing the AI to use tools to ground its answers in some reality.

PETER:

And I think, in the very near future, what I expect is that these grounding techniques, and this permission for AI systems to use tools, will dramatically reduce the bad hallucinations that we see out of these systems while preserving the, the good hallucinations. And by good hallucinations, I mean things like informed guesses and the ability to imagine. It’s just like giving your intern access to a library, to encyclopedias, to the internet, and to a calculator. You want to do those things. And, and those are capabilities that are, that are only now coming online.
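
A minimal sketch of that grounding-through-tools idea, assuming the OpenAI Python client’s function-calling interface (openai >= 1.0); the model name, tool schema, and prompt are illustrative, not from the episode. The model is offered a simple calculator tool so it can ground an arithmetic answer in a real computation rather than guessing:

```python
# Grounding/tool-use sketch: the model may call a calculator instead of guessing at arithmetic.
# Assumes the OpenAI Python client (openai >= 1.0); names and prompts here are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def calculator(expression: str) -> str:
    """Evaluate a simple arithmetic expression and return the result as text (toy evaluator)."""
    return str(eval(expression, {"__builtins__": {}}, {}))

tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate an arithmetic expression, e.g. '72 * 0.25'.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

messages = [{
    "role": "user",
    "content": "A patient weighs 72 kg and the dose is 0.25 mg per kg. "
               "What is the total dose? Use the calculator tool.",
}]

# First pass: the model decides whether to call the tool.
reply = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools).choices[0].message

if reply.tool_calls:
    messages.append(reply)  # keep the assistant's tool request in the conversation
    for call in reply.tool_calls:
        result = calculator(**json.loads(call.function.arguments))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # Second pass: the model grounds its final answer in the tool's output.
    reply = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools).choices[0].message

print(reply.content)
```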

ARIA:

You know, you, you put a lot of onus on us in getting to this better future. You know, you say that there are—a lot of doctors aren’t using GPT-4 right now. A lot of consumers aren’t using it. It’s us to, it’s up to us to sort of build the light bulb that is going to create this better future. And so, you know, Microsoft is working with Mercy to figure out the when, the where, and the how of AI being used in medicine. Can you talk a little bit more about the relationship specifically, and, like, how you view partnerships—such as with Mercy or Epic—in the future of medicine to help us get to that sort of better future?

PETER:

Yeah. Again, going back to this copper wire analogy. Okay, so we have this copper wire. What do you do with it? You know, what do you do with the fact that you can carry electricity around? And I’ve felt it is actual healthcare systems, like Mercy, or healthcare technology providers, like Epic, that will figure that out. So Mercy is a, a very large and very progressive health system in the Southeastern US. But interestingly, they have very significant in-house engineering capability. They have been very scrappy and very smart in migrating all of their data to the cloud and building a lot of tools and so on. And they’ve shown a lot of capacity there. So, I think they have the ability to invent some light bulbs and toaster ovens and telephones and things like that. Epic is the largest provider of electronic health record systems.

PETER:

Of course, as a health-tech platform provider, they also are basically a big engineering house. They build software. That software is used all day long by millions of doctors and nurses. It’s also, you know, their systems are the subject of a lot of ire because doctors and nurses don’t like spending all their time staring at an Epic screen as opposed to, you know, dealing with patients. And so, there’s tremendous motivation to invent those light bulbs, both at Mercy and at Epic. Mercy is facing a crisis of a huge nursing shortage. I think in the US today, there’s an estimated shortage of over 50,000 nurses right now—and that’s supposed to get potentially into the millions within five years at the current trajectory. And so just finding ways to attract more young people into the profession—also making people more productive—is extremely, extremely important.

PETER:

And so, working with these organizations that have their own engineering capacity—that can work and collaborate and co-innovate with, with us in Microsoft Research, and in other parts of Microsoft—I’ve found that to be the most productive way forward. We also acquired, a couple of years ago, a health-tech company of our own, Nuance Communications. And Nuance makes the most popular medical-dictation system. It’s the way that doctors can talk to a computer in order to enter their clinical documentation. All of these companies—Mercy, Nuance, Epic, and a whole lot of others—are right now doing some incredibly great work to integrate GPT-4 in new ways into their existing products. Epic, for example, they have a product called Inbox. It, it’s basically an email system set up so that a doctor can send notes to patients, but it has some intelligence to take information from recent encounters between doctors and patients, and all the clinical histories and so on.

PETER:

Doctors complain about Inbox all the time because it, they say it’s just overwhelming. And it’s a lot of work to kind of rummage through 5, 10, 15—sometimes upwards of 30 or more—documents spread all over the Epic system in order to synthesize a useful, friendly note to a patient. GPT-4 is now being used in Inbox to read all of those things and propose a draft note to send to a patient. The doctor can then look at that, edit it, and send it off. Judy Faulkner, the CEO of Epic, reported that in the early studies, patients are preferring the notes generated by GPT-4 and have explicitly said they feel more human than the notes written by the doctors. What they’re finding is that a doctor is too busy to add those personal touches—like “congratulations on just becoming a grandparent,” “best wishes on your wedding next month in France.” Those extra little personal touches, GPT-4 just has the tireless ability to extract out of the conversations between doctor and patient, connect the dots between that and the clinical history of the patient, and it makes a real difference. And it dramatically reduces, by multiple hours a day, the clerical burden on doctors and nurses. So these kinds of things are just emerging right now. I just get incredibly charged up and excited about this because, you know, these are the first light bulbs, you know, that are coming out.

REID:

That are coming on from the copper wire, to extend the metaphor. Should we go to rapid fire? Alright. Is there a movie, song, or book that fills you with optimism for the future?

PETER:

There’s a song that’s been on my mind that my son introduced me to. Over a year ago—when we were really delving into the capabilities of GPT-4—one of the things my son suggested to me was the lyrics to the song “Aquamarine.” The lyrics are by Black Thought, and the production of the music is by Danger Mouse. So Black Thought is the emcee for The Roots. And the lyrics—I gave them to GPT-4 and had GPT-4, you know, do an analysis. And the analysis is incredible. And so that’s in chapter three of my book. But since then, you know, that sucked me into Black Thought and his poetry. And the poetry of “Aquamarine” is particularly incredible and is inspiring in this way, because it’s about transformation. And so, one of the things I invite people to do is to download the lyrics for “Aquamarine” and have your own conversation with GPT-4 about it. And ask questions. And I think—I’m certain you’ll be blown away. Not only by GPT-4, but you’ll be equally blown away by the lyrical genius of Black Thought.

ARIA:

By the lyrics. I love that, Peter. What is a question that you wish people would ask you more often?

PETER:

I think the question that I love getting into a conversation about—so maybe it’s the one I wish people would ask—is: in our conversation, in what we’re doing right now, how much of what we’re doing is more than just next word prediction? You know, and I, I’ve just started to wonder—I still feel in my gut that what we’re doing when we’re conversing like this is something very magnificent and complex. That we’re doing all sorts of incredible computation in our brains. But maybe 99% of it is just—is just next word prediction.

ARIA:

I mean, if next word prediction can get you GPT-4, it’s, it’s pretty miraculous. So I think that’s, that’s a really interesting question to ask.  

REID:

Yeah. I think we’re actually—I’m with you. We’re doing a lot more than next word prediction, but it’s still amazing what next word prediction gives you.

PETER:

[laugh] Right.

REID:

So, where do you see progress or momentum outside of your industry that inspires you?

PETER:

The obvious thing has to do with energy and our insatiable appetite for energy, and the environment. And it’s hard not to be swept away by the optimism that more and more of us are feeling about making fusion energy more real. But maybe a more interesting answer, just to stick in the medical domain: There’s been this long-term dream—sometimes it’s referred to as “real-world evidence,” RWE—but what it really amounts to is: Wouldn’t it be great if every single healthcare experience that people have—you know, at home, in life, in encounters with their doctors—if all of that somehow contributed to the advancement—directly contributed to the advancement—of new medical knowledge?

PETER:

The example that came out of COVID was, around the end of 2020, you know, when the COVID pandemic was really, really putting people into severe respiratory distress, doctors around the world were accidentally discovering that sometimes COVID patients in respiratory distress could avoid intubation if they were kept prone—you know, on their stomachs. And there were a few of those doctors that were starting to share that information on social media, like Twitter and Facebook. But it wasn’t until the end of 2021 that a multi-institution, multinational clinical trial verified that that was true. Then it got officially integrated into the practice of medicine. And, of course, by the end of 2021, it was almost too late because the virus had mutated to the point where it wasn’t putting as many people into respiratory distress as earlier. In the future, with AI, we ought to be able to do this routinely—that every single clinical encounter and every single medical experience should be synthesizable into a clinically sound and validated piece of medical knowledge and practice. I think we actually are very close to having the tools that we need to make that a reality.

ARIA:

I love that. So, Peter, can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years? And what’s our first step to get going in that direction?

PETER:

In, in the space that we inhabit as technologists, what we see all around us are just incredible possibilities. I, I’ve never in my life been in a context with so much optimism about what can happen in technology. And so, I know it seems dire, like climate catastrophe seems dire. But I actually think that, with the combination of rapid new advances in our ability to use AI to model complex physical systems and fusion energy becoming a reality, we’re going to avert climate catastrophe. And, as a result of that, we’ll have a foundation to drive all of our technologies that’s going to make a much, much bigger part of the world more prosperous.

ARIA:

Well, Peter, we love talking to technologists like you, because you are helping to build that future. So, thank you so much.

REID:

Yeah, Peter, awesome as always.

PETER:

Thank you for that.

REID:

Possible is produced by Wonder Media Network, hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Edie Allard, Sara Schleede, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Katie Sanders, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles. We’d also like to thank Katie Zoller, Katy Halliday, Rhoades Clark, and Little Monster Media Company.