This transcript is generated with the help of AI and is lightly edited for clarity.

KEVIN SCOTT:

Technology really does massively benefit society. You wouldn’t go take mechanical engines or books away from society. The idea is insane. Like, you can’t imagine how the world would work. Someday, that’s where we’re going to be with artificial intelligence. You’d be like, “Oh my God. Like, I just can’t even imagine how things would function without it.” And so most of what we have to do now is to choose what kind of wave of change we’re going to have. Is it a printing press wave? Is it a steam engine wave? Or is it something entirely new? And hopefully it’s something entirely new, because like we can go look at history, and we can look at what we want for ourselves, and we can decide on a better path.

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way. What we can possibly get right if we leverage technology like AI and our collective effort effectively.

ARIA:

We’re speaking with technologists, ambitious builders, and deep thinkers across many fields—AI, geopolitics, media, healthcare, education, and more.

REID:

These conversations showcase another kind of guest. Whether it’s Inflection’s Pi or OpenAI’s GPT-4 or other AI tools, each episode we use AI to enhance and advance our discussion.

ARIA:

In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:

This is Possible.

ARIA:

Hey Reid. We are back. It is great to be co-hosting Possible with you once again.

REID:

Absolutely agree. Excited for another season together, Aria. And I couldn’t be more thrilled to return to Possible with someone I consider not only to be a generational technologist and thinker, but also a great friend and human. We worked together at LinkedIn over a decade ago and, as we’ll discuss, he’s done incredible things since. Of course, I’m talking about the Chief Technology Officer of Microsoft, Kevin Scott.

ARIA:

And one thing about Kevin that I appreciate so much is how he keeps the ultimate goal of technology in sight. This recent comment from him says it all: He says, “The question we have to ask ourselves as people who build technology is: Are we doing the best job that we possibly can to build technology in a way where 40 years from now, our children and our grandchildren will look back and say, ‘Wow, this was beneficial. This mattered. This made all of these things that we care about with justice and the human condition better, not worse?’ We’d better get it right.”

REID:

That indeed is Kevin. Technology and humanity intertwined, and each amplifying the other. And exactly why he’s here on Possible with us today. As the CTO of Microsoft, he leads the company’s technology vision and strategy. With over two decades of experience in the tech industry, he was the architect behind Microsoft’s deal with OpenAI.

ARIA:

And what does that mean exactly? Well, Microsoft’s CEO Satya Nadella requested that Kevin look into OpenAI back in 2019. And after seeing something in the company’s burgeoning technology, Kevin worked with its founders to find a deal that would benefit both companies. He got the ball rolling on what became a multi-billion dollar investment for Microsoft. I honestly could not think of a better person to kick off this new season of Possible. Kevin is a guest who’s equal parts technologist and humanist. It’s so clear that he cares about people. And so here is our conversation with Kevin Scott.

REID:

Alright, I thought we’d start with something that people wouldn’t—most people wouldn’t know about you and wouldn’t expect on this—which is that you’re passionate about cooking and woodworking, among other making. And, you know, these are hobbies that require precision and a scientific lens. So what appeals to you about cooking, woodworking, and other making?

KEVIN SCOTT:

So, you know, as a kid I was surrounded by a bunch of people who were constantly tinkering and working with their hands. My dad and both of my grandfathers—like one grandfather was an appliance repairman and like an amateur inventor. And my dad and his dad were both in construction. And everyone was like making furniture, restoring furniture, working on their cars. Like, my mom made and sold crafts her entire life, or most of her entire life. And so it just—like making stuff was normal. I think it partially informed what I chose to do professionally. So like, I make stuff as an engineer. Like it’s just, you know, kind of ephemeral stuff. Like I make it with code, and it runs on computers and it does complicated things. And the older that I’ve gotten, and the more complex the things are that I build professionally, the more I seek out some way to actually do creative things with my hands individually.

ARIA:

Kevin, as the CTO of Microsoft, you have helped lay the foundation for Microsoft to be one of the most formidable companies on AI, sort of leading that charge. And so many people, including Sam Altman, said that you were the real architect behind the deal between Microsoft and OpenAI—which has sort of led to this tremendous breakthrough progress. So can you walk us through that decision? Like why was that business relationship so important? I know many other relationships are critical too. But from the beginning, that was really what kicked some of it off. And walk us through that.

KEVIN SCOTT:

One of the things pretty early in my tenure as Microsoft CTO that was obvious was that the rate of progress in artificial intelligence was accelerating and it was accelerating in this particular way where the entire discipline and the tools that people use to develop AI applications was becoming more and more like a platform where you were going to be able to make single investments in things and then reuse those investments just like you would in any other sort of platform to do a whole bunch of things. And so Microsoft is a platform company. You can’t be a platform company in the modern era if AI is not part of your platform. And so it was just sort of obvious, like we have to, we have to be as advanced as humanly possible in all of this infrastructure for Microsoft to be competitive in the future.

KEVIN SCOTT:

And so if you start with that premise, then the question is like, what are all of the things that you have to go do to build this platform in a very disciplined way? And like part of it is like you go build a whole bunch of compute infrastructure. So, you know, the thing that we’ve been seeing is—if you looked at all of the great milestone achievements in artificial intelligence, at the very least over the past 15 years, there was one commonality with every breakthrough thing. And it was that it used maybe an order of magnitude more compute than the previous breakthrough. Part of the decision was like, how do we go get focused on building this powerful infrastructure—the compute, the software that, you know, coordinates all of the compute, the networks, the data centers, the power infrastructure, like all of this sort of complicated system stuff.

KEVIN SCOTT:

And then once you have the compute, you have to sort of decide how you’re going to allocate the compute to someone building things on top of it where you’re using it as effectively as possible. So we started in 2018, like early, like thinking about how to go structure all of this in a way that made sense. And so, you know, this partnership with OpenAI was basically a bet saying this particular team at the time also understood that this was a game of scaling compute and doing incredible things with it in a very disciplined way. They had a really good way of forecasting what you could get from investing more compute. And I was like, “If we work with them, they will push us to build better infrastructure and we can enable them to do their best work.” And so that was the central motivation behind that partnership. And it’s also a recognition, I think, that some things that you’re trying to do—and this is usually the case with platforms—you just can’t do everything by yourself. Like you always have to have partners. Like it’s nonsense to think that you can go build a magnificent new world-changing platform all alone with zero partners. Crazy. Never works.

ARIA:

I mean, I love that recognition of the importance of partnership. I also think it’s interesting, like you said, it was obvious to you because of the, you know, sort of capabilities of compute—but that’s sort of opposite of the venture model. It’s like when you’re a venture capitalist, you’re like, “great, I have this sum of money. I’m going to sprinkle a little bit all around the different startups and see what happens.” And you were like, “no, no, no, with compute, that is nonsense. And we need to be very deliberate about where we’re going and ensuring that folks have enough compute to get it done.” So, you know, this seemed obvious. You were like: We need partners. We need to make sure to be world class. What was the most difficult thing or surprising thing about trying to get this done?

KEVIN SCOTT:

It was not a widely shared opinion. Like this thing that seemed pretty obvious to me did not seem all that obvious to a bunch of other folks. So it was obvious to me, it was obvious to Sam, it was obvious to like the founding team at OpenAI. And you could go talk about what the roadmap was, what the plan was, like, what we thought was necessary to go build this very interesting thing. And not a ton of people were convinced. And, you know, if you just sort of plot forward this crazy exponential curve, like you’re talking about an incredible amount of investment. Like even the first increment of the investment we did in OpenAI, which is a billion dollars, seemed like an extraordinary amount of money to go invest in a non-profit research institute that had no revenue and had never made a product before.

KEVIN SCOTT:

But to me—I mean, it was sort of one of these funny things. And like, I guess in your career, you don’t have too many opportunities where you see a very big thing and get conviction on it, even though you’ve got a bunch of folks like, “Yeah, I don’t think this is going to work.” And like, it’s lots of people. So research scientists saying like, “I don’t believe this scientifically will work.” Business people saying, like, “I don’t think the economics of this are going to make sense.” You know, like lots of folks. And still even today, you just sort of see the critics of the whole approach. Like, you can go on Twitter any day of the week and have a gazillion people telling you that, even though all of this stuff has been working so far, like, it broke yesterday and will never, you know, never work again.

KEVIN SCOTT:

So it is—I’m going to use the word blessing, but I don’t mean it in a spiritual way—but it’s just this great stroke of fortune to have, you know, insight and conviction at a point where you even get the opportunity to go convince other people. So it’s not like the opposition or the, you know, the skepticism is a bad thing. It’s actually a fantastic thing. I mean, it tells you in a sense that you’re doing something pretty big, because it’s changing in a fundamental way the way that people think about the laws of physics, of technology and business. You have to get really, really sharp about, like, why exactly am I doing this? It can’t just be like, “oh, this is my gut.” Like some of it’s your gut, but you have to be really, really, really principled, you know, so that you can bring other people along with you at this scale.

REID:

Yeah. And in part, I mean, the thing that is difficult about making these bets is that they’re risk bets. And one of the things I thought, you know, you and Satya and a bunch of the other folks did very well, you said, “Look, we can make this bet. Downside, you know, a billion dollars, it’s expensive. But upside is completely changing the game as to how software is built, how humans are aided. And it’s turning over the card, which, if we’re successful, is going to get a lot more expensive and a lot more of a continuing risk bet.” In those early days, what was your wildest expectation—hope—from that first christening of, you know, taking this little tiny non-profit that had been futzing with some research ideas and turning it into a momentum, you know, getting to, if I may use the word, some blitzscale?

KEVIN SCOTT:

I wasn’t even thinking about like forging new frontiers. I was thinking about like, how do we go make sure that we can be as competitive as humanly possible in a handful of Microsoft’s core businesses. So we have Azure, which is a hyperscale cloud provider that lots of people build applications on top of. And even in 2018, like, people were incorporating AI and machine learning into the applications that they were building. And so how do you go have things that are at least as good as all the competition on our cloud, so that people who want to build on top of Azure can build great things? And then, you know, there’s like all of our productivity software. Like increasingly—you know, and like, this wasn’t just five years ago. Like, you know, this has been a decades-long journey that people have been on to uplift productivity applications with machine learning features and capabilities.

KEVIN SCOTT:

And so this was like, can we, you know, can we push the boundary on that and build more delightful things into our productivity offerings? So, I would’ve been delighted just to have that—to like have a handful of, you know, APIs, and a handful of features inside of Office that were moving metrics on those businesses. So, you know, adoption and usage intensity on Azure and, you know, engagement and some of our productivity measures in Office. I did not expect things to go quite as fast as they went. And I’m still surprised at how fast things are going. So it’s never been a question of like, “Will the AI systems be able to do a particular thing?” that surprised me. Like I’ve had pretty strong conviction we will get to the point where, you know, like with GPT-4, like you’ve got a system that can score a “5” on the AP Biology exam. That didn’t seem like a stretchy thing for these systems to eventually be able to do. But being able to do it when they did it—like that was a surprise for me. And that’s even with me watching the progress of the systems. So all of my surprise so far has been upside—like things just happening quicker than I thought they were going to happen.

ARIA:

No, I love that. And it’s one of the places where I see, we’ve seen, most use out of AI—that like people are using it in their everyday jobs, of course, is Copilot. And folks who are using this every day, making their code better, having it be that assistant for them. And so like, how is Microsoft working to make Copilot as accessible as possible? I know you guys removed the 300-person minimum for the businesses to adopt it. Like how can we get this to everyone? Because one of the things I love, Kevin, about what you talk about a lot is that we need to get this into everyone’s hands. This can’t just be for the elite few. We want this dispersed to everyone. So how are you guys doing that?

KEVIN SCOTT:

Well, I think it starts with thinking about things as a platform. Like, one of the hard things about machine learning and AI in previous generations was that if you wanted to make a thing that had some AI capabilities, you had to have some people with PhDs with a lot of expertise. You had to have like a bunch of data. You had to really understand a whole bunch of very complicated state-of-the-art stuff that was tricky to implement. And then you had to go do a whole bunch of work. Yeah, so like all of this defined my first machine learning project. And yeah, the first machine learning project that I did—which was 20 years ago now—you know, it took me, you know, reading a bunch of grad textbooks and a stack of papers and then coding for six months to do a thing that a high school student can now do in two hours on a Saturday.

KEVIN SCOTT:

And so part of making all of this stuff accessible is really building a platform, where the platform’s economics, like the cost of it, is just going down, down, down. And like, that’s not just how much it costs to, you know, call an API and send a bunch of prompt tokens in and get a response. It is also about the cost of the expertise. Like how much do you have to know as an individual or as an organization to go wrap your hands around this technology to go do the actual thing that you care about—which is making whatever product it is that you’re making, or doing whatever it is that you’re trying to do creatively, or, you know, whatever. The end goal is always the important thing—the mechanism ought to just disappear into the background.
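[Editor’s note: Kevin’s point about the unit economics of “call an API and send a bunch of prompt tokens in” can be made concrete with a small sketch. This example is illustrative only and not from the conversation: the model name and per-token prices are hypothetical placeholders, and the code merely builds an OpenAI-style request payload and estimates cost, without calling any real service.]

```python
# Hypothetical sketch of the "send prompt tokens, get a response" economics
# Kevin describes. The model name and the per-1K-token prices below are
# illustrative placeholders, not real published pricing.

def build_chat_request(system_prompt: str, user_message: str) -> dict:
    """Assemble an OpenAI-style chat-completion request payload."""
    return {
        "model": "gpt-4",  # illustrative model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_per_1k_prompt: float = 0.03,
                  price_per_1k_completion: float = 0.06) -> float:
    """Estimate one request's cost in dollars; prices are placeholders."""
    return (prompt_tokens / 1000) * price_per_1k_prompt \
         + (completion_tokens / 1000) * price_per_1k_completion

request = build_chat_request(
    "You are a helpful assistant.",
    "Summarize this document in three bullet points.",
)
cost = estimate_cost(prompt_tokens=500, completion_tokens=200)
```

As the per-token price falls, the same payload costs less each year, which is the “economics going down, down, down” effect; the expertise cost falls too, since the whole request is just a small dictionary rather than a bespoke ML pipeline.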

KEVIN SCOTT:

And so I think that’s the important thing. So the scale of the platform makes the unit economics better, and it naturally forces you, because you’re trying to have as broad a customer base for the platform as possible, to make it simple. You have to make it easy to use. Like you don’t want it to be a specialized thing that only 40,000 people in the world will be able to even understand. Like everybody has to be able to jump in and use it in some way to make a thing. And then, you know, from a product architecture perspective—like we’re just sort of thinking about how it is that we can take the feedback from things that we’re launching. And like we’re trying to launch things relatively quickly, you know, before they’re perfect, so that we can get feedback on what users are finding useful, what they’re not, what’s clear and what’s confusing. And so like, that’s just the classic journey that I think has been crystal clear—since at the very least the internet days—that you want your feedback loops to be as quick as humanly possible so that your customers are telling you what they need, directly and indirectly. And like you can listen to the exact things they’re telling you and the data about their usage, and then you can just go make the thing better and better and better.

ARIA:

I think it’s so inherent and ingrained to Silicon Valley and to tech to do things like that, but I still think, because not all industries work that way, that people can be skeptical. People can be skeptical about putting things out when they’re not perfect. And I think, again, like talking about that and hearing about it—and no, we’re putting things out when they’re not perfect to make them better, to serve the consumer, to serve the user—is so critical. So I appreciate it.

KEVIN SCOTT:

Well, I mean, look, this is one of the things I learned from Reid a very long time ago. I think one of your favorite sayings is, if you’re not a little bit embarrassed by the v1 of your product, like, you’ve done something wrong. You know, and like that’s another way of saying that, you know, perfect is the enemy of the good. And I do think it is one of the mistakes that AI labs were making up to this point. They were sort of assuming—yeah, like, this is probably an overgeneralization—but like you had a bunch of folks who thought, like, we’re going to go solve all of the problems of these systems. And like once everything is sorted out and solved, like we’re going to spring it on the world and, like, everybody’s going to love it because it will be perfect. And like, that’s just not the way anything works. And honestly, you know, people get irritated because you launch things that are flawed—that need to be fixed. But like, they would be even more irritated to have this fully conceived thing, you know, thrown at them where they had no ability whatsoever to provide any input in the development process. So if you want people to be part of your development process, like you have to release things before they’re fully developed.

REID:

One of the things I think that people get zero—or most people just don’t understand, and it’s part of the reason why we’re doing this podcast—is to understand that if you kind of try to lock in the now or the past, you may be doing critical harm to people in the future. So, for example, one of the things I most commonly say about this AI thing is—you know, I just listened to Jon Stewart last night on The Daily Show. He’s like, “this AI thing needs to be controlled.” And you go, “well, okay, is that the most important thing? Or is getting a medical assistant to the billions of people who don’t have access to a doctor the most important thing?” Because if you get those billions of people access to an AI medical assistant, you may affect child mortality.

REID:

You may affect, you know, crippling health outcomes. You may affect all these things for people where, you know, it would make a huge difference in their life. And that’s actually part of that future possibility—that the thing that we need to make real is actually in fact a very important thing. And it isn’t just a, you know, “oh look, there’s this little percentage chance that you’re going to do this thing that you’re not going to be perfect at, at the very beginning.” And it’s like, well, yeah, and by the way, that’s the only way we learn, the way we learn to be good to people and so forth. And people like to put it rhetorically: well, who gets to make that decision? And the answer is, to some degree, collectively, the group of us do. Some of it’s the government, some of it’s companies trying to be their best possible selves to their customers or employees or shareholders. Some of it’s press holding us accountable. Some of that—like, it’s these networks of human interdependency. It’s not like, you know, Sue or Fred gets to make the decision [laugh], right? It’s this set of things. And I think that’s really key on this.

KEVIN SCOTT:

Well, you know, it’s also, to me, the funniest thing in the world to think about all of these things—the media, the government, business—being, you know, these fundamentally different entities that are in opposition to one another in some way. Like, all three of those things are accountable to the public at the end of the day. Like, business doesn’t exist if its customers don’t want to buy or use the things that it’s making. Like governments don’t exist if they don’t have the mandate of the public. And the media doesn’t exist if the public doesn’t listen to them. And so like, we’re all trying to serve the same people, and, you know, we’ve got a different point of view about what that service looks like. But it’s just really important to remember that all of those institutions are trying to do the best they possibly can to serve the public in the particular way that they’re serving it. And it’s the public that’s the constant across all three of those things.

REID:

One of the things that I actually found really illustrative, because of your background of growing up in a rural poor town, was thinking about how these tools can help, you know, revitalize even communities that are right now, you know, being left behind in some important ways. And it’s by providing them the right tools for empowerment, and AI could do that. Say a little bit about that and your book, you know, Reprogramming the American Dream, and so forth. Because I think that’s a part of the humanism about what we’re trying to do with AI that too often doesn’t get enough commentary.

KEVIN SCOTT:

Yes. So, you know, this is a mistake that I think product makers, and analysts, and public intellectuals often make: they sort of look at their own experience and their own lives and, like, what’s hard and what’s easy for me? And they make generalizations from that. Like, I’ve just sort of seen—and like you’ve seen as well—I know in our product-making experience in Silicon Valley. Yeah. Like, we are unusual relative to, like, the billion-plus people that you hope are going to use your products. Like we, yeah. We live in an unusual place. Like we have an unusual background, and like all of us are unusual in that way. So like, you’re just sort of trying to find the common thread across things. And I’ll give you this example from, you know, from my family.

KEVIN SCOTT:

We—living in Silicon Valley or, you know, the Puget Sound area, New York City, or like any of these, you know, urban innovation centers in the United States and across the world—can sort of forget that the healthcare system is very unequal. So if something happens to me, like, I’ve got good health insurance and I’m in close proximity to very good doctors. And I can go have, you know, world’s experts look at things and, like, help me be healthier. And even in terms of preventative medicine, like I can go do a whole bunch of stuff proactively. Because again, like I have proximity to a whole bunch of preventative healthcare options, and good insurance, and networks of people who are all thinking proactively about their health. But you know, like that’s not true everywhere.

KEVIN SCOTT:

So my mom, who’s 74 years old, still lives in rural central Virginia, and she has had this condition called Graves’ disease for 26 years now, which is a thing that causes hyperthyroidism. And, you know, the treatment for Graves’ disease is like they irradiate your thyroid to make it less active, which then means you have to start taking thyroid hormone replacement medicine. And she’s been on this medicine for 26 years at the same dosage. And you know, your thyroid hormones, like, do things to your cardiovascular system. They can, you know, affect your heartbeat and your blood pressure and, like, a whole bunch of other things. And her doctor, her general practitioner in rural Virginia—who was also, like, you know, older and trying to retire—was, like, screwing around with her dose of her medication to try to adjust her blood pressure.

KEVIN SCOTT:

And like, she had some like stress in her life, because she takes care of my brother and she takes care of like my [laugh] my like awesome 93-year-old grandmother who still lives by herself. Like she got into this spiral where she, she was in the hospital six times, like in a handful of weeks with like these terrible cardiovascular symptoms that she was presenting. And the first two times she was in the ER it was pretty clear they didn’t even read her chart. Because if you had read her chart and looked at her symptoms, like the first thing you’d order was something called a TSH test, which would measure your active level of thyroid hormones. And like, if you had ordered that test and gotten the results back, it would’ve been obvious what you go do to get her sorted out.

KEVIN SCOTT:

And they didn’t do that because, like, they’re running in the ER. Like, they’re, like, crazy, you know, constrained. Like they’re just trying to, like, you know—“are you dying? Like, no. Then like, you know, just go away. Like we’ll sort you out later.” And it’s not because they’re ill-intentioned or dumb or anything else. It’s just that that healthcare system is strained. And the reality is that right now, this is what the tools are capable of—again, this is not science fiction. If you had taken her chart and taken her symptoms and just stuck it into GPT-4, like, it would’ve told you to go order the TSH test. And then you could have taken the results of the TSH test [laugh] and put it into GPT-4 and say, “what should I do?” And it would’ve told you what to do.

KEVIN SCOTT:

And like, I’m not hypothesizing here about curing cancer. Like I actually did this to, you know, validate my hypothesis. And again, you know, like GPT-4 is not ordering tests or writing prescriptions or anything. You still have to have doctors making decisions about what’s happening to people. But, you know, so my mom got out of the weeds because I found her a concierge doctor. Like I sent her to a specialist 300 miles away. Like, I sort of hard intervened in what was going on with her. And I’m here to tell you that not everyone living in rural central Virginia, or rural anywhere else, has a Kevin Scott to go intervene on their behalf. And like, that sucks. Like, everybody should have someone or something who can intervene and help them solve their health problems.

KEVIN SCOTT:

Like, if I hadn’t intervened, I’m not really sure whether my mom would have gotten out of this. Like, she could have been locked into a spiral. And she’s old enough where, like, I think a lot of people would’ve just accepted, you know, the state that she was in. And like, we don’t collectively have to accept that. Like we can choose to use these tools in ways to empower doctors who are overburdened, so that they can be less burdened themselves and, you know, provide a better quality or standard of care for lots of people. And like, that’s what I want from AI. Like, I want this more than anything else. Like, I want every kid in the world to have the same quality of education that my daughter has—which I wanted for her because I didn’t have it for myself. And I want everyone to have the same, you know, quality of expert care that my mom eventually got, but without needing some, like, weird intervention from some, you know, bald tech guy who is, like, massively not representative of the population.

REID:

I think that’s awesome. And the other thing is also that AI is, you know, going to potentially amplify humanity even in jobs and work. And say a little bit about—because, you know, this is part of the easy, you know, kind of the cheap shot that Jon Stewart was taking on The Daily Show, which is like, “oh, it’s coming for our jobs.” And it’s like, well, look—the loom was coming for the weavers’ jobs too, in terms of, you know, you’re no longer hand weaving, you’re weaving with a loom—changing these things. And yes, the timeframe is critically accelerated, which is a reason we all have, you know, intense concern about people. But this is a thing that’s going to amplify people. The transition’s going to be really, you know, challenging and have some real effort in it. But there’s a there on the other side that, you know, can help, you know, kind of rural, poor communities, et cetera. And that was one of the reasons why you went and put all the energy into writing a book about that. So a little bit of commentary on that humanism too, I think, is useful.

KEVIN SCOTT:

One of the books that I read when I was writing my book is from this Berkeley economist, Enrico Moretti, and the book is called The New Geography of Jobs. And yeah, he was basically trying to get to an understanding of why we have this real disparity between urban innovation centers, you know, where you have Silicon Valley—which, you know, has a disproportionate amount of the world’s software companies and all of the economics associated with that. Or San Diego, which is like this incredible biosciences innovation center where, like, you know, a huge number of biotech companies disproportionately are. Or you’ve got New York City and London that, like, are these huge finance centers that disproportionately have a large amount of the world’s, like, highly technical financial industries.

KEVIN SCOTT:

And, you know, I think the question he was asking is whether this is just the destiny of the world: that you’re going to have economic haves and have-nots defined by geography, and specifically by an urban and rural divide. And so one of the things that I’ve observed from my rural community, which I still have a connection to, is that the people who are prospering there are the folks who make the most innovative uses of technology. And there are lots and lots of technologies and tools that you can pick up and use that are not geo-restricted. You can buy CNC machines, and you can write programs on top of the Azure OpenAI API, whether you live in Gladys, Virginia or Santa Clara, California.

KEVIN SCOTT:

Like, yeah, geography doesn’t matter at all. In a certain sense, for the software, geography really doesn’t matter. So the more powerful the platform gets and the more accessible it becomes, the more opportunity you’re creating for people to use these tools to get huge amounts of leverage. Now, this isn’t an argument that says “you’re going to recreate Silicon Valley in Campbell County, Virginia.” I think that would be very hard. That would actually require the formation of a new set of network effects, and it would just be difficult. People have had a very hard time trying to replicate Silicon Valley—which lots of people have been trying to do for a very long time. But you don’t need that in order to have people building great companies that do powerful things.

KEVIN SCOTT:

So I wrote about this in my book. There’s this company in Brookneal, Virginia that does precision plastics machining, and they’re located in a defunct textile mill. The whole area was tobacco farming, textile mills, and furniture manufacturing, and all of that stuff left because there’s no tool you can use in those industries at the moment that gives you a ton of leverage. So it just becomes labor market arbitrage, and the jobs went where cheaper labor was. But this company has a huge amount of technological leverage. And so there are dozens—not thousands, but dozens—of high-paid jobs in this company, because they have figured out how to get technological leverage from the tools that they use. And the more powerful their tools—the more leverage they have—the more economically successful they can be, and the more jobs they can create.

KEVIN SCOTT:

And because they’re skilled jobs, they have the same property that Moretti was describing. You’ve got these high-paid individuals doing this super valuable thing, and they’re using a whole range of technological tools. They use the internet to market themselves. They use the internet to talk to their customers. They use these highly automated machines to build things. Each one of those high-paid employees creates additional jobs in the community. And there’s nothing, nothing, nothing stopping other entrepreneurs from replicating that pattern. And so the things that you need to make this happen are pretty basic. You want technological tools that are as powerful as humanly possible. You want good internet, good schools, and good education. You want people to learn pretty early what it is to be an entrepreneur, to give people confidence that they can go solve problems and that they shouldn’t be afraid of their tools. I do think it’s a real economic possibility to have this revitalization. You can see the kernels of it in a bunch of places, and where you see the kernels, it’s all about technology.

ARIA:

So in every Possible episode, we try to bring AI into the conversation as a participant. For this episode, we asked Copilot: what are some plausible, bold decisions that Microsoft could make to get its Copilot technology into more businesses’ hands by the end of 2026? I’ll quickly read the list, and I’m curious what stands out to you, what you’d add, or if you think it’s terrible. So let us know. These were its suggestions: introduce a scalable AI infrastructure to offer a range of AI services that businesses of all sizes can use. Launch an AI marketplace—create a platform where developers can share and sell their AI applications. Provide AI education and certification programs. Offer AI consulting services. Or develop open-source AI projects and encourage community-driven innovation by supporting and contributing to open source. So Kevin, what do you think of that list?

KEVIN SCOTT:

Pretty good. Maybe you’ve just uncovered my secret desire for helping to get Copilot built. Like I’m just looking for the day where the thing can do my job, and I can go retire. We’re getting close. It’s awesome. 

ARIA:

I love it. Kevin, you’re not needed anymore. [laugh]

KEVIN SCOTT:

Can’t even tell you how much I long for that day. [laugh]

REID:

It’s one of the things that I think people don’t fully appreciate about some of the origins of intelligence—which is that intelligence is trying to figure out how we get to the point where we can be lazy. Right? And I think that’s actually part of the orientation of this. Now, it’s my belief, and I’m curious, Kevin, to hear your reflections on this, that we’re actually going to have a much longer period than either the proponents or the critics think of human plus AI. Like, for example: will a human job be replaced most often by a human using AI? So you want the human using AI as the way of doing it. That doesn’t mean there won’t be some jobs replaced: any job where you’re essentially trying to get a human to do a robot’s job, the robot will be better. But in a lot of places, it’s the human plus AI. And I’m curious about your reflections on this as we look forward.

KEVIN SCOTT:

I don’t find myself loving many of the predictions that people are making about the future of work with AI. And it’s mostly because I enjoy work, and I think a lot of people get a lot of value out of doing things that are valuable to other people. And I think we lose sight of that all the time. There are so many things about our labor markets and the economy that let us abstract what we’re doing. I have to tell my children this all the time: you have to go figure out how to make yourself valuable to other people, because you’re consuming a ton of things that other people produce, and on some level, you have to do something valuable in return for all of that.

KEVIN SCOTT:

So yeah, money and all this abstraction sometimes confuse us about this very, very basic thing: we are a society of human beings. We all have a way to make ourselves valuable, and we all consume the valuable things that people do in return. I don’t think anything about AI changes anything at all about that fundamental dynamic. It does change what we’re going to be doing inside of that dynamic. It may mean that we’re doing a lot less of one thing and a lot more of another thing. But it’s still going to be about how you find those value exchanges between people. What I hope for myself—like, I wasn’t even joking, Aria—is that a ton of what I’m doing right now gets completely automated away by tools, because it’s not like I’m walking through my daily life saying, “man, I really enjoy all of this crap that I’m doing.”

KEVIN SCOTT:

I want to go figure out how to do new and interesting, different things than what I’m doing right now. And I think the thing that’s important is that people need to have stability in what they’re doing. It’s one of the reasons why I think a lot of tech people talk about things like universal basic income. While all of this transition is happening, you don’t want people to have to choose what they do because they’re fearful of some bad economic outcome, or to be anxious about something that is coming into existence because they think it might displace them. So look, I think the story of the human race is that we always find ways to be valuable to one another, no matter how sophisticated the tools are that we’ve developed.

KEVIN SCOTT:

I think that will be true in the future. And then I think we have to think very carefully about what you do to help people who are being displaced. But, you know, Reid, you were spot on. I’ve been watching these predictors of what automation, and AI in particular, are going to do to the workforce for at least a decade now—very closely—and almost all the predictions have been wrong. And wrong for a bunch of complicated reasons. Some of the reasons are that the tools are actually not as good as you might hope they are. They’re good in the sense that right now they’re good enough to help you automate away a task. But there’s nothing that can really replace whole humans right now.

KEVIN SCOTT:

So that’s just a technical fact. And then there’s the fact that as the tools come in and make you more productive at a task, and you have an excess of cognition, or hours in the day to go do stuff, stuff fills the excess. I love your line, Reid—that I’ve heard you say a bunch of times—which is: if you’re a company and you’ve got a productive sales force, and a tool comes along that will help you sell more, you want to sell more. You’re not going to say, “oh, well, now that I’ve got more productive salespeople, I need fewer salespeople.” No. It’s like, no, no, no, great. Now maybe I want even more salespeople.

REID:

Well, and this is actually one of the things that people don’t realize: with companies competing against companies, there are lots of places where they necessarily want to hire people. You say, “well, the marketing job will change.” Yeah, but I still need to do marketing to compete with other companies. And if all of us only have GPT-4, and we go, “oh, we get rid of our marketing department and we use GPT-4,” well, then you have undifferentiated GPT-4 marketing across all of them. You actually need people to be figuring out how to use GPT-4, and how to use it as leverage. And that’s part of the reason why I think this kind of person plus machine is a much longer—and maybe very long—period in our job productivity that most people aren’t focusing on. Because it’s not like, “The robots are coming. Oh, yay!” Or, “The robots are coming. Oh, no!” It’s, like, the robots are coming and we’re going to do a lot of interesting things.

KEVIN SCOTT:

And there are going to be a bunch of things that the tools will be able to do, and we’ll choose not to use the tools for them. Like, I’ve just recently gotten into ceramics, and it’s kind of amazing how little the tools have improved [laugh] over a very, very long period of time, even though it would be quite easy. But part of what I’m doing is making a bunch of tools to help me cheat at pottery. Which, like, all of the ceramics people that I’m trying to learn from look at me and they’re like, “wow, you’re a weirdo. That’s not how we do things.” And there are a bunch of examples of that all over the place. Again, people are at the center of the story. We will choose how we want to use our tools, and we do things inefficiently all over the place just because we like doing them that way.

REID:

There’s all this swirl around AGI, and to some degree, AGI is the AI we haven’t made yet—which is part of the reason why it’s like, “well, we sailed past the Turing test and, oh, well, you know, that was one.” And chess playing and all the rest of this stuff. What do you think is the right way to listen to and take away from this way of thinking? Everyone keeps hearing about “we’re targeting AGI, we’re making AGI, we’re going for AGI.” What do you think the humanist stance on that is? How should we reframe that question, and how should we think about artificial general intelligence as something that we can help shape to a better human future?

KEVIN SCOTT:

Well, I don’t know that I have the perfect answer there. I’ve said a lot that AGI is kind of a Rorschach test for people. A lot of the time it’s: what are you most anxious about in the development of the technology? I don’t even know what it means, honestly. In a literal sense, if you had to sit down and write a technical definition of AGI, I think you would have a very hard time doing it. And that’s part of the struggle—we don’t have a shared understanding of what it means. So the way that I have been thinking about it for a while—and I don’t know that this is perfect—is that I think the world could benefit from an excess of cognition, the same way that the world benefits from an excess of the capacity to do mechanical work.

KEVIN SCOTT:

Like, if you could all of a sudden make scientific progress go faster. You could make the design of new things go faster. You could increase the amount of compassion in the world. You could have sources of infinite patience that could help us understand one another better, and that could provide options, in an objective way, from which to choose to solve some gnarly problems that we have. I think all of that would be good. It’s hard to say that you don’t want any of those things. And so I think a humanist way of thinking about this is: what do you want to encourage more of in the world? What could humans benefit from? And how can you steer this technology in directions where you get more of those things?

KEVIN SCOTT:

And you have to be realistic about what the technology is capable of. And I think you also have to be aware of your biases. I’ve been thinking a lot historically about two technological revolutions. One is the steam engine, and we know what that did and what the opposition to it was. You’ve got the Luddites throwing wooden shoes into the works of these machines to try to break them, so that they didn’t disrupt their livelihoods. And we also know that when you got to equilibrium with that technology, it just massively benefited the world. Even today, if you look at the balance of economic power in the world, it’s mostly because certain countries were earlier than others to adopt industrial technologies.

KEVIN SCOTT:

And so, in some cases, decisions that got made 200 years ago have lingering effects today. And then there’s this other technological revolution—the printing press—that had similar long-term impacts, but it impacted a different set of people. Rather than factory workers, who were worried about disruption, the printing press disrupted basically knowledge workers: people whose job it was to write and to distribute thoughts and ideas via the written and spoken word. And there were a lot of people super agitated about the invention of the printing press and the—I don’t want to call it the commoditization—but the sudden ubiquity of thoughts, and who got to say what to whom. And that was a way more controversial revolution.

KEVIN SCOTT:

We had some [laugh] very serious wars that basically were precipitated by the printing press, because it kind of changed the power balance of the world. And so I think there are things to learn from both of those. Lesson number one is that things are going to be okay over time, and that technology really does massively benefit society. You wouldn’t go take mechanical engines or books away from society. The idea is insane. You can’t imagine how the world would work. And I think someday that’s where we’re going to be with artificial intelligence. You’ll be like, “Oh my God. I just can’t even imagine how things would function without it.” And so most of what we have to do now is to choose what kind of wave of change we’re going to have. Is it a printing press wave? Is it a steam engine wave? Or is it something entirely new? And hopefully it’s something entirely new, because we can go look at history, and we can look at what we want for ourselves, and we can decide on a better path.

REID:

Now let’s go to Rapid Fire. Is there a movie, song or book that fills you with optimism for the future?

KEVIN SCOTT:

I particularly like Adrian Tchaikovsky’s Children of Memory, which I think is a beautiful science fiction book that—I don’t think Adrian was trying to write a, you know, a book about LLMs and transformers and AI—but a lot of the elements of that book are shockingly harmonized with what’s going on in AI. It’s a great book, and everybody should love Adrian Tchaikovsky. He’s awesome.

ARIA:

I love it. So this next question can be personal or professional: Is there a question that you wish people would ask you more often?

KEVIN SCOTT:

[Laugh] So, Reid asked this question at dinner of my 13-year-old, who’s even more introverted than I am. He was like, “Hey, is there a question you wish that someone had asked you tonight?” And her response was, “Do you want to be here?”

ARIA:

For all the introverts out there: get me out of this event.

REID:

[Laugh] Yeah, that was, that was a great answer. Even will.i.am, who was with us, laughed.

KEVIN SCOTT:

She didn’t miss a beat. She knew exactly what question she wanted to be asked.

ARIA:

That’s incredible.  

REID:

Yep. So, where do you see progress or momentum outside of our industry that inspires you?

KEVIN SCOTT:

The progress that we’re making in the biological sciences right now is really awe-inspiring. Some of it is because biologists are using technology, and even using AI, in incredible new ways. But even if you just look at laboratory methods, things are moving at an incredibly fast pace right now. Everything from how quickly we got to effective vaccines in the pandemic, to these new weight loss drugs that are on the market. I mean, it’s just incredible how quickly we are making progress on tackling diseases that have been really difficult to tackle in the past. The rate of change is just high.

ARIA:

Alright, Kevin. Final rapid-fire question: Can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years? And what’s our first step to get there?

KEVIN SCOTT:

Well, I think in 15 years we may have fusion or some other sustainable energy breakthrough, which would mean that we have cheap and abundant energy—which means that you can solve an enormous number of problems. As you have computers that can do more things for us, you can dramatically increase the amount of net computation that is working in the service of humanity. But you can also solve a whole bunch of really nasty problems, like water shortages in a bunch of places. If energy were almost free, making fresh water would no longer be a serious problem. I think we will make real progress on climate change—both in engineering solutions to help make environments that have become warmer more livable, as well as addressing a bunch of the factors that are contributing to the warming in the first place.

KEVIN SCOTT:

I’m very, very hopeful that we will have made serious progress on complex diseases—like Alzheimer’s and cancer—and have a whole host of new therapies that will reduce the net amount of suffering in the world. I am hopeful that we will have a new set of technologies that allow kids everywhere to have access to higher quality education, and that allow elderly people, wherever it is that they live, to live in dignity and with independence longer. I don’t know—the way that I describe myself is short-term pessimist, long-term optimist. If I look over 15 years, I’m super, super hopeful. And the short-term pessimism is just being an engineer. You walk into work every day and everything’s broken, and you’re like, “oh my God, what are we doing?” But if you’re a successful short-term pessimist, you have to have the long-term optimism, because otherwise you just give up. And that’s the worst thing in the world. Just don’t give up.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Ben Skoch, Jennifer Janzen, Adam Zukor, and Little Monster Media Company.