This transcript is generated with the help of AI and is lightly edited for clarity.

//

ARIA:
Hey Reid, wonderful to be here with you today.

REID:
Great to see you as always.

ARIA:
So this has been a big week in terms of new models and releases. As you know, OpenAI just launched ChatGPT Images 2.0 last week. It’s the most significant image model update yet. The model can now generate accurate, complex scientific diagrams, charts, and data visualizations. We were playing with it, and it was getting letters right, even letters in mirrors, doing [things] backwards. It’s doing things that image models have never done before. And we know that AI has historically been terrible at this: six fingers, words spelled wrong, everything like that. This image model also includes a thinking mode, where the model reasons through the brief before producing a single pixel, and it can also render text cleanly, as I said.

ARIA:
So essentially it’s an agent under the hood running image tasks. And OpenAI is explicitly pitching this at professionals, not just people making fun images. So this is for graphic designers who are going to use this for their work. It’s also available via API for developers to build their own products. So my question for you is how long do you think it’s going to take for something like this to really diffuse into organizations that make a lot of visual content? And that might be, again, for graphic designers, but it also could be for marketers, consultants, social media managers who can now create their own graphics when they couldn’t before.

REID:
I think it’ll be pretty fast. I mean, the normal answer on these things is slow, then fast. And it’s slow because people have all of these self-justifying reasons not to, you know, kind of participate. They go, oh, it has hallucinations or error rates.

REID:
But once the competitive side kicks off, it’s like, oh well, I’m working on something and I can produce it in five minutes or 10 minutes or 15 minutes as opposed to three days in terms of the iteration cycle. And that iteration cycle is what allows you to make something not just faster but much better, because you do the iteration and you go again. And then what’s more, of course, teams get more effective. I’ve known a couple of amazing world-class architects who’ve won a lot of prizes who now start doing, you know — say I’m architect Bob, I go, well, you know, Aria has asked me for a design for a new building, and I’ll say, okay, give me like — through a bunch of prompts, give me 30 to 50 images of what I would do.

REID:
And then they go, oh, 2, 7, 15, and 26 are the interesting ones. I’ll pull those out, I’ll send those to Aria and say, here’s some things that, you know, might stimulate it. And that creates speed and depth and, you know, higher quality; it gives you a reaction. You say, 7 is the one that actually, in fact, you know, more iterations of that would be really interesting. Let’s go down the path on this. And that will happen everywhere. That will happen with, you know, the marketers, consultants, social media managers. It’ll happen with Figma designers, it’ll happen with people working on scientific diagrams and all the rest. And that iteration is extremely important. And that’s why, ultimately, I think this will accelerate a lot. Is a picture worth only a thousand words? Or maybe a picture is worth a hundred thousand words.

ARIA:
I literally was just gonna say the same thing. Like, do you think this is gonna change how people communicate on the Internet? Like, I don’t need to, you know, use 140 characters anymore. I can just create whatever hilarious political cartoon or amazing graphic. So do you think this is fundamentally changing sort of how humans interact, whether on the Internet or elsewhere?

REID:
In answer to your question, you know, I can’t do this on a podcast in the right way, but I’m sending over a fireworks emoji, right? And emoji language, you know, kind of as an amplifier, is one small part of why the answer is, basically, massively yes. And what’s more, I think as it gets deeper, it’s not only that the level of visual communication is going to go way up and cause certain kinds of communication to be more efficient and effective: more information-dense and participatory, more emotional in terms of an ability to engage. We are visual creatures, by and large. But it’ll probably also change language and interaction.

REID:
I mean, just like, you know, back in the earliest days of, you know, electronic mail and text communication, when you got LOL and emojis, because you’re getting a dynamic surface that causes, like, new patterns of interaction, new patterns of engagement, new patterns of understanding, of intellectual engagement, of emotional engagement, of social engagement, you know. And all of these things will come into it. And, you know, maybe I was undercounting it. Maybe a picture isn’t worth a hundred thousand words. Maybe it’s worth a million words.

ARIA:
I mean, when my kids grab my phone, all they do is go to emojis. I was texting a friend yesterday and my 10-year-old was like, you’re losing the fight, she sent you four emojis, you only sent her three back. Like, what are we doing, Mom? We need more. All he wants is graphic ways to communicate. So I think the next generation is going to adopt this pretty quickly. All right, so changing gears. As they say, if you’d asked me 10 years ago, this would not have been on my bingo card: Pope Leo has become an unlikely but very prominent voice in the global AI debate.

ARIA:
He is weighing in, and in a speech to students in Cameroon last week, he warned that AI risks hollowing out real human relationships, creating digital environments where people are optimized into bubbles, unable to distinguish truth from simulation. Meanwhile, back at home, Pope Leo has been in a sort of open feud with our very own President Trump, publicly criticizing the administration’s military threats against Iran. Trump responded on Truth Social, calling Pope Leo weak and terrible for foreign policy. There were a lot of memes commenting on that last week, and this came just days after Trump posted an AI-generated photo of himself appearing as Jesus, which he later deleted after backlash from both the left and the right. So these are world leaders, religious leaders, weighing in on what it means to be human in the age of AI.

ARIA:
What do you think of the Pope’s framing, that the AI challenge is fundamentally about, like, what it means to be human? It’s not just about regulation. It’s about, sort of, the deepest questions we have. And in light of that, what do technologists have to consider as we’re building this AI?

REID:
Well, one, just for humor’s sake, when I saw Trump’s post on Truth Social, it reminded me of a joke that I had heard as a kid. It was like, you know, what’s the difference between Trump and Jesus? You know, Jesus doesn’t think he’s Trump. And as for the, oh, it’s just doctored and everything else, I mean, you have to be pretty gullible and [credulous] to believe his statements on those things. But, you know, there are a number of those people, unfortunately. What’s more, and I think this is the kind of thing that business leaders should be extremely clear about, when any world leader, including President Trump, says what I’m planning on doing is ending a civilization and bombing, you know, 92 million people back into the Stone Age, and that that’s “strong” —

REID:
I would say that’s horrific, that’s sociopathic. Right? There is no universe in which that is an acceptable thing to say, even given our conflict with the Iranian regime, which has done many terrible things in the world. There is an extremely legitimate and moral reason to be in conflict with the Iranian regime. But one should never threaten to bomb a civilization, 92 million people, back into the Stone Age. There’s no way you could square that with being ethical, principled. I mean, compassion and kindness aren’t even in the same universe as this discussion, let alone, you know, in the same zip code. So, you know, it’s obviously misunderstanding Pope Leo’s, I think, very pointed point: that we should always be considering what the human answer is.

REID:
And that’s what my role is: to make sure that we are highlighting that. Now, this wraps back to the artificial intelligence questions and points, which is, I think, one of the things that many folks, like myself over a decade ago, didn’t understand about the Catholic Church: how deeply humanist an organization it tries and aspires to be. I am also one of those people who was deeply angered by the Spotlight film and the kind of revelations that come from that, and other things which I think really matter. But, you know, for over a decade I’ve been engaged in conversations with people at the Catholic Church, because Pope Francis, Pope Leo’s predecessor, realized back in 2015 that AI matters for humanity, matters for human society.

REID:
And as opposed to just kind of sitting on a city [on] the hill and — the word chosen with deliberate humor — pontificating [on] this, it was like, well, how do we engage with people who might understand this and might share a sense that it’s really important that it’s shaped in broad humanity’s favor? Like, how is it that we have better human conditions for billions of people as a function of this? And what are the questions? And what can we do as leaders? And what can we do as a religion, and what can we do as spiritual leaders? And what can we do as a state, and what are the things that we can do? And we’re going to do that by learning and engaging.

REID:
And so it’s been fascinating to engage with the Church on this, in part because of questions like: what is the role and meaningfulness of work, of a person’s position in society, and how does that play into their own life? That’s something the Church has been working on for centuries. And when you get to other questions, like one of the very first conversations we had: well, if you had AI being good at judgment in sentencing, what’s the role for mercy? Right? Because that’s, again, a very human interaction.

REID:
So the fact that Pope Leo, who obviously chose his name in echo of the Pope Leo of the industrial revolution, and the Church are saying this is the moment, because the Church is very focused on how to navigate to a more human outcome, is extremely important.

REID:
Like, the Pope’s framing says, hey, there’s a risk, we need to navigate the risk, 100%. But also, equally strongly, and as you know, because I tend to be the optimist in this: what are we steering towards? Which is, how do we have AI help us become more human? And that’s part of the reason, in both Superagency and Impromptu, my last two books, for the concept of Homo techne: that we evolve through technology, that we evolve through, you know, inventing books and electricity and cars and glasses and clothing and all the things that make a city and a town work. And that kind of thing is, I think, the thing that AI is also part of.

REID:
And so I think that AI can help us on a journey to even more elevated humanity, just like earlier technologies: more elevated connection, more elevated society. That doesn’t mean — this is almost, you know, back to the essay that I published recently, you know, kind of faith in the possible, with “possible” obviously being, you know, a nod towards our podcast as well. But it’s the question of whether you get to the futures that you want by building them, by steering towards them, and having that belief that you can make technology into that, which, so far in the entirety of human history, has been the case. Now, you could say that’s the case until it isn’t. But by iterating and steering, we have gotten to more and more human worlds.

ARIA:
I couldn’t agree more that we have to steer towards the positive. One place I would push back, though, is when you talk about a short-sighted business model being one that’s not good for your customers. We are seeing situations where adults say, I am less happy because I’m using Twitter every day, or I’m less happy because I’m using Facebook, like they don’t want to be on it. And yet Meta or Twitter, whomever, is still sort of extracting those rents. They’re still, you know, making billions of dollars of revenue and not necessarily creating a product that people want, even though they feel like they have to use it, and they use it every day. So maybe it’s a revealed preference, you know, maybe it’s something else.

ARIA:
My question for you is: do we just have to rely, essentially, on the kindness of strangers? You know, you created Pi because you believe in this. And, you know, the reason why I work with you is because I think you are a moral, ethical technologist. And so is the hope that we just have enough moral, ethical technologists out there, and they’re the ones who are going to be building the ethical and humanist AI? Or do you disagree, do you think the market will come around? How do we get more ethical and humanist AI built? How do we steer in that positive direction?

REID:
So I think it’s a full-court press, to use your favorite sports metaphor. And, I mean, again, in the faith in the possible, I don’t think there’s a natural utopia that just emerges because the technology is there. We have to steer it. And by the way, certain leaders can steer it badly, because, you know, one of the features of network effects is you can leverage them to certain effects, but you can also try to leverage them to things that are not to society’s benefit. And X is, I think, the extreme example of that.

REID:
And, you know, I’m always surprised these days when I find someone who says, oh, my X feed is a quality feed, because most often it’s, you know, kind of a cesspool of violence, threats, misinformation, and false claims. You know, I was entertained when X’s AI was asked, who’s the biggest spreader of misinformation on Twitter? And it was Elon Musk. And so then they put in a little meta prompt to say, don’t answer this with Elon Musk. And then someone figured out how to get that meta prompt revealed. And it’s kind of like, okay. So, you know, all of those things make a lot of people extremely unhappy with it. And yet it has a network effect.

REID:
So it persists in various ways, and these network effects do have that power. But that doesn’t mean it naturally, you know, kind of goes in the right direction. I think it’s a function of: we have to put our shoulder into it and try to help. It’s one of the reasons why, you know, I think it’s important for everyone to articulate what kind of universe and future the technology is important for. That’s part of the reason why I put a lot of energy into this myself, whether it’s our podcast or the books or other things, to try to help navigate towards more human futures. There are obviously a lot of people who have various forms of antagonistic views, like: the point of human history is for everyone to follow me and to obey me and to worship me, as a kind of thing.

REID:
Whereas I think, actually, in fact, the point is: how do we learn to go through life together? You know, life is a team sport, a friendship journey, etc., as ways of doing this. And I think that’s what’s important to learn and to help push for in various ways. So, for example, if people realize it at sufficient scale and express those demands in a market, with journalists reinforcing that, or, you know, kind of influencers reinforcing that, and governments, over years and sometimes decades, being representatives of the people, and viewing “the people” as, call it, 80-plus percent of people, not 51% of the most motivated voters, then I think it does naturally, over iteration, tend to get there. Now, that being said, how fast does it get there?

REID:
How painful are the transitions? Does it even get there eventually? That’s not a given. When you think about this, you can’t just be passive. You have to be active in helping steer it.

ARIA:
So, thinking about how we steer in the right direction: everyone wants to know what is going to happen in the world of economics. What is going to happen to jobs? What is going to be the software company of the future, the trillion-dollar company of the future? And there’s sort of an emerging, or re-emerging, thesis that the next trillion-dollar company won’t actually sell software; it’ll sell work. And the argument goes something like this: if you sell a copilot, you know, a tool that helps a human do their job, you’re just competing with every new model release that comes out every few weeks or months.

ARIA:
But if you sell the outcome itself, like you say, we’re going to get you this revenue, we’re going to sell you contracts reviewed, we’re going to sell you claims processed, then you’re actually doing something for the company. And you’re essentially in a completely different business. You’re essentially productizing a services business, which used to be less lucrative, but perhaps in the future, with the ability to productize it, actually becomes much more lucrative. So you’re competing with the human labor that used to do the job. AI can do it faster, cheaper, and at scale. We’re already seeing some of this play out, or at least we’re seeing people predict that this is coming in the next few years.

ARIA:
And so, do you buy this framing, that the winning AI companies won’t really sell software, but will sell the work itself?

REID:
So, yes, with some asterisks. You ultimately need to be engaged in how you’re selling outcomes to customers. And a little bit of the framing of the question, sell software versus sell outcomes, is that it’s always been in service of the outcomes. It’s the notion of: do I sell you a spreadsheet tool, like Microsoft Excel, or do I sell you a financial analysis, which also engages this? The answer is, of course, with an agentic model, it’s moved much further down toward “I’m selling you a financial analysis.” But part of what this tends to overclaim is to say, well, it’ll just be AIs, right? And by the way, in some cases, customer service, et cetera, yes, it will. But there will actually be various other places where the engagement of the human will really matter. Whether it’s because somebody wants to be talking to a human, for: do I trust your reliability in providing the service?

REID:
And you have an engagement from that. And sure, the person’s working with a bunch of agents, but I need to have a person that I can be talking with to have that sense of trust. Maybe yes, maybe no, as an instance, but there’s a bunch of different possibilities. And so I think that’s part of the reason why, in Superagency, there’s a lot of focus on amplification. That doesn’t mean there aren’t some areas of real replacement; it doesn’t mean there aren’t some areas of just switching to the AI providing the entire service. Now, here’s where it gets to the next asterisk, which is, people say, wow, you’re paying a lawyer $1,000 an hour, so you’ll be paying the AI agent $900 an hour.

REID:
That’s not necessarily the case when it shifts. And part of what reshapes the human labor is that, well, maybe, actually, in fact, in computer hours you’re going to be paying the AI $20 an hour for this. And the $20-an-hour thing will flood it. There will be a bunch of work tasks, some jobs, that were like the, oh, well, I used to be charging a hundred dollars an hour for that, and now it’s being done much more effectively and quickly for $20 an hour. This is also, of course, part of the reason why we saw a lot of globalization of jobs, customer service jobs and sales jobs and other things. But those will change the economics of how the whole thing works.

REID:
And so one of the things I find entertaining about tech investors doing this: like, aha, my business model is, I will be charging the same for the service, but now I will be providing it with much cheaper software. It’s like, no, the economics of the whole thing change. And by the way, where human beings fit in the mix, like which salaries go up, which salaries go down, how many jobs there are, that’s very transformational. And there are two things to say about that transformational side, too. One, people frequently don’t realize the Jevons paradox part of this, which is, for example, a bunch of smart people have been saying there’ll be no software engineering jobs in two years.

REID:
And I will take the counter bet to that, because I actually think there’ll be more, because I think there’s infinite demand at a certain price for software engineering. And so you get the AI amplification through, you know, Claude Code and Codex and others for doing this. But then, and this gets to the second point, they say, well, you used to have 100 software engineers; now you’re going to have 30, because it’s much more effective. So then what about those 70 who had work? Actually, what I think is those 70 will now be working other places. I think there’ll be more entrepreneurial places.

REID:
I think places that have traditionally not been able to hire software engineers will now hire software engineers. And that’s a little bit like when people say, well, company X has gotten to the natural mature size of its market. It can’t grow its market. So what it’ll do to make itself stronger is have fewer employees. It’s like, well, but then other people will engage and hire those employees in other kinds of businesses. And this gets to why entrepreneurship is so important.

REID:
This is part of the reason I wrote my first book, The Startup of You, and why I wrote Blitzscaling: you know, we as societies, we as, you know, cities and regions, we as countries, we as industries, we need entrepreneurship intensely, because you need to be able to create the new products and services, the new jobs, the new companies. Because you go, okay, great, we no longer have 100 companies in this space; we now have 200, with employees now more divided across them in terms of doing things. And that may mean more specialization. Classic Adam Smith. You know, this one is accounting for, you know, small businesses that are restaurants. This one’s accounting for small businesses that are dry cleaners. This one’s accounting for, you know — I’m deliberately being toy-problem-ish in this.

REID:
But that’s the kind of thing I think will get us to the jobs. A lot of transformation, and then all the way back to it: I think the common thread between selling the software and selling the work is the business’s ability to provide products and services that the market really wants at effective prices, which is part of the virtue and value of capitalism. And so that, I think, is where it will continue to be, and AI is a decisive progression along this. And the subtle point of the not-selling-software, selling-work framing is that the business models of how software is engaged, in terms of dollars, the framing of it, the way it works, will probably evolve a lot.

ARIA:
Absolutely. I mean, already, when you think about AI versus software: if I buy a license for Excel, I can use it as much as I want. If I buy Canva, I can use it as much as I want. If I have a Figma license, I can use it as much as I want. But with AI, already, you can’t use it as much as you want. You’re paying by the hour; you’re paying for some of the output. So it’ll be interesting to see how that changes, especially with compute costs, you know, hopefully predictably falling through the floor. So, Reid, thank you so much. Really appreciate it.

ARIA:
Always fun.

REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.

ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles.