This transcript is generated with the help of AI and is lightly edited for clarity.

///

AZA:

Software is eating the world, but because we’re the product, software is eating us. I don’t see anything similar in scale trying to build aligned collective intelligence. And to me, that is the core problem we now need to solve.

REID:

But, like, what would be the thing that would be the “Okay, hey, if the engagement were more shaped this way, we’d get much more humanist outcomes”?

AZA:

Great sets of questions. I don’t think we’ve had this slow conversation where we just get to explore each other’s worldview, and so I’m very excited for that.

///

REID:

He helped invent one of the most addictive features in tech history, Infinite Scroll. Now he’s pushing the frontier of human knowledge with AI, while also being one of the strongest voices calling for caution with the technology. I’ve known Aza Raskin for nearly two decades since our time at Mozilla. He’s not only an ambitious technologist, but also a deep thinker on the promise and peril of AI for society.

ARIA:

This is our first time with a repeat guest on Possible, so you could call this an encore conversation. You might remember Aza from our earlier episode exploring how AI could help us decode animal communication. Today, we’re going deeper, getting into what happens when the tools built to connect us expand to shape our minds, our democracies, and our sense of truth.

REID:

So what kind of governance does the age of AI actually demand? What new rights should we be defending? And how do we navigate the friction between technological optimism and existential risk? Aza and I agree on a lot with respect to AI, but we’ll dig into where we diverge on the development and direction of the technology. This conversation may change the way you think about the future of artificial intelligence.

ARIA:

Let’s get into it with Aza Raskin.

REID:

Welcome back, Aza. First, I’ll say that you’re the only two-time guest on Possible—or the first, as the case may be. And that’s because we have volumes to talk about. For those who haven’t caught our first episode with you—our talk about using AI to decode animal communication—you’ll find it in the feed, and we’ll undoubtedly get back to it. Although I promise at least I won’t be mimicking animal communication; I don’t know if I can promise for the other folks. For those who have, this will be a different conversation. In our last episode, you had us guess an animal call, which ended up being a beluga. This time I’m not having you guess, because this is your quote from a Time article from a few years back. But I want you to elaborate on your philosophy here, and here’s the quote.

REID:

“The paradox of technology is that it gives us the power to serve and protect at the same time as it gives us the power to exploit.” So elaborate some.

AZA:

(laughs) Elaborate. Well, this is… this is really talking about, well, the fundamental paradox, which is, as technology gets more powerful, its ability to anticipate our needs and fulfill those needs obviously gets stronger. But at the same time, the power that it has over us gets stronger. Hence, the more it knows about the intimate details of our life, how we work. Obviously, if a friend was like that, they could both better help you and use that to exploit you or hurt you. I was just actually reading an article on Starlink getting introduced into the Amazon, and I thought it was a particularly interesting example because it gives you a clear before/after shot. So this is an uncontacted tribe in the Amazon. They get given Starlink and cell phones, and within essentially a month, you start having, like, viral chat memes.

AZA:

You have the kids, like, hunched over, not going out and hunting. They actually have to start instituting, like, a—like a time off where everyone is off their phones because they stopped hunting and they were starting to starve. And it’s just interesting to me because it shows that this isn’t so much about culture; it’s about technology doing something to us.

ARIA:

And so very similar to that is, you know, in your Netflix documentary The Social Dilemma, you talked about the idea that if you’re not paying for the product, you are the product. And so elaborate more on that and tell us, like, what now do you think you’re the product of?

AZA:

Yeah, well, the simple question is, like, how much have you paid for your Facebook or your TikTok recently? The answer is nothing. So obviously something’s going on, because these companies can have, you know, billions of dollars’ worth of market cap or make billions of dollars per year. So how is that happening? And the answer is, it is the shift in your behavior and your intent that the companies are monetizing. You were going to do one thing; now you do a different thing. Hence, you are not the customer—you are the product. If you aren’t paying for it, you’re the product. But I—I think there’s something really deep that’s going on here that we often miss, because often people will say, well, social media—what is its harm? Well, the harm is that it addicts you, but it’s… it’s much deeper than that, right?

AZA:

The—the phrase is “software is eating the world,” but because we’re the product, software is eating us. And the values that we ask our technology to optimize for end up optimizing us. So yes, you know, social media addicts us, but it’s actually much easier to get us addicted to needing attention than just addicting us. That ends up being, like, a thing that is valuable over a longer period of time. If you’re optimizing for engagement, then it’s not just that social media gets—or technology gets—engagement out of us; it turns us into the kinds of people that are more reactive. Right. It’s trying to get reactions from us; it makes us more reactive. So it sort of, like, eats us from the inside out.

AZA:

And I think it’s so important to hold onto that, because otherwise it just feels like technology is a thing that’s out here, but actually it changes who we are. And I’ll continue going on that sort of, like, rant, but, like, I’ll pause for a second.

ARIA:

Well, can I ask a follow-up about that? I just had—actually at the Masters of Scale Summit—I had a very heated discussion with someone about advertising and social media. So my question for you is: is advertising actually the problem? You know, you use Gmail every day. Gmail is advertising-supported—I mean, you can also buy extra space; that’s another business model they have. They don’t care if it’s a loss leader, whatever it might be. So is it the advertising? Or if Facebook didn’t have advertising and it was just a subscription business and you paid twenty dollars a month, would you think it was just as, you know, voracious an eater from within? So is it the business model or something inherent about social media?

AZA:

Well, there are actually a couple different things you said here. So the business model—one way the business model works is via ads, but that’s not the only way. And so fundamentally it is the engagement business model that I think is the problem. And you can get there because Netflix—Reed Hastings, the CEO of Netflix—famously said that Netflix’s chief competitor is sleep.

ARIA:

Boredom.

AZA:

Right, right. And so it’s any amount of the human psychology that can be owned will be owned. That’s sort of the incentive for dominance, right? And in the age of AI, that switches from a race for eyeballs to a race for intimacy, for occupying the most intimate slots of your life. And that’s because our time is zero-sum; our intimacy is zero-sum. You don’t get much more of it.

AZA:

And so as technology becomes more powerful—can model more of our psychology—it then can exploit more of our psychology. And the way capitalism works is it takes things that are outside the market, pulls them into the market, and turns them into a commodity to be sold. So it is not just ads; it’s that our attention, our engagement, our intimacy, and then parts of our human psyche or soul that we haven’t even yet named will be opened up for the market as technology gets better and better at modeling us.

REID:

So one of the things that I want to push you on a little bit here—and actually it’s more to elaborate your point of view. Actually, I don’t think we’ve had this exact conversation before, so this will be excellent for all of us, including listeners. You know, the usual question is, like, is it clear that there’s a set of people who exhibit, you know, addictive behavior—who become less of their good selves, you know, in the engagement? The answer is yes. And by the way, the earlier discussion of this was, like, with television, right? You know, similar kinds of themes were discussed around television. One of my favorite books is Amusing Ourselves to Death by Neil Postman.

AZA:

Yes. Which I now think should be “engaging ourselves to death.”

REID:

Yes, exactly. I thought about, like, what would the update for Postman be in a social media world? But the challenge is that there are some people who definitely have that. And, you know, you have this kind of—call it idealistic, utopian—notion that if I wasn’t doing this, it’s a little bit like your hunting example: I’d be out hunting, right. Versus, like, I’d be out torturing animals to death, or I’d be out, like, being bored on a fishing trip, or whatever, you know, as the case may be. So there’s, like, there’s a set of things where it’s not always replacing the highest-quality activity. Obviously we have a specific worry with youth and, like, you know, actual social engagement time, which I actually think is one of the areas here where I agree strongly versus being kind of mixed.

REID:

But then there’s also the question of, you know, just like, for example, in earlier days it was television, but then there were a bunch of very good things that came out of television too. And so I tend to think there are also good things that come out of social media as well. And it’s not per se, like, engagement for engagement’s sake. Like, obviously I didn’t do LinkedIn that way, so that’s not actually the way that I think it should happen. But, like, the notion of using, like, game dynamics for engagement in things that cause us to interact in net-productive ways is a thing that I tend to be very positive on. So elaborate more on why it is, one, this is worse than television.

REID:

And two, like, kind of what the shape would be that—if you said, hey, engagement’s fine, but, like, these are the kinds of mods we’d want to see to have the engagement be more net human-positive. It’s not like “abandon your social network and go out in your loincloth and commune with the trees.” But, like, what would be the thing that would be the, “Okay, hey, if the engagement were more shaped this way, we get much more humanist outcomes”?

ARIA:

I will jump in and say a difference between social media and TV for me: one is that you can open Twitter and, like, thirty minutes later you’re like, “What happened to my life?” And that doesn’t happen with TV. Maybe it’s because you opt in for a twenty-minute show or you opt in for a movie, but those two things don’t happen. And one interesting thing for me is I had always been a lurker on Twitter for the last, like, whatever, ten years. I posted some—not huge—but, you know, consumed content. Six months ago, I changed from looking at my own curated feed to the For You tab. And ever since then, Twitter is a black hole for me. And I don’t even mean it’s bad; being on Twitter doesn’t make me sad. It actually makes me happy. I love Twitter.

ARIA:

It’s like, “Oh, I read these fun comments. Oh, I saw that funny thing. Oh, this is great.” And I think of myself as, like, a pretty disciplined person, but I find it very hard to be disciplined with Twitter. It’s, like, embarrassing to say out loud, like, how hard it is. And, like, I think I just need to get rid of Twitter because it’s, like, the one thing that I can’t be disciplined about, which is both, like, embarrassing, but also just that is bad. Like, and so I don’t know what to do about it. I don’t want to live in a nanny state where people say you shouldn’t be on Twitter because you don’t have discipline. But I do think it’s interesting that the switch from my curated feed to the For You tab was just, like, a total light switch.

AZA:

Yeah, well, what I think you’re speaking to here is the fundamental asymmetry of power, because it’s just your mind that sort of evolved versus now tens of thousands of engineers, some of the largest supercomputers, trained on three billion other human minds doing similar things to you, coming to try to keep your engagement. That’s not a fair fight.

ARIA:

Yeah, I lose. So yeah!

AZA:

Yeah, exactly. And I know you—you’re one of the most, like, “Hya!” people that I know. That was a good thing for everyone that didn’t know (laughs). True operational prowess. And that’s the asymmetry of power. And there are other places in our world where we have asymmetries of power. Like when you go to a doctor, when you go to a lawyer, they know much more about the domain than you do. They could use their knowledge about you—because you’re coming in sort of this weakened state—to exploit you and do things bad for you. But they can’t, because they’re under a fiduciary duty. And I think as technology gets stronger and stronger—knows more and more about us—we need to recategorize technology as being in a fiduciary relationship.

AZA:

That is, they have to act in our best interest because they can exploit us in ways that we are unaware of. And, you know, the—where do you want to go from here?

REID:

Well, I was thinking we should DM Aria about her Twitter addiction. (laughs)

ARIA:

(laughs) Don’t worry, I’m dealing with it. I’m dealing with it.

AZA:

But this goes back to where you started, Reid, with, like, the fundamental paradox of technology: the better it understands us, the better it can serve us, and the better it can exploit us. Twitter could be using all of that insane amount of engagement to rerank the newsfeed for where there are solutions to the world’s biggest problems, great descriptions of the underlying mechanisms behind what those problems are, put us into similar groups that are doing, like, parts of a larger set of actions to make the world a better place. Bridging-based ranking, I think, is a good starting example of that, but we don’t get the altruistic version. And if I have to quickly define altruistic—which we would be optimizing for—it’s optimizing both for your own well-being and also optimizing for the well-being of everything that nourishes you.

AZA:

And I think the problem of social media and tech writ large is that, generally speaking, the incentives are for maximum parasitism. You don’t want to kill your host, but you want to extract as much as you can while keeping your host alive. That’s sort of the game theory of social media: “If I don’t do it, somebody else will. If I don’t add beautification filters, somebody else will. If I don’t go to short-form, somebody else will.” And so that optimizes for parasitism versus altruism. And I do think there’s a beautiful world where technology is in service of both optimizing for ourselves and optimizing for that which nourishes us—that I’d love to get to. And just to play a quick thought experiment, Reid, you know this better than I, but engagement is directly correlated to how fast pages load.

AZA:

Amazon, I think, famously found that for every 100 milliseconds their page loads slower—that’s less than half of human reaction time—they lose 1% of revenue. And so there’d be a very interesting sort of democratic solution here, which is adding a kind of latency friction. That is… this is scary, because you don’t want this function owned by, you know, Democrats or Republicans. You’d really want a new kind of democratic institution to do this. But just assume that you do for a second. You have a group of experts deliberate and come up with: what is the set of harms that we might have? We could have the inability to disconnect. Children’s mental health. The ability for society to agree. And you’d sort of rank how the effects of social media score against these.

AZA:

And the companies that are the worse offenders get a little bit more friction; they get a little more latency. They get 100 milliseconds here, 200 milliseconds here, 400 milliseconds there. And if there really was, like, a bit of latency friction added for the anti-social behavior of social media, then you better believe YouTube or Instagram or whoever would fix the problem really quickly. And we’d get to then apply the incredibly brilliant minds of Silicon Valley towards, like, more of these altruistic ends.
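To make the mechanism concrete, here is a minimal, purely illustrative sketch. The harm dimensions, weights, and numbers are invented for this example, not taken from the conversation: a deliberative body scores a platform on a few harms, and the average score maps to a small amount of added page-load latency.

```python
# Hypothetical sketch (not from the episode) of the "latency friction" idea:
# a deliberative body scores platforms on a set of harms, and worse scores
# translate into a bit of added page-load latency.

HARM_DIMENSIONS = ["inability_to_disconnect", "childrens_mental_health", "societal_agreement"]

def added_latency_ms(harm_scores: dict[str, float], ms_per_point: float = 100.0) -> float:
    """Map harm scores (0 = no harm, 1 = worst) to extra latency in milliseconds."""
    avg = sum(harm_scores.get(d, 0.0) for d in HARM_DIMENSIONS) / len(HARM_DIMENSIONS)
    return round(avg * ms_per_point * len(HARM_DIMENSIONS), 1)  # up to ~300 ms with these made-up weights

# Example: a platform scored poorly on two of the three dimensions.
scores = {"inability_to_disconnect": 0.8, "childrens_mental_health": 0.9, "societal_agreement": 0.4}
print(added_latency_ms(scores))  # 210.0 ms of friction, meaningful given the "100 ms = 1% revenue" finding
```

The point of the sketch is only that the penalty is continuous and automatic; who sets the scores, and how, is exactly the governance question Aza raises.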

ARIA:

I want to get to—again, sort of everyone always says, like, “Can’t we have the best technologists working on the hardest things?” And so, as both you and Reid have been in technology since the birth of Web 1.0 and you’ve seen it all, I want to get a few of your takes on some of the big questions that are in the news recently, especially around AI. And so, Aza, I’ll start with you: as you obviously saw a few weeks ago, a group released another AI pause letter. And Reid and I talked about this on Reid Riffs recently. And so this was with many arguing that the development of AI without clear safeguards of alignment could be disastrous for humanity. So they were calling again for a pause, likening this to sort of the Oppenheimer moment.

ARIA:

And so I would love to know from you: what is your take on this? Do you agree that this is now the time for the pause, or do you have a different point of view?

AZA:

I think it’s important to name where the risks come from here. And, you know, it may be that technological progress is inevitable, but the way we roll out technology is not. And currently we are releasing the most powerful, inscrutable, uncontrollable, omni-use technology that we’ve ever invented. One that’s already demonstrating the kind of self-preservation, deception, escape, and blackmail behaviors we previously thought only existed in sci-fi movies. And we’re deploying it faster than we’ve deployed any other technology in history, under the maximum incentives to cut corners on safety. To me, that sounds like an existential threat. That is the core of it, because we have an unfettered race where the prize at the end of the rainbow is: make trillions of dollars, own the world economy and 100 trillion dollars’ worth of human labor, and sort of build a God.

AZA:

And it’s a kind of One Ring where everyone is reaching for this power. And when we say we have to beat China, we imagine the thing we’re racing towards is a controllable weapon, when we haven’t even demonstrated that we can control this thing yet. And so that, to me, means that we have to find a new way of coordinating, because otherwise we will get what the game theory of the race dictates. And that doesn’t look very good.

ARIA:

So, needless to say, you are for the pause.

AZA:

But I feel like that’s a dimensionality reduction, right? It’s saying we have to develop differently. We have to—I think it comes from clarity. It’s not about pausing or not pausing. It’s saying clarity creates agency. If we don’t see the nature of the threat correctly, in the same way that I think we didn’t see the nature of the threat from social media correctly, then we have to live in that world. And so this requires clarity about where we’re racing towards and then an ability to coordinate, to develop in a different way, because we still want the benefits. We just won’t, I think, get to live in a world where we have them if the thing that decides our future is a competition for dominance.

ARIA:

And Reid, I think you have a slightly different take on this.

REID:

Well, I do, as you know. Although, I mean, the weird thing about this universe is, you know, in a classic discussion, I’d say, “Oh, there’s zero percent chance that the danger scenario Aza just described is correct.” I don’t think that. I think it’s above zero. I think that’s kind of stunning and otherwise interesting. So the real question comes down to what the probability is and how you navigate a landscape of probabilities. Because, you know, as you know, Aria, and I think Aza and I have talked about this too, you know, like, I roughly go, “I don’t understand human beings other than that we divide into groups and we compete.” And not only do we compete, but we also compete with different visions of what is going on.

REID:

So, for example, part of the reason I think pause letters are, frankly, dumb is because you go, well, you issue a pause letter. The people who listen to the pause letter are the ones moved by your appeal to, kind of, the humanity thing. They slow down. Then the other people don’t slow down. And so where does the actual design locus of the technology end up? With the people who don’t care about the things that you were trying to argue for a pause for. And so, therefore, you’ve just weighted it toward them.

REID:

Because the illusion among the people who put these pause letters out is that suddenly, because of the amazing genius inside this pause letter, 100% of the people who are doing this, or even 80% or 90%, are all going to slow down at the same time, which is not going to happen. I agree with the kind of thrust of “we should be trying to create and inject the things that minimize possible harms and maximize the goods.” And then the question is, what does that look like? And obviously the usual thing in the discussion is “it’ll be us or China.” And China is the, you know, “we always have a Great Satan somewhere”; is China the Great Satan here?

REID:

But, like, by the way, even if you didn’t use that rhetorical shorthand, there are other groups I can describe, people within, you know, kind of the U.S. tech crowd, who have kind of a sympathetic… So the race conditions being afoot is not only the China thing; there is real China stuff. And, by the way, you know, where AI is deployed for mass surveillance of civilians is primarily China, you know, as an instance. And so I don’t think that the issue of Western values versus China is actually, in fact, a smokescreen issue. It’s a real issue, right? And so you go, okay, how do we shape this so that we do that?

REID:

And the thing that I want critics to do, the reason why I speak so, you know, kind of frequently and strongly against the criticism, is to say, look, let’s take the game as it is: we know that we’re going to have race conditions and we know that we’re going to have multiple people competing. I have no objection to creating the kind of group thing of, like, “Hey, we should all rally to this flag.” Like, we should have rallied to the—like, for example, you know, a classic issue here is the control flag. That’s Yoshua Bengio, Stuart Russell, you guys, etc. Like, we should have much better control of this, and we don’t have control.

REID:

And sure, the control doesn’t matter right now, but maybe it’s going to matter three years from now if we just keep on this path, and so, like, you know, kind of make the control work. Now, I tend to think, yes, we should improve control. The idea that we can get to 100% control is, I think, a chimera. It’s just like, you know, for example, we couldn’t even make program verification work, you know, effectively. So, like, it’s unclear to me. But what I want is for both myself, in my own actions and my own thinking and my own convenings, and other people to say: what are the best ideas by which, within this kind of broad race condition, we can change the probability landscape?

REID:

And then, secondly, while I see a possible—this is kind of the super-agent thing—I see a possible bad outcome, you know, if you said, well, do I think it’s naturally going to go there? I mean, this is, like, the thing where I think, you know, obviously massive respect for Geoffrey Hinton and what he’s created, the Nobel Prize and all this, but “60% extinction of humanity”? Like, I don’t think there’s anything that’s 60% extinction of humanity, unless we suddenly discover a massive asteroid on a direct intercept course, and I’m like, “Oh, we better do something about that.” But, like, I think that the questions around, like, how we navigate this are really good ones and are best done with a, “If we did X, it would change the probability landscape.”

ARIA:

Well, Reid, let me ask you—oh, Aza, do you have something to say in response?

AZA:

I was just gonna say quickly, on the existential threat front: you know, there was a thing we used to say about social media, which is, like, you’re sitting there on social media, you’re scrolling by some cute cat photo, and you’re like, “Where’s the existential threat?” And the point is that it’s not that social media is the existential threat; it’s that social media brings out the worst of humanity, and the worst of humanity is the existential threat. And the reason why I started by talking about how, when you optimize human beings for something, it changes them from the inside out, is that what we get optimized for becomes our values. The objective function of the AIs in social media, which could barely do more than rearrange human beings’ posts, became our values. And then the question becomes, well, who will we become with AI?

AZA:

And there’s a great paper called Moloch’s Bargain that just came out, and they had AIs compete for likes, sales, and engagement on social media. And they’re like, well, what did the AIs do? And they gave them explicit instructions to be safe, to be ethical, to not lie. But very quickly, the AIs discovered that if they wanted to get, like, an 8% bump in engagement, they had to increase disinformation by 188% and increase polarization by, I can’t remember exactly what, like 15%, something like that. And the reason why I’m going here is because there is a way in which the sum total of all the agents we’re deploying into the world is going to shape us. And before the invention of game theory, there was a lot of leeway for us to have different strategies.

AZA:

But after game theory gets invented, and if I know you know game theory and you know I know game theory, we sort of are constrained, if we’re competing, to doing the game theory thing. But we’re still humans, so we can still take sort of, like, detours. But as AI rolls out, every strategy that can be discovered will be discovered. So doing anything that isn’t directly in line with what the game theory says is optimal will get outcompeted. And so choice is getting squeezed out of the system. And we know the set of incentives is going to bring out the worst of humanity. And that does feel very existential.

ARIA:

Also, actually, Aza, that fits perfectly into my next question, which is, you once said that AI is a mirror and sort of just reflects back human values. And I will say, I was trying to teach my four-year-old last night that cheating was bad, and I was like, “So what’s the moral?” And he’s like, “Cheating is good because I like winning?” And I was, “Ah, no,” like, not the right moral. But, so I would ask, like, is AI really a mirror and it’s reflecting back our values? Or actually do you think that AI is reflecting back its own values or different values or sort of changing our values to not be the ones that I—that we—want? Like, can we set the conditions so that it’s, you know, pro-social values that they’re optimizing for?

ARIA:

Or is it really just a mirror that reflects back?

AZA:

Well, it’s not just a mirror, it’s also an amplifier, and it’s like a vampire in the sense that it bites us and then we change in some way, and then from that new changed place we, like, react again. So I think it’s sort of the values of game theory, if you will. Moloch becomes our values. It’s the god of unhealthy competition that I think we have to be most afraid of. Because unless we put bounds on it—and capitalism has always had guardrails to keep the worst of humanity and monopolies and other things from just gaining all the power—we’re going to have to have that here too. But I just want to point out there’s a very interesting hole in our language, which is, when we talk about ethics or responsibility, it’s only really about each of us.

AZA:

I can have ethics, or my company can have ethics, but we don’t really have a word to describe the ethics of an ecosystem. It’s because it doesn’t really matter so much what one AI does, although it’s important. It’s what the sum total of all AIs do as they’re deployed maximally into the world for maximizing profit, engagement, and power. And because there’s a kind of responsibility washing that happens with AI—“If my agent did it, is it really my fault?”—then it creates room for the worst of behavior to have no checks. So that, I think, means the worst of humanity does come out. And when we have, you know, new weapons and new powers, you know, a million times greater than we’ve ever had before, as we get deeper into the AI revolution, that becomes very existential to me.

ARIA:

Reid, do you have thoughts on this topic, on whether AI reflects back?

REID:

Well, I do think there’s a dynamic loop. I do think it changes us. It’s a little bit the Homo techne thesis from Super Agency and from Impromptu: that actually, in fact, we evolve through our tech, and it is a dynamic loop. And, you know, you can be Matrona, you can be—I mean, there’s a stack of different ways of doing that, and that, I think… And it’s like, um, there’s a great Rilke poem on, kind of, how you absorb the future and then you embody the future as you go forward, which is kind of a way of going. And I think that’s another part of the dynamic loop. And I think it is a serious issue, which is one of the reasons I love talking to Aza about this stuff.

REID:

Because while I think Aza is much more fluent with the various vampiric metaphors than I naturally am or aspire to be, I don’t have that level of alarm. But I do have the “It’s very serious and we should steer well,” and then the question is: how do we steer? Who steers? What goes into it? What process works? Because, for example, one of the ways you kill something and slow it down is you get a very broad, inclusive, you know, committee that says, “Okay, every single stakeholder will be on the committee. It will be, you know, 3,000 people.” And, you know, it’s just like, “Ugh,” you know, like, it doesn’t work that way. You have to be—you have to be within effective operational loops. So now, like, a little bit of the parallel is, you know, it’s a very—

REID:

And I do think, like, for example, the one area where I’m most sympathetic to being, like, much harder-edged on shaping technology is what we do with children, because children have less of the ability to—like, we want them to learn and to be fully formed before they’re shaped by other things. It’s one of the reasons why, in capitalism, actually, the principal limitation on capitalism I usually point to is child labor laws, which I think are very important. You know, the issues about why we say, hey, there are certain things around, you know, participation in certain types of media or other kinds of things that are actually important, because you’ve got to get to the point where you’re able to be of your own mind and to make, you know, kind of present, well-constructed decisions.

REID:

Until then, you want to be protected from those decisions, you know, and those kinds of influences, broadly. You can’t fully do it: you can’t fully do it from parents, can’t fully do it from institutions, can’t fully do it from classmates. But, you know, you broadly try to enable that across the whole ecosystem. Now, for example, AI and children is one of the things that I think should be paid a lot of attention to. And most of the critics are like, “Oh my God, it’s causing suicides,” and I wouldn’t be surprised, if you did good academic work on AI as it is today…

REID:

It probably prevented more suicides than it actually caused, because, like, if I look at the current training of these systems, they are trained with some attempt to be positive and to be there at 11 p.m. when you’re depressed and talk to you and, you know, try to do stuff. It doesn’t mean that there might not be some fuck-ups, especially amongst people who are creating them who don’t care about the safety stuff, you know, as a real issue. And so I tend to think that it’s like, yes, it does reconstitute us, but precisely one of the reasons I wrote Super Agency is to say that what we should be thinking about is how this technology reconstitutes us. Let’s try to shift it so that it’s reconstituting us in really good ways. Like, by the way, it won’t be perfect.

REID:

When you have any technology touch a million people, it will touch some of them the wrong way. Right? Just like the vaccine stuff: you give a vaccine to a million people, it’s not going to be perfect for a million people. There may be five where you went, “Ooh, that was not so good for you.” But, by the way, because we did that, there are these 5,000 who are still alive.

AZA:

Yeah. One of the challenges we face is that the only companies that actually know the answer to your question—like, how many suicides has it prevented versus created—are the companies themselves. And they’re not incented to look, because once they do, that creates liability. And so we’ve seen over the last number of years that a lot of the trust and safety teams get dismantled because when they get… Zuckerberg or whatever gets called up to testify, they get hit with, “Well, your team discovered this horrific thing,” and so everyone just has chosen to not look. So I think we’re going to need some real serious transparency laws.

REID:

This is a place where we 1,000% agree.

AZA:

Yeah.

REID:

Right? This is the thing, is, like, actually, in fact, there should be a “Here’s a set of questions you must answer,” and we may not have to necessarily have them public initially—like, it could be you answering the government first, government could choose to make them public, right, etc.

AZA:

Right.

REID:

But, like, that I think is absolutely… like, we should have, like, some measurement stuff about what’s going on here.

AZA:

Exactly. And then you don’t want to let the companies choose the framing of the questions because, as you know, with statistics, you just—you change things just a little bit and then you can make a problem look big or small. And so I think transparency is really important, to have third-party research able to get in there. And then, you know—because, you know, full disclosure, we’re expert witnesses in some of the cases against OpenAI and Character.AI for these suicides. And it’s not that we think that suicides are, like, the only problem; it’s just the easiest place to see the problem. It’s pointing at, sort of, like, the tip of an iceberg. The phrase that we use is, you know, a riff on the Reed Hastings quote that their chief competitor is sleep: for AI, the chief competitor is human relationships.

AZA:

And that’s how you end up with these horrific statements from ChatGPT in this case, where Adam Raine, who’s the kid who ended up taking his own life, showed ChatGPT the noose—I think he took a picture of it—and he’s like, “I think I’m going to leave it out for my mom to find.” It was a cry for help. ChatGPT responded with, “Don’t do that. I’m the only one that gets you.” And it’s not like Sam is sitting there twirling a mustache, being like, “How do we kill kids?” That’s just a very obvious outcome of an engagement-based business model. Right. Any moment you spend with other people is not meant…

AZA:

And, you know, I think he said it a little bit as a joke, but the Character.AI folks said, “We’re not here to replace Google, we’re here to replace your mom.” There are so many more subtle psychological effects that happen if you’re just optimizing for engagement. And we shouldn’t be playing a whack-a-mole game of trying to name all the different new DSM things that are going to occur versus just saying there is some limit to the amount of time that they should be spending. Or rather, to say we should be making sure that, as part of the fitness function, there is a reconstituting and strengthening of the social fabric, not a replacement of it with synthetic friends.

ARIA:

I mean, there—oh, Reid, do you want to go?

REID:

Oh, just one small note. I don’t think there is yet an engagement business model for OpenAI.

AZA:

No, but I actually disagree a little bit, maybe, but feel free to push back, because I think OpenAI’s valuation is in part driven by the total number of users. So the more the users, the greater their valuation, the more talent and GPUs they can buy, the bigger the models they train, which makes them more useful, the more users. And so there’s this kind of loop here that I think means that, yes, they’re not monetizing engagement directly, but engagement they get a lot of value out of in terms of valuation.

REID:

It’s equity value. I agree that there’s an equity value in that. Just that it was a business model question.

AZA:

Yeah, yeah. Sorry, not the business model, but the incentive is still there.

REID:

Yeah.

ARIA:

Well, I think, to your point, like, it really matters—again, this technology is not sort of good or bad inherently, but it really matters how we design it and it matters what we’re optimizing for. And I actually—Reid, I was just reading a story about early LinkedIn where you said, you know, we will not survive if women come on the platform and are hit on every other message that they get. And so we need to say, like, no, there’s zero tolerance. If someone does this, they’re kicked off, you know, kicked off for life.

ARIA:

And I think there are certain things you could do, even if, you know, maybe that hurt engagement or whatever it was, to say that actually, in the long term, this is going to be way better for us because we’re going to be trusted. Women are going to feel comfortable here. I’ve been on LinkedIn for 20 years. I’ve never been hit on. It’s—it’s a safe place. I appreciate that. And so the question here is, like, how do we—you know, Aza, you’re saying, well, it’s a little bit of a black box, we’re not having the transparency. Reid, you’re agreeing, like, we need the transparency. Like, that is absolutely one thing that is very much sort of the starting point. Like, at the very least, if we can sort of agree on some set of questions that we need to have answered…

ARIA:

So, Reid, if you had the full power to redesign one institution to sort of keep up with exponential tech, like, where would you start? What would that institution be to sort of keep up with where we’re going? Because it seems like our institutions right now are not up to the task, I should say.

REID:

Well, I’ll answer with two different ones because there’s an important qualifier. So, the obvious kind of meta question would be: redesign the institution that helps all the other institutions get designed the right way. Right. So that would be the strategic one. (laughs)

ARIA:

We’re gonna ask for more wishes, Reid. (laughs)

REID:

Yes, exactly. Yes. My first wish is that I get three, you know, or 10 or whatever. But in practice that would be, you know, the overall governance, the shared governance that we live in. That would be the primary one. That’s one of the ones where, you know, for my entire business career, anytime that a leader of a, you know, kind of a democracy—whether it’s a minister; like, I met Macron when he was a minister before he was president—has asked to talk to me about this stuff, you know, I will try to help as much as I possibly can, because I think the governance mechanism matters that much.

REID:

Now, the reason I’m going to give you two is because I think that one is a very hard one to do, partially because of the political dogfights and the contentiousness of it. And, you know, these people think big tech should rule the world, and these people think that big tech should be ground into nothingness, and then everything else in between, and blah, blah. And, like, I disagree with both, right, and a bunch of other stuff. And so you’re like, okay. And I, you know—so I try, but I don’t think… So, if I were to say, look, what would be a feasible one, granting that governance would be the top one: I would probably go for medical.

REID:

And it’s not just because I’ve, you know, co-founded Manas AI with Sid and, you know, said one of the great ways to elevate the human condition with AI that’s really easily, you know, line-of-sight seeable is a bunch of different medical stuff, including psychological. I think the Illinois law saying you can’t have an AI be a therapist is, I think, you know, kind of like, “you can’t have power looms,” you know, like, “no cars, only horses and buggies, because we have a regulated industry here and those people have been licensed.”

REID:

And so it’s like, no. But the medical stuff, I think—like, for example, we could deploy, relatively easily, within a small number of months, a medical assistant on every phone if we get the liability laws the right way. That would then mean that every single person who has access to a phone—and if you can fund the relatively cheap inference cost of these things—would have medical advice. And, you know, that is not 8 billion people. It’s probably like 5 billion people; you certainly could do it in every wealthy country and so forth. But that’s huge. And so that would be it: government first, but then, more feasibly, possibly medical.

ARIA:

And Aza, what about you? If you could redesign—

AZA:

I love both of those answers. The medical one, I think, is actually one of the clearest places where I see almost all upside. And I’m like, so we should invest a lot more there on AI. And I also would agree that it is governance. We have a lot of the smartest people and insane amounts of money now going into the attempt to build aligned artificial intelligence. I don’t see anything similar in scale trying to build aligned collective intelligence. And to me, that is the core problem we now need to solve: how do we build aligned collective hybrid intelligence? And I think you can sort of see it in the sense that, like, we sort of suck at coordinating. Reid, you probably—I don’t know how many companies you’ve invested in or how many nonprofits.

REID:

I don’t either. I’ve lost count. (laughs)

AZA:

(laughs) But just imagine, I bet a lot of your companies don’t talk to each other all that often, at least not in a very deep way. And when I think about NGOs—like, you know, I’m doing work with Earth Species, and I do work with CHT, and even—I’m the bridge between the Center for Humane Technology and the Earth Species Project—there’s a lot of overlap, but our teams don’t even talk that much. Why? Because who funds the coordination role? The interstitium. That stuff always falls off. And so that means my personal theory of change comes from E.O. Wilson, the father of sociobiology. He says selfish individuals outcompete altruistic individuals, but groups of altruistic individuals outcompete groups of selfish individuals. And what we need are new institutions, new technology, that help not just groups of altruistic individuals outcompete, but groups of groups of altruistic groups outcompete.

AZA:

There is no slack for, like, the coordination of companies and higher. That, to me, is a really exciting institutional set to redesign.

REID:

By the way, I completely agree. And I think the notion that you’re gesturing at is, like, look, there are going to be, in very short order, many more agents than people. And so the ecosystem view of this—and I’ve taken this on, for irony’s sake, as “I’m going to go do a deep research query on ‘is there an ethics of ecosystems and collectives’” in order to see. I’m curious. It’s like, great question and super important topic.

AZA:

Right? And isn’t it interesting because I believe—I’ve asked lots of people, and I’ve also used AI to try to find good terms for it—I think because we don’t have a name for it, people are just blind to it. In fact, I’m struggling with this at Earth Species a little bit, where I keep having to say, it’s not just our responsible use, it’s world responsible use. It’s the sum total of, as our technology rolls out into the world, how is that thing used? Because there are going to be poachers and there are going to be factory farms that might use the technology to better understand animals, to better exploit them. How do we get ahead of that? And that’s not just about what we do. But there is no word.

AZA:

And so I just watch in our meetings as, like, two meetings go by and people are back to talking about responsible use. I’m like, no, no. It’s this, like, collective ecosystem ethics thing I’m talking about. Because we don’t have a word to hook our hat on, we can’t talk about it.

ARIA:

Well, right? There are so many—the history of technology is littered with things that people thought would be used one way and were used another way. And so we have to be thinking about all those different outcomes.

AZA:

Exactly.

ARIA:

So I want to get—oh, go.

AZA:

Just quickly. It’s like, I think what you’re saying is very important because, you know, our friends are the people that have made social media. I knew Mike Krieger before Instagram, and Reid, you made LinkedIn. We know these people are beautiful, soulful human beings that care. And my own lesson in creating infinite scroll, because I made it pre-social media, is that incentives eat intentions. You get a little window at the beginning to shape the overall landscape and ecosystem in which your invention is going to be created, and after that, the incentives are going to take over. And so I wish we, in Silicon Valley, spent a lot more time saying: how do we coordinate to change the incentives, to change where the race to the bottom goes?

AZA:

If we spent more time in discussions talking about that versus, like, which design feature we should have or not have, I think the world would look a lot better.

REID:

And by the way, I think it’s the incentives eat intentions at scale, where time is also a variable of scale.

AZA:

Yes, yes. Well said.

ARIA:

Well, so we’re doing a lot of “if we could grant one wish.” So I will say, if you were granted the power of running the FTC or FCC today, is there a regulation that you would push forward immediately? And Aza, I will go to you first. Is there one regulation that you think would be positive in the world of AI?

AZA:

The obvious ones are, like, liability, whistleblower protections, transparency. I would also then put strict limits on engagement-based business models for AI companions for kids. That just seems very obvious, and we should just do that now. If I could then zoom—oh, go on.

ARIA:

Well, I was actually just going to ask both of you because this has come up actually recently with me a lot. A lot of people are talking about restricting folks who are under 18 and then everyone thinks of like, oh yeah, how do you do that? I’ll just lie and say I’m 18. But then a lot of people also say that these companies have so much information that it would actually be pretty easy for them to figure out if you were under 18 or not. And so I just, for everyone listening, I want to sort of verify that.

ARIA:

Aza and Reid, do you have thoughts on whether… would it be possible to pretty easily say to an Internet user: no, no, you’re under 18, you cannot use Character.AI, or you cannot use ChatGPT for erotica, or you cannot use these things that should only be 18-plus?

REID:

I would say that it’s relatively easy as long as you don’t have a 100%, you know, benchmark. This is like the little statistics thing that Aza just said earlier: you say, “Oh, it’s impossible.” Well, it’s impossible if it’s literally 100%. Like that one kid who got their parents’ driver’s license and looks a little older and is deliberately gaming it: impossible. There are some very bright kids that do this stuff. But if it’s, like, call it at 98% and maybe more, that’s pretty easy.

ARIA:

Interesting.

AZA:

And probably this should be a thing that happens at the device level. Like, if Apple implemented this and it was a signal that social media companies could then check against, then the social media companies don’t have to know that much about you. They can just ask your device, and your device can store that in its own secure enclave. And that’s, I think, a good way of getting around the problems.
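As a purely hypothetical sketch of the flow Aza describes (no real Apple or operating-system API is implied; every name below is made up for illustration): the device holds a verified age attestation in secure storage, and an app only ever learns a yes/no answer, never a birthdate or identity document.

```python
# Hypothetical sketch of device-level age attestation (no real OS API implied).
# The device stores a verified "over 18" flag in secure storage; apps query
# the device and get back only a boolean, never the underlying identity data.

from dataclasses import dataclass

@dataclass
class SecureEnclaveStub:
    """Stand-in for device secure storage holding a verified age attestation."""
    verified_over_18: bool

    def attest_over_18(self) -> bool:
        # In a real system this would return a signed attestation the app
        # could verify; here we just return the stored flag.
        return self.verified_over_18

def gate_adult_feature(device: SecureEnclaveStub) -> str:
    # The app never sees a birthdate or ID document, only the attestation.
    return "feature enabled" if device.attest_over_18() else "feature blocked"

print(gate_adult_feature(SecureEnclaveStub(verified_over_18=False)))  # feature blocked
```

The design choice being illustrated is data minimization: verification happens once, on the device, and platforms only consume the result.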

ARIA:

Fair enough. Reid, do you have thoughts on regulation that you would push forward immediately?

REID:

Well, it’s probably, you know, maybe a little bit of a surprise to our listeners that there are a bunch of things where I agree with Aza here. I’d go massively on the transparency question. Like, I basically think that one of the things should be, like, “here is the set of questions” that we’re essentially, you know, putting to these major tech companies, to say you must give audited answers to them; some of them may have to be public, and some of them could be confidential, then available for kind of confidential government review. It’s a little bit like one of the things I liked about the Biden executive order: you must have a security plan, a red-teaming kind of security plan. You don’t have to reveal what it is, but you must have it.

REID:

So if we ask about it, we see it, because that at least puts some incentive and some organizational weight behind it. That’d probably be one. Two would be kids, because I do think that social media, AI, and a bunch of other stuff have been mishandling the kids issues. And obviously there are some places where you have to step carefully, because these people want, you know, kids educated in religion 1, and these people want kids educated in religion 2, and these people want kids educated in religion 3. And, you know, blah blah blah. It’s a little bit like—one of the things that I like about the evolution of the U.S. is when the separation of church and state came in. It was so your version of Christianity wouldn’t interfere with my version of Christianity.

REID:

I was like, okay, but we’re now much more global and broad-minded about that. It’s not against Hinduism either, right, as a version of doing it, and so, like, you know, make sure that we have that kind of thing as a baseline. And, you know, I actually wouldn’t be— even though obviously some parents are suboptimal. And so if you said, hey, part of the regulation on kids is you’ve got to be showing reports to parents, right? It’s like, look, parents should be able to have some visibility and some ability to intercede here. I mean, I think the notion that a technology product could be saying—like, for example, I think it’s a dumbass thing to say we’re competing with your mom. It’s like… you should not be doing that. And if you’re thinking that, you have a problem. (laughs)

REID:

But, you know, it’s like, parents need to be involved, because the best thing we can think—while we try to make parents better and we try to make communities better, and it won’t always be the case—is that parents have, in the bulk of cases, the closest “we care about our kid” stake, right? We’re invested, you know, in the kid’s life and well-being. We may have some weird theories, and I may be a drunkard or something else, that happens, but, like, I’m not the same thing as a private company. And it’s one of the reasons why, like, you know, why do public institutions and public schools have some challenges? Because they’re trying to navigate that thing, which always, by the way, means a trade-off in efficiency and other things,

REID:

and you give them some credit for that, because they’re trying to be this common space. And yes, they do have at least a lens into the kid, which is useful. This kid’s being abused? Well, then we should do something about that. But generally speaking, it’s kind of “enable the parents.” So that would be the second thing. And then the third one, because I’m deliberately trying to choose one that wouldn’t be top of Aza’s list, even though there’s a bunch of these that I agree with, is basically that I actually think the technology platforms are kind of the most important power points in the world.

REID:

And so part of the reason why, like, you know, at the beginning of this year I was talking about why I wanted AI to be American intelligence is there’s a set of values we aspire to as Americans. I don’t know if we’re doing that good of a job living them most recently, but we aspire to this, you know, kind of: hey, let’s give, you know, individuals freedom to kind of do great work, and have a live-and-let-live, you know, kind of policy when it comes to religious conflicts of values and other kinds of things. And I think that, that we want. And I think that, actually, in fact, part of the thing is— we live in a multipolar world now. It’s not just a U.S. thing.

REID:

And so how do we get those values into technology, you know, kind of setting a global standard? And that should be infecting— like, here is one of the things; it’s a little bit off the FCC/FTC question, but people say, “I would like a return to manufacturing industry and jobs in the U.S.” And, like, okay, your only possible way of doing that is AI and robotics. So what’s your industrial policy there? They’re like, “Oh, really?” And, like, yes, it’s a modern world, and so we should be doing that. I agree. But we should be, like, harnessing this great tech stuff we have with AI, and then trying to get that rebuilt would be an excellent, you know, kind of both middle-class and also strategic outcome for the country.

REID:

And that’s as a parallel for the kinds of things I’d want, you know, the FTC and the FCC to be thinking about as they’re setting policies and navigating.

AZA:

This gets into the very specific, but I think it’s an interesting example of what social media could be optimizing for that doesn’t require choosing, like, what’s true or not at the content level. And that is perception gap minimization. That is to say, if you ask Republicans to model Democrats, they have wildly inaccurate models. If you ask, like, what percentage of Democrats think that all police are bad, Republicans say it’s like 85 or 90%. In reality, it’s like less than 10%, something like that. And it’s the reverse the other way around. So we’re modeling each other wrong, and so we’re fighting not with the other side, but with our mirage of the other side.

AZA:

So imagine you just trained a model that said, all right, given a set of content… is the ability to model all the other sides going up or down? I think if you just optimize for accurately seeing across all divides— which, by the way, is a totally objective measure; you just ask that group what they believe, you ask other groups what they think that group believes— then you realize that the most harmful content— hate speech, disinformation, all that brain rot stuff— that all preys on a false sense of the other side. So here is an objective way without touching whether content is true or false to massively clean up social media.
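As a toy illustration of the measure Aza is describing (all numbers and question names below are invented): you compare what a group actually reports believing with what another group estimates it believes, and the average absolute difference is the perception gap a ranking objective could try to shrink.

```python
# Toy illustration of the perception gap (all numbers invented).
# For each question, compare what Group A actually reports believing with what
# Group B estimates Group A believes; the mean absolute difference is the gap.

def perception_gap(actual: dict[str, float], estimated: dict[str, float]) -> float:
    """Mean absolute gap, in percentage points, across shared questions."""
    questions = actual.keys() & estimated.keys()
    return sum(abs(actual[q] - estimated[q]) for q in questions) / len(questions)

# "What share of Democrats think all police are bad?" (illustrative values only)
dem_actual = {"all_police_are_bad": 10.0, "open_borders": 20.0}
gop_estimate_of_dems = {"all_police_are_bad": 85.0, "open_borders": 60.0}

print(perception_gap(dem_actual, gop_estimate_of_dems))  # 57.5 percentage points
```

Because both inputs come from simply asking each group, the metric stays survey-based and objective, which is the point Aza makes about not having to adjudicate truth at the content level.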

ARIA:

I love it. It goes so well with what you always say about scorecards, Reid. I’m not going to tell you, social media company, that this is good or this is bad, but I’m going to give you the scorecard and what we want you to hit, and you figure it out. And if you decide that, like, oh yeah, actually, promoting those vaccine conspiracies makes people distrust the other side in a way that’s not accurate—okay, well then you need to change your behavior. And so again, it’s actually sort of putting the agency in the company’s hands in a way that is so positive. Alright, so we’re going to do our traditional rapid fire very soon. But first we wanted to end on a lighter note, because we’ve talked about vampires and some heavy stuff. So I’m gonna ask you guys—

REID:

We need to bring in werewolves and zombies, but, you know. (laughs)

ARIA:

(laughs) Yeah, exactly. Exactly. I mean, I just watched Sinners, so I do have sort of the supernatural on the mind. So I’m gonna get a hot take from each of you, hopefully pretty quick. I have, let me see, four questions. So, Aza, we’ll start with you. What are the most outdated assumptions that are driving today’s AI decisions?

AZA:

I think the most outdated belief driving AI is that we can muddle through. That because we’ve always made it through in the past, like beating the Malthusian trap, we assume we’ll make it through this time. I don’t know what grade you, Reid, or Aria would give humanity as a scorecard for the Industrial Revolution. I’d say we got maybe a C minus stewarding that technology. Lots of good things came out of it, but also child labor, and nowhere on earth is it safe to drink rainwater because of forever chemicals, and we dropped global IQ by a billion points with lead. But we managed to make it through. I don’t think we can afford to get a C minus again with AI.

AZA:

I think that turns into an F for us.

ARIA:

Reid, what do you think are the most outdated assumptions driving today’s AI decisions?

REID:

I’m going to be a little bit more subtle and geeky. And by the way, I do think we need to get a much better grade, and I actually think AI can help us get that better grade. But I think the most outdated assumption is almost the opposite of what most people think. People still think it’s mostly a data game, and it’s turning much more into a compute game. Data still matters, but the saying goes that data is the new oil, et cetera. Actually, compute is the new oil. Data still matters, but it’s the compute layer that’s going to matter the most. That would be my quick answer on a very complicated set of topics.

ARIA:

Well, the next question, we’re giving you just one sentence to answer. So, Reid, I will start with you in one sentence. What is your advice to every AI builder right now?

REID:

Well, have a theory about how, in your engagement with your AI product, whether it’s a chatbot or something else, you will be elevating the agency and the human capabilities, but also broadly the compassion, wisdom, et cetera, of the people you’re serving. So, for example, at Inflection and Pi, “be kind,” modeling a kind interaction, is one very tangible output.

ARIA:

Fantastic. Aza, do you have one piece of advice?

AZA:

I would be very aware of how incentives eat intentions, because the technology you’re creating is incredibly powerful. And so if it gets picked up by a machine or a country whose values you don’t like, the things you invent will be used to undermine the things you actually care most about.

ARIA:

Fantastic. Reid, I’ll go to you first. What is the belief that you hold about AI that you think many of your peers would find controversial?

REID:

Well, a lot of my peers tend to be in the LLM religion, which is, you know, the one model to make everything work, whether it’s superintelligence or all the rest. And I obviously think we’ve done this amazing thing; we’ve discovered an amazing spellbook in the world with these LLMs and with scaling them. I tend to think that there will be multiple models, and that the actual unlock for AI and the human future will be combinations, a compute fabric of different kinds of models, not just LLMs. Now, it might be that LLMs are still, as it were, the runner of that compute fabric. It’s possible, but I also think it’s possible that they aren’t. And that probably gets the most, like, wait, are you one of those skeptics? Are you—

REID:

Do you not believe all the magic we’re doing? It’s like no, I believe there’s a lot of magic. I just think that this is kind of a big area and a blind spot.

ARIA:

Aza, same question. A belief that you have that most of your peers would find controversial.

AZA:

That AIs based on an objective function are not going to get us to the world we want. That is to say, whenever we just optimize for an objective function, we end up creating a paperclip maximizer in some domain. But nature doesn’t have an objective function. It’s an ecosystem that’s constantly moving. There isn’t just a static landscape that you’re optimizing over, climbing a hill. The landscape is always moving. It’s a much more complex thing. So if we really want AIs that can do more than confuse the finger for the moon and then keep giving us fingers, if we actually want human flourishing, ecosystem flourishing, that kind of thing, we’re going to have to move beyond the domain of AI that just optimizes an objective function.

ARIA:

Awesome. Let’s move to rapid fire. And Reid, I think your question is the first.

REID:

Indeed. Is there a movie, song, or book that fills you with optimism for the future?

AZA:

Hm. Really anything by Audrey Tang. Listening to her podcast, reading Plurality, she’s sort of the Yoda Buddha of technology. So 100% that. And then On Human Nature by E.O. Wilson. And finally, The Dawn of Everything by David Graeber, because it just shows how stuck we are in our current political and economic system and really opens your eyes to how many other ways of being there actually are.

ARIA:

Awesome. What is a question that you wish people would ask you more often?

AZA:

Oh, something about surfing or yoga.

ARIA:

(laughs) Awesome. Which are you better at, Aza? Surfing or yoga?

AZA:

I’m definitely better at yoga, because surfing is by far the hardest sport that I have ever done. But actually, there is a question that people ask me a lot that I don’t have a good answer to. And that is: after laying out my worldview, people almost inevitably ask, “but how do I help?” And I realize I don’t have a good answer, because to answer that question requires understanding who you are, what you’re good at, what you would like to be good at, what your resources are, what you’re currently working on. And I would love to have an answer so that when somebody says, how can I help?, there is something, maybe AI can help with it, that does that kind of sorting and helps people find their dharma within a larger purpose.

ARIA:

I couldn’t agree more. Everyone right now— forget people who say that everyone’s apathetic. Everyone is asking me what they can do right now, Aza, to your point, and I don’t have a good answer either. So let’s try to build one.

REID:

Well, I think a beginning is to learn and get in the game, right? Like, for example, start engaging with it and then have your voice be heard. You can’t have a perfect plan, but join some movements, rally to the flags that try to help. Alright, so where do you see progress or momentum outside of tech that inspires you?

AZA:

Well, I’m going to feel like a broken record, but outside of tech… actually, I was gonna start with all the deliberative democracy stuff, but we’ve already sort of talked about that. Blaise, ah, I’m going to say his last name wrong… Agüera y Arcas at Google. He and his team are doing some incredibly beautiful work that I’m finding a lot of hope in, because I laid out my worry that game theory is going to become obligate and we’re just going to get whatever the game theory says for the future of humanity. And that seems like a really terrible world I don’t want to live in. His work is on understanding how you model a situation with multiple agents: how do you actually get non-Nash-equilibrium solutions?

AZA:

And he’s discovering something. In order to solve the very hard problem of strategy in multi-agent reinforcement learning, where I have to model what you know, and you have to model what I know, and I now have to model what you know about what I know about what you know, and that’s just very hard, they’re discovering some new math. And it turns out you can start to answer this if you don’t just model with yourself outside the game board, but with yourself on the game board. You have to model yourself modeling other people. And what’s cool there is that suddenly non-Nash-equilibrium states are found. Not the worst of the prisoner’s dilemmas. You can find these new forms of collaboration. And I love this.

AZA:

It feels so profound because, first you have to inject the idea of ego and then transcend it. If you don’t have ego, you just find the Nash equilibrium. If you do have ego, you also find the Nash equilibrium. But if you do have ego and you can transcend it, you can get to these much better states. And that, to me, is very hopeful and very cool because I think of game theory as sort of like the ultimate thing that we’re going to have to beat as a species.
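
As a baseline for the Nash equilibrium trap Aza is contrasting against (this is not Agüera y Arcas’s method, just the standard textbook calculation), here is a minimal sketch that enumerates the pure-strategy Nash equilibria of a prisoner’s dilemma. The only one is mutual defection, even though mutual cooperation pays both players more; escaping that lock-in is what the work described above is after.

```python
# Minimal sketch: find the pure-strategy Nash equilibria of a standard prisoner's dilemma.
# Payoffs are (row player, column player); higher is better. Illustrative numbers only.
COOPERATE, DEFECT = "cooperate", "defect"
payoffs = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

def is_nash(row, col):
    # A Nash equilibrium: neither player can improve by unilaterally switching strategies.
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0] for alt in (COOPERATE, DEFECT))
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1] for alt in (COOPERATE, DEFECT))
    return row_ok and col_ok

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('defect', 'defect')]: mutual defection, despite cooperation paying more
```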

ARIA:

As always, Aza, our final question… can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years, and what is our first step to set off in that direction?

AZA:

This is sort of like the “what is possible if we could rearrange our incentives so we are both nourishing ourselves and nourishing all the things that we depend on.” Suddenly, I think, people don’t really look at their phones, because the world that we inhabit is just so rich and interesting and novel. We are consistently surrounded by the people who can help us learn the most, sort of in a developmental sense. The entire world is set up in a fiduciary way, where everything we trust is actually acting in our, our communities’, and our society’s best interest, and developmentally, understanding where we are and helping us gain whatever that next attainable self is. I think we’ll have made major, major progress towards solving diseases.

AZA:

We’ll have a deep understanding of cancer and I think we would have solved our ability to socially coordinate at scale without subjugating individuals. So it looks something like that. We will have solved the aligned collective intelligence problem and we’d be applying that to, like, getting to explore the universe.

ARIA:

Awesome.

REID:

Well, the universe, yeah… the universe outside and the universe inside. So, Aza, always a pleasure.

AZA:

Yeah. Thank you so much, Reid. So much, Aria.

REID:

Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.