This transcript is generated with the help of AI and is lightly edited for clarity.

ARIA:

Many people saw that Mark Zuckerberg recently doubled down on a vision in which users have AI companions who understand them in the way that their feed algorithms do. So Zuck noted that the average American has three friends or fewer, but has the capacity for 15. And that makes me sad as a human—if people only have three friends, but they want 15. I really believe in the loneliness epidemic and the idea that human connection is so critical. His solution is that personalized chatbots could fill that void. They could be embedded across Meta’s Facebook, Instagram, WhatsApp, and Ray-Ban Meta glasses, and this could be a way to address this problem of social connection. So let’s first talk about just the technology. What has to work—memory, voice, visuals—before these AI companions move from, you know, a cool demo, something we’re talking about, to something people actually interact with, like they would, you know, a friend on TikTok or Instagram?

REID:

Well, I think this question is important enough that I’m actually going to start with the distinction between companions and friends, and then we can get into the technology. Because I think it’s extremely important that people understand the difference between companions and friends. And I think it’s important that all social media platforms—you know, Facebook and everything Facebook owns, but also LinkedIn and TikTok and YouTube and everybody—actually understand that friendship is a two-directional relationship, whereas companionship, and many other kinds of interactions, are not necessarily two-directional. They can be, but they’re not necessarily. And I think that’s extremely important, because allowing people to fall into that misunderstanding is a subtle erosion of humanity, of human beings. And, by the way, there’s a lot of people who naturally would fall into it anyway because they’re saying, “Well, friends are people who do good things for me.”

REID:

So it’s the, “You praise me. You tell me I’m great. You’re there for me at 2AM.” Friendship is, “You’re there for me.” Right? And that’s a massive diminishment of your own character, of your own soul. Friendship is a two-directional thing. It’s not only: Are you there for me? But: Am I there for you? Now, you can have different theories about friendship. I mean, my own theory of friendship is: two people agree to help each other become the best versions of themselves. And I think that’s an extremely important thing. But, by the way, not only is it important that you’re receiving that help to be the best version of yourself; the fact that you are giving that help, so they can be the best version of themselves, is also part of what makes you better. The kind of thing that’s important in your own growth with friendship is you might show up at a lunch with a friend going, “I’m planning on talking about this really terrible week that I’ve had, and I’m really looking forward to seeing my friend because they’re going to help me with this,” and you sit down, and your friend’s like, “Oh yeah, my mother died yesterday.” And you’re like, “And now we’re going to talk about my friend.”

ARIA:

Right. Right.

REID:

And that is precisely what is so important—to be in friendship and to learn in friendship and that kind of thing is very, very important. And so I don’t think any AI tool today is capable of being a friend. And I think if it’s pretending to be a friend, you are actually harming the person in so doing, and you should not. And this is the reason why Inflection’s Pi, when you say, “Hey, you’re my best friend,” says, “No, no, I’m your companion. Hey, should we talk about your friends? Have you seen any of them? Can you see them?” Because helping you go out into your world of friends is, I think, an extremely important thing for companions to do. And, by the way, that doesn’t mean there isn’t an important role for AI companions for all people.

REID:

I think AI companions can play a very fruitful human role for people. But the important thing is, beyond that: what function are they playing? And it’s extremely important for these companions to not be deceptive about what role they’re playing. And I think that there’s a whole set of theory—now, by the way, once you get into the detail of that, you say, “Okay, we trained a companion to be there for Human Being Category X. We are trained for this category, and this is our theory of human nature, and this is our theory about how we contribute.” And that should both be, to some degree, pre-advertised, and also there at the drop of a hat when something strange might be coming. So, like, “I am trained to be a therapeutic companion, and oh, by the way, you’re going into a zone where I’m planning on selling you something. Well, I should be explicit about that.” So now this gets us to the—

ARIA:

Well, actually Reid, just one question about that. So I understand you’re like, okay, companions have many uses—therapy, et cetera—but they aren’t good as friends, to replace humans or even augment your, sort of, friend list. But what do we do? Because inevitably these will become people’s friends. There are gonna be some companies that are promoting AI friends. Is this just a public service campaign to tell people not to? Is this talking to the leaders of these AI companies? Is there government regulation? What do we do, given that you disagree with that stance?

REID:

Well, I think it’s incumbent upon, call it the experts, the influencers, and also, you know, the body politic, to speak out on this. Because the question would be—my sense would be that, at minimum, it’d be good to have an MPAA [Motion Picture Association of America] on this that kind of said, “Hey, here is where you have to be clear about what you’re doing. And you have to be upfront, before. And you have to have these intervention points.” And if you say, “Hey, I’m signing up for the MPAA—you know, movie thing, PG-13—then this is what I’m doing—”

ARIA:

Right. I know what I’m going to get.

REID:

Yeah. You know what you’re going to get. And I think that’s extremely important. And obviously if there was such a massive upswell of agreement with me on this, then it could enter into even a regulatory framework if someone were abusing it. But I think I would start with what you’re doing. And, like I said, I think there are people who disagree with me about what the definition of friendship is. Because you know, “Friendship is the person—is the entity that kisses my ass and tells me I’m great all the time. And that’s what friends are. My friends are people who kiss my ass.” And it’s like, “Okay. That could be your theory.” I will never be your friend. But fine. You know, this is the reason why, when I get into my theory of friendship—as you know—it’s actually, I think friendship is a skill as well. It’s not just something in nature. Friendship is things where you have duties of loyalty, but it’s actually loyalty to the better selves. It’s a set of different things that are not just a, you know, “Aria, let me tell you how magnificent you are. Did I tell you how wonderful you are today?”

ARIA:

This is okay, Reid. This sounds great to me.

REID:

And, by the way, sometimes that is the role of a friend, right? A friend shows up kind of dragging, and you’re like, “Ah, I should buck him up some.” Yes! Of course, that is sometimes the role. But if a friend comes in and has been a complete asshole to somebody, your role as a friend is not to say, “Oh yeah, that person, they suck.” It’s like, “Whoa, whoa, whoa. Hey, you know, your better self actually is better here,” right? And that is, in fact, part of the role of a friend. And people sometimes need to hear, “Hey, you know, you’re great. It’s wonderful,” and sometimes need to hear, “No, no, you should actually consider changing.” And that’s actually part of the role of friendship. Now, part of, I think, the broad space in companions—and I think this is one of the things that this whole new world of AI is going to make us need to be more sophisticated about, even as everyday human beings—is what are the different kinds of roles that people play in your life?

REID:

And like, for example, there are work friends, and then there are friends that you might talk with about, like, “Oh, I’m having difficulty with my life partner or my child.” You know, one of the reasons why therapists are different than friends—and yes, friends can help with that—is that friends are just people, whereas a therapist you can go talk to about, like, “Oh, I’m having these really self-abnegating thoughts.” You can go into a therapist and say, “I’m having fantasies about becoming a cannibal,” right, and your therapist can talk to you about that. This is, I think, the kind of thing that is—the reason why, when we’re going to be training, I think, this literal pantheon, this panoply of different kinds of AI companions, it’s, well, what theory of human nature, of human being, are they trained on? What theory of the human good are they trained for? And, explicitly, where are they trained to be 100% on your side when you’re interacting with them? As opposed to saying, “No, no, no, abandon your human friends. No, no, you don’t need any other human friends. You only need me. Because me, I’m going to be selling you stuff, and I’m going to be draining your time, and I’m going to be putting ads in front of you. So no, no, abandon your human friends. Talk only to me.” I think that’s a degradation of the quality, of the elevation, of human life. And that should not be what it’s doing. What’s the theory? And it has to be explicit about this. And I think this will be very important to do. And I think we as a market should demand it.

REID:

We as an industry—a la the MPAA—should standardize around it. And if there’s confusion on this, I think we as a government should say, “Hey, look, if you’re not stepping up to this, we should do that.” Because I think that’s—this is a super important thing. Now let me get to a nuance that most people have not really tracked here, which is, you know, part of the Wild Wild West of the internet is built on Section 230. But Section 230 protects human beings. And it protects the technologists by saying, “Hey, I’m facilitating—when a human being gets on and starts saying anti-vax things, that’s the human being’s responsibility, not the platform’s responsibility.” That’s Section 230—and we could modify it some, and so forth. But an AI agent is not a human being.

REID:

It’s not protected under Section 230. So you can tell that we haven’t gotten to the point of, “What are our protections going to be around AI agents?” Because at the moment it’s all on the tech company that’s providing it. Right? And, by the way, I think we want to evolve that. So for example, one of the things that I think will really impede a medical companion is, well, all this kind of medical liability stuff. And actually, in fact, we want to have medical companions. Because medical companions are there 24/7. At 2AM on a Saturday morning, my choice is to go to the hospital, if I have access to one? Well, okay, it’d be great to start with the companion. Because, by the way, you talk to the companion, and the companion says, “Get thee to a hospital right away. Doesn’t matter if it’s a three-hour drive, go,” you know, et cetera. All of that’s very good to have, but we’ll have to sort out all of the liability issues and kind of safe harbor, and all the other things around that.

ARIA:

It’s about transparency, accountability—like, we need to know what we’re getting ourselves into. 

REID:

Yes.

ARIA:

And so, when you think of the technology, do you think we’re there right now? Or what needs to progress until we get there for even AI companions, you know, if not AI friends?

REID:

I think we’re sufficiently there for a panoply of companions, if just the training and meta-prompt guardrails were kind of put in the right way, right? It’s the, “Hey, I’m trained for this,” and, “here’s what you should expect from me relative to your good.” It’s going to, with rapidity, get better in months and months and months. It’ll get better with memory and knowledge of you and what really helps you, and, of course, the way that it can interact with you, and being much more emotional and having judiciousness. And having, for example, an agent who’s cross-checking it. So when it says, you know, “Hey, I think you should get a second opinion,” it goes, “Well, yeah, you can get a second opinion. But, by the way, here’s all the things that really matter in this, and in this case, a second opinion could be good, but your doctor’s giving you pretty good mainstream advice here.”

ARIA:

So Reid, you brought up a great point in your overview of friendships versus companions: that you might have a different answer, or we might have different considerations, when it comes to young people under the age of 18, and also senior citizens. So when it comes to young people, Common Sense Media just came out with a report, and they have deemed companions unsafe for teens and younger—anyone under the age of 18. Yet at the same time, Google just announced its plan to roll out its AI chatbot, Gemini, for kids under 13. So obviously there’s some nuance here: What does a chatbot mean? What does a companion mean? What exactly are we talking about when this AI is interacting with young people? But in general, sort of, what is your take? What are the pros and cons here? What should we be thinking about as we’re rolling out these companions to young people under 18?

REID:

Well, so the very first thing I’m going to do—which will blow everyone’s mind—is to give a prediction that I’m nearly certain about. And it’s not just, of course, because I’m giving a prediction with near certainty. That itself is, as you know, highly unique. But also because I am quite certain about this, and we’re about to be in it, which is: In some small N number of years, it will start becoming typical that when your child is born, you also have an AI companion for them that goes with them through their entire life. Or certainly their entire childhood, but probably their entire life. And what does that mean, to have that companion doing that? How is that companion great for elevating the kid? Because obviously it can be a tutor and a joyful explorer of the world and all the rest, and there can be all of this really great stuff.

REID:

But what does that mean with regard to the parenting relationship? And how does that companion relate to the parent? Because this parent wants a child to be raised Catholic, and that parent wants the child to be raised as a loyal New Yorker, you know, and all of the rest of these things. And what does that mean? And for example, of course, one of the things will be, the parent may select, “I want the agent to tell the child, ‘This is completely confidential,’ and, ‘Oh, by the way, parent, we just had this conversation.’” So where does this all play out? It is going to be really, really interesting and challenging. And very, very legitimately, the parent’s going to want to say, “I am responsible for the child, so the companion is something that I have a very strong voice in.”

REID:

And, by the way, we might even, you know, as a society say, “Well, yeah, you have a strong voice from age zero to X that’s kind of unilateral to you. And then from age X to Y, new things apply that have some limitation.” It’s part of the reason why we have social workers. Like, if someone’s theory of parenting is “beat the child,” we as a modern society say, “No, not so much. That’s not allowed.” Right? And what is that nuance? And, for example, if a child’s talking to their companion saying—and, you know, it’s usually men, of course, that are physically abusive, but not always—“Oh, my parent is beating me, and da da da,” does the companion have a job to call social work and say, “Wait, I have a problem. We’ve got to do something”?

REID:

It’s this really tangled thicket that’s going to cause us to confront a ton of issues. Like, seriously important ones. Now, obviously, the way that tech companies are going to start is to try to start with just a very narrow scope, like, “I’m going to try to stay out of the parenting lane altogether. I’m going to try to take no responsibility. I’m going to try to just be there as an informal Wikipedia, generally say positive things, and try to help you.” Like, if you go, “I’m really lonely,” it’s, “Oh, let’s try to help you not be lonely,” and so forth. But we’re going to have all of these issues of—because now all of a sudden, tangibly, you have an agent that’s in direct interaction with a kid—well, who else is that agent accountable to?

REID:

Accountable to the parents? Accountable to the school? Accountable to the society? And look, you know, we already have troubles with public schools. Right? Like, are you allowed to teach scientific evolution, or where does religion play a role in the schools? I mean, we have this craziness, you know, in the U.S. of trying to ban certain kinds of textbooks and other kinds of things. I was like, “Well, this is going to make that a million X!” I will confess, it is such a tangled thicket that, you know, when I started LinkedIn, it was like, “No, no, no, 18 or older. Where society judges people to be adults, that’s where I’m going to play.” Because I precisely think it’s nuanced. This is going to be very challenging ground. Because there are huge things that we can’t even agree on that I think are relatively straightforward.

ARIA:

Absolutely. I mean, Reid, I did not think you were going to say that prediction. And I’m excited to see how it plays out. So here we are.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.