MAJA:
How do you even develop machines that are going to be really helpful to a certain population? For example, with these robots for kids with autism, therapists were worried; they said, “you’re trying to replace therapists.” Not at all. But then they actually saw it from the other side. They said, “oh my goodness, this is great, I spend one or two hours a day with a particular child and then I want them to practice things at home.” And so, isn’t it grand if you can have this robot that ends up playfully doing this stuff? So that’s the vision. The vision is that it’s complementing the ecosystem of human support, which is never perfect.

REID:
Hi, I’m Reid Hoffman.

ARIA:
And I’m Aria Finger.

REID:
We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:
In our first season, we spoke with visionaries across many fields, from climate science to criminal justice and from entertainment to education. For this special mini-series, we’re speaking with expert builders and skilled users of artificial intelligence. They use hardware, software, and their own creativity to help individuals use AI to better their personal everyday lives.

REID:
These conversations also feature another kind of guest: AI, whether it’s Inflection’s Pi or OpenAI’s GPT-4. Each episode will include an AI-generated element to spark discussion. You can find these additions down in the show notes.

ARIA:
In each episode, we seek out the brightest version of the future and learn what it takes to get there.

REID:
This is Possible.
ARIA:
So excited for the second episode of our summer arc on AI, especially because we’ve been talking so much about large language models. People used to talk about robotics, but now everyone’s only talking about software.

Not true. The guest today is an incredible researcher and practitioner who is talking about AI and hardware, which means robotics.

REID:
While, of course, we see in the so-called “digital world” and so forth a rapid evolution in the large language models, ultimately, a lot of our lives are embedded and a lot of the things we need to learn, need to do, are embedded. And part of what drove our selection of today’s guest, in particular, is the understanding that embedded doesn’t mean, like, “I am robot.” [laugh] Right? But actually something that is there with us, and that, in fact, focuses on things like social interaction and human assistance. Everyone talks about LLMs first, a little about robots, and then no one’s talking about this really important part of robotics that, I think, is an important theme. It’s not all of robotics, but it’s a super important theme to add strongly in the discourse. And that’s part of the reason why we couldn’t be more delighted to talk to Maja.

Maja Mataric is a renowned computer scientist, roboticist and professor at the USC Viterbi School of Engineering. She’s founding director of the USC Robotics and Autonomous Systems Center and Co-director of the USC Robotics Research Lab.

ARIA:
Maja, we are so thrilled to have you here today. You are known as a pioneer in the field of robotics. Can you talk about your earliest memory of seeing or engaging with a robot and what about that moment made you want to pursue robotics?

MAJA:
Well, my path to robotics is actually, I would say, unusual in the sense that I’m not one of those people that tinkered in the basement. We didn’t have basements. I come from the former Yugoslavia. We lived in a high-rise, so there was no option for a basement. Instead, I came upon the path because when I was in college, AI just started to emerge. It was at that time in what was called an “AI winter.” But I thought it was really interesting. But for some reason, even from then, from my college days, which now seems like it was in the dark ages, what I was really interested in was behavior in the real world.

And so I was interested in psychology and people, but I was also interested in AI, and if you think about bringing together a lot of aspects of AI and the real world and people, well, that’s really robotics. Because robotics is AI in the physical world around us. And so that’s how I came upon it and I read up on it and I really read books and stuff and I thought they were super boring, I’ll be very honest about that. I did not probably see my first robot, really, until I was in graduate school and looking around at possible labs. And it was the robotics lab of my former advisor that caught my eye because it had the funnest robots. So, that’s why. So a very unusual path. No early dreams of this at all.

ARIA:
No, but I love it. Like you said, you were interested in the real world, you were interested in how to help people, you were interested in having fun. I think we shouldn’t forget that those are things that can lead to amazing careers in STEM and computer science and AI and all the things.

MAJA:
In fact, what I would love to tell people is that there is no standard path. And even though, I think, often in the media in particular, we get this kind of typical path showcased, like, “oh, you know, I’ve been dreaming about this since I was a child,” in fact, I think it’s perfectly lovely to dream of many things and arrive wherever you arrive and keep moving. I think it’s not at all necessary to have thought about this before.

My dad was an engineer, so I have to say probably I had some early influences, and it turns out I’m really an engineer at heart. But how that was going to manifest itself, I didn’t know, and it’s still unfolding. And that’s a great thing about just learning continually. So no strict paths, people, come on, let’s keep it open.

ARIA:
Love it.

REID:
Yeah, indeed. And, you know, part of that is to think about the actual, interesting space of robotics, because probably most of our listeners are open to robots at home and workspaces, but, you know, not primed to the full range and some of the stuff that you really do, which we’ll go into. Pop culture shows us doomsday scenarios and killer robots, going all the way back to the eponymous Terminator. So, why do you think people are more fearful of physical robots than non-physical AI? How does that affect how you do your research and your work, and what kinds of things you’re doing?

MAJA:
You know, it’s really interesting that you would say that people are more fearful of physical robots because all studies show that when you take people who have not interacted with robots and you put them next to real robots, people are curious. We project our internal expectations onto machines: if the machine behaves in any even remotely lifelike way—no matter what it looks like, it may even be just a boring box—if it moves in a particular way, we’re going to be interested, we’re going to be engaged, we’re going to expect it to be intelligent, emotional, intentional. And so, in fact, in reality, people aren’t scared.

Now, of course, in the media, there have been a lot of portrayals. I used to just call it so boring. And I’ve worked with really fun folks in the entertainment industry and I would always start out by saying, “please don’t lead with killer robots. So boring. Come on, be interesting.” So I think the media will tell us that people are afraid of robots, but in reality, when people are confronted with a physical machine, we’re actually quite interested. We are a very social species and we’re social with machines as well. So, that is the truth.

ARIA:
So that actually tees me up for my next question so well. You pioneered a subfield of robotics called socially assistive robotics. Can you tell us about that space and how does it fit into the broader landscape of robotics?

MAJA:
I’m always happy to talk about my field. [laugh] So of course.

So, when I started out in robotics, of course, when you start out you work on stuff that’s out there. And so my first work was on getting a robot to navigate—and people are still working on robots navigating, although we’re pretty darn good at it, we have autonomous cars, so the field has moved along. And then I worked on teams of robots because I always liked social interaction. I got a team of little [robots]—we called it a “nerd herd”—and I had 20 because I was very fortunate that my advisor could make that happen. And so I was herding these robots, it was crazy. And it was really only after I had kids, or at least my first kid, that I started to think about not only what I’m doing, which is fun and intellectually curious, but really why? You know, kids will do that, or the real world and life will do that. And when I started thinking about why am I doing this, I wanted to have a really good answer when my kid asked me, not just what do you do mommy, but why do you do it? And I realized that “why” had to be more than, “I get a lot of papers published and people think it’s nice.” But who are these people?

I want to be able to tell my kids that I make robots to help people. And so I then really started to look at how can we actually make robots to help people? And that’s hard. And I looked at rehabilitation robotics, I looked at people who needed physical help right after an injury or a disability. And that was really hard. It was a field that existed, but I really wanted to do something that people would have in their lives continually, because even back then I had the sense that people are on different paths all the time and we need help occasionally, if not all the time. Different kinds of help. And I don’t mean physical help, I mean really sort of emotional support. As I was working through that, I realized – and this is 20 years ago – that AI and robotics were about ready to get to a point where you can create machines that will be with you, even in your home, every day. Not rolling around your home, not manipulating objects in your home; we’re still not there, even now, 20 years later. But we can have machines that will be there and support you on your journey with the specific thing you need.

Whether you are trying to teach a child with autism social skills like eye gaze, whether you’re trying to help an Alzheimer’s patient just exercise their brain and talk to someone, or a stroke patient to just do the boring daily rehab exercises. That’s the niche that I wanted to get into. And it’s great because it’s very easy for me now to say, “hey, mommy makes robots to help people.” So I got there.

REID:
Well, yes, for sure. But this is a great setup to one of the things I think is really a key part of your work, and that made us really excited to have this podcast and talk to you, which is: how is that helping people—that interaction, the tracking of eye gaze and so forth—how does that help us become more human? Right? Obviously, in that helping, there’s help with tasks and with navigating the home environment or other kinds of things, but what’s also the way that it extends our humanity through it?

MAJA:
I am so glad you asked. So, first of all, there’s a premise I arrived at by talking to a lot of stroke patients and their families, and to families with kids with autism; you know, we spent a month and more in homes with kids with autism collecting data, learning from what people said they needed. There was a very loud and clear message from everyone that people wanted to have a sense of purpose. They wanted to have a sense of ability and autonomy. Like, you know, “I am who I am because of all of these things that I can do.” It’s difficult to know who you are if you don’t have a sense of anything that you can do and feel good about doing. The thing is that when people are empowered, they also start to think outward. And so we have had robots, for example, that just behave empathetically to the user and, in that way, make the user more empathetic. And the interesting positive cycle that gets created there is, you know, studies show that if you’re behaving empathetically, you are actually healthier. You get a health benefit from being a nice person. So, even if you’re totally selfish and don’t even care about anyone else, you should still be nice because it benefits you. But, as it happens, it also benefits other people. We’re wired to be these social, helpful, empathetic entities.

And so when we create these machines, we’re very conscious that we’re not just trying to help one person to the exclusion of others, but we’re saying, “look, the robot is modeling good behavior for you, positive behavior, helpful behavior that empowers you, which makes you the kind of person who does that in return.” I know it sounds, like, earthy-crunchy, but we really base this completely on the literature: basically, we read a lot of behavioral economics, behavioral science, social science, and neuroscience, so that we can understand what works here in the human head and what makes us humans feel better and behave better. And then we implement that on our robots.

ARIA:
That’s fantastic. I love the conversation about empathy and it’s interesting how it sort of spans hardware and software. Everyone always says, to your point, the best engineers and the best product managers are empathetic. They understand their users, the consumers. And so, when we’re thinking about empathy with the robots, like, if these are empathetic acting robots, what relationship should the robots have with empathy? Should they feel it? Should they understand it? Should they recognize it? How does that work on the robotic side? In addition to, of course, it helping humans to be more empathetic themselves.

MAJA:
So, empathy is a really interesting challenge because, first of all, if you look at the science of empathy, there’s disagreement there already, in the neuroscience of empathy and the cognitive science of empathy.

Some people will say, “empathy is what you feel.” Right? So, I feel empathetic towards you. Other people, like Simon Baron-Cohen, who is a well-known neuroscientist from Europe, have written a lot about empathy being actually what you do. So it’s about the behavior, not about what you feel. That’s really important, because that means if we believe that empathy is how we behave, then robots can be empathetic. Because robots cannot feel. There is no feeling. Why can’t they feel? They don’t feel because they don’t have the mush that we have. You know, they don’t have neurotransmitters and they don’t have hormones and they don’t have all these other things that make us feeling creatures.

We’ve done a bunch of studies where we’ve created robots that are empathetic, and that’s very easy. Others have done this as well, right? You can basically have a robot that says things like, “oh, I’m so sorry you feel that way,” and, “I know how you feel,” and, “I’ve been through that as well.” So, you can make a robot appear and behave empathetically and therefore, according to Baron-Cohen, also be empathetic. What’s more interesting to me is not how a robot should be empathetic, but how a robot can get a person to be empathetic. Because, as we know from science, again, if the person is empathetic towards themselves, self-accepting, and towards others, their health outcomes will be better, they will be a better person to be around. So, it’s altogether a good thing. So we’ve been really working on what the robot should do to make you empathetic. And it’s really interesting.

We’ve done some studies with relatively pathetic robots, like, really needy and like, “oh, I’m failing! Oh I’m failing again!” It’s very funny. And it’s amazing, people are just really helpful. Frankly, I was surprised to find out that a snarky, funny robot was very unpopular and that just a “blah, I’ve failed again, my cameras are not receiving information” was better received, and then the really needy “oh no, I can’t, the force is not with me, I’m banging my head against the wall,” that robot got the most help for the longest time. So, I don’t want to over-generalize it; I think if that were in your home, you might grow tired of it. But if you’re just encountering a robot in the real world and it’s needy, it turns out that it is actually better received than a very transactional, cold robot. Or a really, like, “oh, well I guess I lost my leg, but whatever.” Right? So it’s very interesting how empathy is very—I’m just going to use Star Wars terminology: it’s strong within us. So, it can be, and it is something that—[laugh] sorry—it’s something that we can evoke and we can reinforce.

REID:
Well, I love it. And I think that part of the notion of the human condition, the human elevation, is we also like to feel wanted, to feel needed, to feel that we’re important. And that’s part of the work you’ve done and are on the path to. One of the things that I think is also super interesting here: obviously, a huge amount of the public discourse is around the software side of AI right now, the models running on A100s and H100s, but what are the things that are particularly important for the hardware side to contribute?

MAJA:
Well, thank you for asking about the hardware side because we are right now, in the world, so embroiled in AI that is purely software-side that we’re kind of pushing this aside; robotics is almost still fringe. But the reality is it’s the physical embodiment, it’s the physical manifestation of intelligence in the world. And, as we know, how people appear and how we physically behave has a huge impact on how we relate to one another. Every little bit about how we design and build a robot is important and that’s why it’s so hard. First of all, you know, there’s the basic stuff about safety and I don’t even want to talk about that because we already know robots must be safe, okay? I mean, that’s a given, right? And that’s still hard. And so that’s why we still don’t have robots all over the place, but we’re getting there. And that’s why the robots that my lab works with are small and safe and often soft. Because we want to make sure safety is not even an issue, but after safety, now begins the hard part.

So, what should it look like? How tall should it be? Turns out just how big the robot is has a huge impact on how you psychologically perceive it and how you respond to it physiologically, without even realizing it. If it’s more than about three quarters of your size, it’s going to impact you physically differently. And so you’ll be reluctant at a certain level. You can accustom yourself to it, but you’ll be more reluctant than if it’s smaller. How does it look? Does it look like an animal? Does it look like a biologically existing creature? Does it look like a human or does it look like nothing at all that’s really familiar to you? This is incredibly important. I’m engaged in conversations both with companies and, obviously, in research about, you know, “what would you want a robot to look like?” And it really depends, right? If you start talking about humanoid robots, then you have this huge load of human expectations. If you build something that people really have to get used to and they have to get over certain things, well, you know, that’s not good design. Good design has the metaphors that evoke just the right expectations, ones that you lean into and really enjoy. So that’s why the design is really important and it’s very wide open.

So now it has an open field for it to disappoint you or engage you and endear you. And that’s a lot of expectations of that robot, right? Wow. But if you do it right…. And so my favorite example is WALL-E. WALL-E is just one of my favorite robotics movies. Actually, it is my favorite robotics movie. Because here’s the thing that looks like – “what is that? Like, some kind of old, rusty, mechanical garbage collector?” But it’s so completely endearing and you just have got to love it. Now, that is good design. Whereas, in comparison, in the same movie you have Eve, which is egg-like, so, eggs, we get that that’s sort of biological, cool. It’s also kind of like Apple, white, plasticky. Nothing warm about Eve. Nothing. Yeah, she’s very out there, “ooh,” high-tech, but you don’t want to hug her. But you want to hug WALL-E, and you know he smells, but you still want to hug him. So that’s what I mean by good design, and those are, you know, creative people who drew animation. But in the end, when we start designing robots that will really be wonderful for people, that’s the kind of creativity that we need.

ARIA:
No, I mean, it’s so interesting, like you said, it’s obvious, but still so critical. Like, what is the design of this robot going to be?

Let’s go down a level. If you’re an average listener of the Possible podcast, how could their lives look different if they were using socially-assisted robots on a daily basis?

MAJA:
Socially-assistive.

ARIA:
Oh, socially-assistive robots on a daily basis.

MAJA:
Yeah, just to explain that: the idea is that we were looking at assistive robots, assistive robots to help people, usually people who really need help, right? So I don’t mean like, “yeah, it’s going to go fetch you a beer.” You should do that yourself. But socially-assistive means that they’re assistive, they’re helping you, they’re assisting you, and they’re doing that socially, through social interaction rather than physically.

For example, if you’ve had a stroke and you can’t reach something, a physically assistive robot reaches it for you; what we would like to say instead is, like, “if at all possible, could we get you to reach it yourself?” That’s what we’d like: “I’m going to give you grit and support and make you feel better.” So here’s the idea. Imagine that you’re getting up in the morning and you’ll have some challenge. Maybe you’ve had a stroke and so a part of your body is disabled, maybe it’s your dominant arm, right? So, you’re supposed to exercise, you’re supposed to do things like reach for your coffee maker with your stroke-affected limb. Well, that’s going to be really inefficient and it’s going to look like crap and it’s going to demoralize you every time because, you know, you’d just like to be the self that you were before this stroke happened. Now, maybe you’re fortunate and you have amazing people in your life 24/7 who are going to be like, “you can do it!” And, by the way, they’re not going to enable you by reaching for that instead of you, because if everybody does it for you, you will forever be disabled. And if you use the other arm, you’ll forever be disabled. So you have to fight your brain to get better. Who is going to be there to constantly support you and say, sometimes, “that is fantastic, great job,” or, “you dropped it, but you know what, you tried. That’s better than not trying,” and sometimes when necessary will say, “okay, really, are we going to sit now again? Come on, get off the chair, come on, let’s go reach for that thing.” So you need a coach, and people in your life are not necessarily always available or able. They have their own stuff that they have to deal with. So the idea is we want to have this companion robot that’s going to help you through what is these days called the journey. There’s technology that can help you, you know, get you out the door.

Now, we’ve also worked with kids with autism and they would want to understand things like, “is this person even interested in talking to me? How can I tell?” So they can have this companion robot to talk them through it, to practice social gaze, to practice, like, “okay, look at me, but now don’t keep looking at me because that’s creepy. Occasionally look away.” “Where do you look away?” “Doesn’t matter. Just look somewhere and then look back.” And they can practice. Because, unfortunately, other kids are not going to practice with them. So, you can imagine, elderly people with Alzheimer’s, they’re lonely, they’re isolated, they could be staring at some screen, whatever, and they can look at pictures of their family, but for how many hours a day? So this is a thing that can talk to them and talk to them about their family and always be there, always be pleasant, always be happy, never get tired. It could tell jokes, just the right vintage of jokes. We’ve done that.

The point is, in everyone’s lives there are many challenges. And there’s a lot of expectation that other people will help us with these challenges. Well, every person has their own challenges. So here, the idea was that we want to create these socio-emotional but physically embodied companions. They’re not lovers, they’re not friends, in the sense that “this is not your friend,” although people sometimes perceive them as friends. So they fill a certain niche in people’s lives, but they’re not replacing therapists. They’re not replacing friends. They’re not replacing teachers. Because they can’t. That’s not their purpose.

ARIA:
Right, absolutely. It’s so interesting: last week we spoke to Mustafa Suleyman, who is building a personal intelligence through software rather than hardware, and talking about AI and how, to your point, this is not a replacement for humans; this is for the millions of people who don’t have someone there for them 24 hours a day.

You hit on this a little bit, but can you talk about why it’s important for the physical embodiment of a robot to be there as opposed to just a screen or just a speaker? What is the importance of that actual robot versus just the software?

MAJA:
There’s been a very long debate, by people who are not in robotics and also people who are not in neuroscience, about why we need physically embodied companions and social partners. And if anything is going to demonstrate to us why, the aftereffects of the quarantine and the pandemic will.

So we will see effects in early child development, in teen development, in adult isolation. We’re seeing all of that. We can be fully connected through various social media, we can be fully connected through screens and video conferencing, and yet what you’re seeing is sharply increased rates of anxiety, depression, isolation, and child development delays. Kids who missed one or two years of being around their peers, their social development is now two years delayed. This is happening because we humans evolved to be social creatures.

So we need, from day one, we need to look—if we’re sighted—we need to look at human faces, we need to see the smiles, we need to see the crinkled eyes, the Duchenne smile. We need that for feedback. And you know, back in the sixties, there were these wonderful experiments—or maybe slightly cruel and sad experiments—but they took baby monkeys and they put them on artificial monkey mothers, some that had wonderful fur on them and others that were just metal, but they had a bottle of milk. And what did the baby monkeys prefer? If we were just transactional creatures, we’d just go for the milk. But no, all the monkeys went to the furry mother, even if they were starving. We humans are fundamentally social creatures. We need social support around us. And by that I mean really around us in the physical world. And there have been, I would say, thousands of studies in science to show this. So we actually did a meta review. This is something we do in science, right? We look at all the studies and we do a summary of it all statistically. And they show, basically, in side-by-side comparisons, that if I take a human of any age and compare a screen-based interaction with a real robot interaction in the real world, the real robot interaction is going to make them learn more, retain the information longer, and report enjoying it more.

ARIA:
I have a friend who’s an occupational therapist at a local elementary school, and I could imagine, you know, so much augmentation, amplification, so many positive things happening if she had a robot in her classroom to help with some of her students.

MAJA:
We’ve actually found that, after the initial conversations about, for example, having these robots for kids with autism, therapists were worried; they said, “you’re trying to replace therapists.” Not at all. But then they actually saw it from the other side. They said, “oh my goodness, this is great, I spend one or two hours a day with a particular child and then I want them to practice things at home.” And so the parents have to do it, and parents have enough going on; parents don’t actually want to practice therapy with their kids, they’d like to just be parents, but they don’t have the luxury of just being parents, right? So now they also have to be therapists. So isn’t it grand if you can have this robot that ends up playfully doing this stuff? And then parents can be parents. So that’s the vision. The vision is that it’s complementing the ecosystem of human support, which is never perfect.

ARIA:
As a parent, I will second that notion.

So this brings us to our story generated by ChatGPT. It’s about a family with three generations under one roof, and it spotlights three family members: there’s Mr. Johnson, an 80-year-old grandfather who needs help with his medication; Lisa, a 40-year-old mom who needs support with preparing to repair a roof; and an eight-year-old, Ethan, who needs assistance with his homework and getting to soccer practice. And so this robot steps in to support each of their needs. And so, first of all, if you hate this story, that’s fine, I didn’t write it. You can critique it. But I want to ask, like, what did you see in this story that you were like, “oh, that’s interesting, that could happen, that could be in our future,” or what was wrong? What seemed promising, what seemed way off? Like, what are your reflections?

MAJA:
Actually, I love the story. In fact, it’s interesting and in some ways not surprising that this vision comes out, because among other things, I was part of a recent grant proposal in which we had a very similar vision except it wasn’t necessarily one robot, because the state of the art now is that you will not, at any time really soon, have a robot that can do many physical things, but it can certainly, in terms of intelligence, talk to various different people. As long as it can uniquely recognize you, then it will be able to help you and talk to you. And so I think that’s very likely and very realistic and very needed. And so we wrote a grant proposal in which we literally came up with a vision of a family, which is very much like in the story.

So, you know, you have the busy mom who’s trying to take care of her elderly parents but also her kids. And the kids are maybe having one of these very, very common issues now, right? They might be suffering from anxiety, all this kind of stuff. Or bullying or something like that. So I would say that the story is spot on. And then, so, what is the solution? The idea of having one robot? You see that in the movies, you see that in a lot of literature, it makes sense. It’s the butler notion that people had, or a maid notion. I’m not a huge fan of those because it puts a tremendous burden on this one entity. It may be a robot, but think about it. What if three of us are in the house at the same time and we all need something? Now who gets to go first, right? So there’s actually a whole debate about it: are we going to ultimately be creating this other race of servants, right? And I prefer to think of it as your buddy. It seems much more likely to me that the kid will have a buddy that they can play with and what the mom will have is something else, maybe an assistant identity. So I think there are different versions of who fills what roles. But what does the future look like? Maybe people will be filling these roles and then robots will be doing something else. I think it’s just really important to keep thinking about this and not just drill down one path because, “oh gee, we can!” without really considering the possible outcomes, you know, a bit longer term.

REID:
I love all that. I mean, this is part of the reason why I wrote Impromptu: human amplification and such. I’m actually quite bullish on the fact that we’ll always figure out things to do, because even if the robot was doing all the manufacturing, we’ll go out and play pickleball or do other kinds of things, because of that human-to-human connection. It’s the exact—like, go touch the fur. Also why Mustafa and I—I’m very curious if you ever play with Pi, what your take is, because it’s the same thing, it’s like, how does it help you in your life versus draw you away from it?

And I think this also gets to—I completely agree with the whole embedded, keep-you-engaged-in-your-life idea. Have you also thought about this in VR and AR? Because, you know, there’s been an ongoing discussion of the metaverse and other kinds of things. I’m curious if you’ve thought about that environment as well as the real-world one, and what your reflections are between those.

MAJA:
Indeed. You know, I made this point early on that there’s a big difference in how our brains perceive interacting with a screen versus interacting in the real world. Now, if you go into virtual reality and the environment and immersion are really well done, then you can almost trick your brain. So your brain feels like you really are interacting almost in a physical world. You don’t have touch, which is important. Touch is incredibly important. I mean, the lack of physical touch is a part of the loneliness epidemic that we have, actually. So our brains are really wired for this physical experience. We want touch, we want smell. But I do think the metaverse is coming no matter what; how soon it’ll come and what economic bumps will happen along the road… whatever.

The issue is that our physical bodies are not just vessels, they are how we experience the world. The science of embodiment shows us that that’s why things like mindfulness work so well, because when you’re in the moment and you’re experiencing things, you’re happier. Oh, lo and behold, how come? Well, that’s what we’re built to do, we’re built to do experiences in this physical world. And so that’s why I think there are a lot of wonderful things you can do in a metaverse, virtual reality.

The things that excite me about it are, for example, you could teach tolerance, right? If you want to understand what it’s like to be an elderly person, you can be put in an environment that’s immersive and maybe put on a suit and you can really feel like you’re an 85-year-old. And that is going to, in 30 minutes, make you understand, and possibly, let’s say, be a better engineer for people who are 85 than any amount of books you can read and so on. I’m excited about that. People have done virtual reality training for understanding the climate crisis and just, like, putting yourself in various places and experiencing it. Fantastic, right? So we could really expand our space of experiences and yes, we can democratize experience, where now everyone can go to Everest, right? But the downside is if everyone can go to Everest in VR and no one goes, well, maybe that’s better for Everest, right? Like, let’s protect Everest. But some other things, if you never get out of your house….

So, it’s actually really exciting, there’s a whole new field that just arose in the last two years, and I just know because one of my PhD students was literally doing the work, Tom Groechel, so that’s why I know – this is how we luck out, we professors, we seem to know everything, but really we just have a lot of smart students. So there’s a new field called VMHRI and it really stands for virtual and mixed reality for human-robot interaction—or human-machine interaction. So it’s really interesting, right?

What I love is augmented reality. So augmented reality is the idea that you can put on lightweight goggles, like glasses. Our perception of the world is augmented and it could be augmented in a shared way. So you and I can now have a shared world and we can be in this world, but also this world is much more interesting because we have the shared world between us. Now, that’s exactly what’s happening with humans and robots. A human user can wear these lightweight glasses and can see things that the robot can also see, and the beauty of that is we humans experience the world in much richer ways than robots do, but when we create the shared world, this mixed reality world, now the robot can experience so many more things.

For example, we had kids playing with physical robots in a shared world in which there were these floating code blocks and they were coding, they were moving the blocks around them, they were pushing them and shoving them and throwing them. And can you imagine? That’s pretty fun. Like, how fun is coding? Usually not this fun. These kids were having a great time, they were fifth graders, they were having great fun. But the most important thing is that later, we tested them on their coding skills and they were way better coders. By playing, by integrating play and freedom in what they were doing, they were not afraid, they were more creative, they were more curious, they learned way more. And so this is just an interesting way to think about – imagine learning in this augmented world with companions. You go to school, you interact with your friends, you do all this stuff, and then you come home and you interact with your, you know, learning buddy in this interesting augmented world and you’re not missing out on the physical experience, but you get this extra layer. So I think augmented reality is going to do a lot to improve our experiences without leaving our full brains behind. I worry about that in complete immersion.

REID:
One of the things that your answer reminded me of, and kind of a follow-up, is that you are one of the very few people I know who, when asked about the metaverse, goes immediately to: here’s how we can increase empathy, right? Like empathy for old people, et cetera. And it reminded me of one of the questions that I had for you, which is: for builders and designers, what are the things that really increase empathy? What would be a couple of bullet points, you know, like, “here is what’s really important for getting this empathetic interaction”?

MAJA:
That’s a really good and hard question. And so I’ll try to get it right, but there are bigger experts on this. But I, at least, would say two things that have been shown to work well. So one is, and we all know this, listening, right? So, asking a lot of questions like, “how do you feel? How did it go?” And then not solving problems. So empathy is all about, “tell me about you.” It’s about you. And then the other part is using feeling-oriented language. This was actually very surprising to us. We ran a study to see if it was okay for robots to talk about feelings, because, remember, they don’t have any, and we can pretend that they do, but they really don’t. They don’t. So is it okay for a robot to say, “I know how you feel?” Because it really doesn’t. And I was actually surprised—and this is good, because we should, as researchers, be surprised, if I’m never surprised then it feels like I’m biasing my studies – so I was surprised, because I would feel like, if a robot said to me, “oh I know how you feel,” I’d be like, “mm, do you really? Because I don’t think you do.” But, actually, people like it when a robot says, “well how did that feel?” and, “oh, I know how you feel.” And we were dealing with a group of users who were suffering from anxiety, and then with users who were grappling with recovery from cancer. They actually liked the robot companion referring to understanding their feelings. So feelings-oriented language really comes across as empathetic and is well-received.

And people often think intuitively that if you have a supportive agent that is consistently supportive, you’ll get bored of that and you won’t like it. But I don’t know. Think about humans in your life: “oh my gosh, I have this parent or friend who is always supportive, oh I’m so bored with that, I don’t want that anymore.” Right? Who doesn’t want that anymore? So, the point is, if you have an empathetic agent that is consistently empathetic, it just cannot be rote repetition, but if it is meaningfully empathetic, people do not get bored with that. Everybody needs support.

ARIA:
Totally.

Maja, we could talk for so much longer, but we want to get to our rapid fire questions. So, you actually already mentioned some movies that I also love. But is there a movie, song, or book that fills you with optimism for the future?

MAJA:
WALL-E, actually, I’m going to go back to that, but WALL-E does fill me with optimism because WALL-E actually does a lot to show some bad things that people can get themselves into by not planning ahead, and then the way that we are infinitely malleable as a species. We can do better and be better and so can the machines that we create. So I am going to still go back to WALL-E, but if you want another robot movie, which is more fun than maybe optimistic, I do like Robot & Frank; that’s an often overlooked robotics movie, and I think it’s quite well done, really great acting, great understanding of what robots are like. It has a scene in which two robots come together and, instead of some kind of plot to take over the world, one says, “I’m operating at expectation level,” and the other one says, “me as well.” And then they just go their separate ways. I thought that was good.

REID:
Where do you see progress or momentum outside of your industry that inspires you?

MAJA:
Yeah, no, I’m actually really, really excited and I feel like if I had taken a different path, I would’ve loved to have been in bioengineering. This intersection now of biology and engineering where we’re looking at things—like, everything from, on the one hand, prosthetics, okay, that’s obvious, that’s kind of even closer to what I do, but you know, restoring vision, restoring physical ability—and this whole mush, where we’re going from genes to cells to physical ability. Even, you know, gene therapies and things like that.

I am extremely excited about this, and I think there are areas there with AI that will actually just really make huge impacts. Getting into the whole area of personalized medicine, where we’re going away from one size fits all—“oh my God, we ran this trial and now we have to use this”—instead, it’s like, “we’re going to understand you as a human so thoroughly that we can really not only help you with a specific issue that you’re having, you know, cancer or something horrible like that, but also we can predict and anticipate and hopefully prevent.” That’s huge. So the field of medicine is just so exciting. I could see an alternate reality in which I do that, but I’m good with what I’m doing.

ARIA:
I love that. Personalized medicine is so fascinating and could make such a huge impact.

And so, our final question, can you leave us with a final thought on what it’s possible to achieve in the next 15 years if everything goes humanity’s way? What’s the first step to get there?

MAJA:
I thought about this, and it worries me. This worries me because I think it’s very rare that everything goes humanity’s way for all of humanity. I want to be optimistic, but I’m a bit concerned about the particular place we’re in with AI because it’s going to disrupt the economy in a massive way, and it could be really positive, but it may not be. So I would really like us to do some serious thinking. I’m not necessarily saying we pause, but I will say one thing—which isn’t what you asked—but I’ll say, here’s the thing we should do: the big tech folks, OpenAI, Google, all the other folks, they need to not just say that they welcome regulation, they need to tell us what needs to be regulated, because they know it best. It is not the job of academics and it is most definitely not the job of politicians, because they have no clue. So the people who are creating technology need to be responsible for also suggesting specific regulation. I understand this’ll be super biased, but everybody’s biased. It is their responsibility, because I know on the inside there are a lot of really responsible folks who care, but they aren’t proposing what should be done. I want them to work with us, let’s say, in academia, because we can’t do this alone. So, I think if we do that now, if we think very hard about how to put proper guardrails in place, driven by the very people who are creating the systems, that’s when we can end up somewhere much, much better.

ARIA:
That’s the step in the right direction: government, academics, and industry all working together to be inclusive and think about everyone when it comes to AI. I’ll take it.

MAJA:
But in terms of vision, and I don’t think it’s 15 years, but in terms of this grand vision, you know, there’s always this discussion about, like, “oh well, you know, people need to be taken care of, and people will take care of people while technology will do all the other things.” And the part I don’t know is there’s like a big trench in between where we are now and that, and I don’t know how we get through that trench, right? Because you cannot just take 60-year-old people who have worked in, let’s say, food delivery with trucks and suddenly make them caregivers for people with Alzheimer’s. So, what do you do? How do you transition to get into, eventually, this other world in which it would be fantastic to think that we have a lot of leisure time and we’re taking care of one another and machines are doing all the crappy stuff and then some.

And so I want us to think about that trench, like, how are we going to get through that trench? If we can figure out bridging that thing, then we can get to the other side, which is going to be really awesome, I hope.

ARIA:
I love it. Figure out how to transition to more care.

Maja, thank you so much for being here. It was eye-opening and I loved hearing about robots. Really wonderful.

MAJA:
Oh, thank you and thank you for asking me such wonderful questions! I don’t get to talk about this often, so this is great.

REID:
That was super exciting. It isn’t often that you talk to people who are very deeply engineering-sophisticated, who are solving problems as engineering, and the engineering problem they’re focused on is empathy and the amplification of humanity through that. So, you know, as you noted, Aria, on the pod, we could have talked to her for another hour or two and completely lost track of the time.

ARIA:
And, to your point, she literally was using the same words as Mustafa, so it was so interesting to hear someone working on software and AI and large language models talking about empathy, how we can have people as therapists, how we’re definitely not replacing humans, this is an addition, this is a complement. And Maja was saying the same exact thing, she was just talking about hardware and she was talking about robots and she was talking about how we can have them sort of present in our everyday 3D lives. And I thought it was especially interesting when she wove in the AR and VR and how that actually, when you look at people’s brain scans, does give the same stimulation, at times, as in person. And so that could be such an iteration on the field of empathy and helping folks out, having therapists and coaches and all that stuff.

REID:
Well, that’s definitely one of the things we’re going to get, both by her robotics and by, you know, various AI, chatbots, Pi, others, which is we’re going to actually be running—what is the real typology of how you have empathetic interactions, compassionate interactions, understanding interactions? And we’re going to begin to understand this in a much broader way. I think her neuroscience point was as simple as: empathy is as empathy does. I thought that was, you know, that’s a very important lesson to remember.

ARIA:
I mean, if you had asked me before this episode, can a robot be empathetic? I would’ve said absolutely not. Like, that has to do with intent, that has to do with X, Y, Z. And it’s like, right, well, actually all that matters is the person who’s feeling it. And if you are a stroke victim and this robot can perform tasks that are empathetic to you, then that’s incredible.

And also, she talked about it so many times, the classic “teach a man to fish” tale, but we’re not doing anyone a service for certain things when we’re just doing everything for them. And we certainly know that when you have five, six, and seven year olds, but it’s also true as people age, or whatever it might be. How do we use these robots to help people help themselves? How do we use these robots to amplify what everyone wants to be doing? And again, like, take away the drudgery, but for the stuff that we want to be doing, so for our own independence, how wonderful to have someone right there with us helping us along on that journey.

REID:
And it makes sense that she’s starting with those most in need, right? Whether it’s children on the spectrum, or people who are injured or experience some kind of disability, maybe in recovery. Because that’s obviously the most important lens to show it through. I’d be super interested as that work also broadens out to Susie and Joe Average. [laugh]

ARIA:
One of the questions that we had on our long question list was about commercialization and scale: you know, how does she take that from the lab, or from helping a few folks, to broad adoption? And who are the future customers?

We’ll have to have her on the pod again because in a year or two, we have got to hear about how that scale is working. So I’ll be interested to watch the space.

REID:
Me too.

Possible is produced by Wonder Media Network, hosted by me, Reid Hoffman, and Aria Finger. Our showrunner is Shaun Young. Possible is produced by Edie Allard and Sara Schleede. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles.