This transcript is generated with the help of AI and is lightly edited for clarity. 

ARIA: A warning that this episode contains mentions of suicide. So please do take care.

HANNAH FRY:

When it comes to questions of risk, human brains do this black and white thinking. I don’t think when you’re diagnosed with cancer is the time to have a statistics lesson. And I also don’t think that we are getting informed consent from people if you just shout numbers at them. But what I think that you could do is you could have somebody who turns it round the other way, right? So who sits down with you and says to you, “Okay, what is important to you in your life? What are the things that you value, and what percentage chance of it working would you be willing to tolerate?” It’s noticing how you feel and interpreting it mathematically rather than trying to put the numbers on how you feel.

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know how, together, we can use technology like AI to help us shape the best possible future.

ARIA:

With support from Stripe, we ask technologists, ambitious builders and deep thinkers to help us sketch out the brightest version of the future, and we learn what it’ll take to get there.

REID:

This is Possible.

REID:

Just over a decade ago, a charismatic mathematician took the stage at TEDx Binghamton University and delivered a bold thesis: Math can help you find the love of your life.

ARIA:

Her talk, filled with wisecracks and diagrams, wasn’t just a platform for relationship advice, it was a portrait of how math is deployed in our everyday lives; of how numbers, theories, data, and algorithms can allow us to better understand ourselves and each other, and build toward the best possible future.

REID:

Millions of views and hundreds of lectures and broadcasts later, Hannah Fry is a leader in a global movement to make math cool. She was a professor of the Mathematics of Cities at University College London before joining the mathematics faculty at Cambridge University. Hannah is also a bestselling author and host of many shows, including BBC’s The Secret Genius of Modern Life and Uncharted, along with the Bloomberg Original The Future with Hannah Fry and the podcast DeepMind.

ARIA:

Put simply, Hannah finds and shares the many ways that numbers, algorithms, and data are at play in daily life, from dating apps to traffic jams.

REID:

Of course, math also forms the foundation of AI, providing the tools that enable machines to operate intelligently. Today, Hannah joins us to talk about how math moves us and shapes us, and how we can use technology like AI to elevate humanity. Here’s our conversation with Hannah Fry.

REID:

Hannah, it is excellent to be here in LinkedIn London with you. Thank you for joining us.

HANNAH FRY:

Thank you for having me. A delight to be here.

REID:

So apparently you’ve said that if you weren’t going to be a mathematician, you’d be a hairdresser. 1) Why? And 2) If you were a hairdresser, how would math play into hairdressing?

HANNAH FRY:

Okay, I remember saying this, and it is absolutely true as well. When I was 16, it was what I wanted to do, and my mum was like, “Okay, look, once you finish your GCSEs, just do your A Levels and then we’ll see where we are.” And then I finished my A Levels, and she was like, “Okay, just go and do a quick degree in maths and theoretical physics.” And it sort of went on and on and on. And then, eventually, the hairdressing fell by the wayside. But it was a genuine ambition of mine, and I can still tap into that feeling of what I wanted. Because I think that there is something, first of all, quite interesting about constructing a three-dimensional shape. You’re on a sphere—there are some interesting geometrical properties there. But also, I think what it is, it’s about doing something and then immediately seeing the value in it. And I get the same kind of buzz, I suppose, from doing Instagram videos or YouTube videos. You know, you do something and then you immediately see the effects of it. It’s not like the academic world, where things are slow burn, right? Slow burn. So yeah, I think that was what it was about.

ARIA:

I was just going to say, it’s like when you go to the gym, and you have a personal trainer, and you want them to be fit—you have great hair.

HANNAH FRY:

Thank you so much.

ARIA:

You know what I mean? It’s not a bad backup profession is all I’m saying.

HANNAH FRY:

Thank you so much. Yes.

ARIA:

But you did end up—you call yourself an accidental broadcaster, which I love. And so, are there things that you can’t wait to focus on? What’s percolating in your head right now?

HANNAH FRY:

I’ve been trying to get some stuff about artificial intelligence done properly out there for so long. I mean, it’s been really difficult to persuade people that this is something that’s worth time and attention that can be told in an entertaining way. And I think that this switch has just happened in the last 18 months or so, where people are like, “Okay, no, we will really address this.” So I’m really looking forward to doing some of that. I also, I don’t know, I’m in this quite philosophical mood at the moment where I’m thinking a lot about what knowledge means, and the edges of knowledge, and how we can deal with the fact that there are things that we will never know.

REID:

You know, I think that most people don’t see how math is so relevant to their lives. One of the things I think is really excellent about your work is to say, “How do I make it tangible, practical?” When someone starts from “I don’t understand math” or “I’m a little alienated,” what are the first things you do to light up their world and their universe? Because, for example, on the AI point that you were making, part of what I was doing with Superagency is to try to move people from AI fearful, skeptical, to AI curious. And I see a similar arc, which is you’re trying to make people math curious. And so, what are some of the things that you do to get people to suddenly start becoming math curious?

HANNAH FRY:

Okay. So I think the very first step is that it’s not your job to change people’s minds about how they feel about the subject. I think a lot of people are really traumatized, genuinely traumatized, by the math that they encounter in schools. And I think it really divides people. Some people—like us, I imagine—end up being really drawn to the subject, and I think a lot of people are really turned away. And you are never going to switch that. So I think it’s about acknowledging where people are and using that as your baseline. But then I think it’s about really doing it by stealth. That’s maybe…

ARIA:

It’s the vitamin and the Twinkie.

HANNAH FRY:

Exactly. Exactly. It’s the vitamin and the Twinkie. Exactly right.

REID:

That’s if you like Twinkies.

ARIA:

Fair.

REID:

Maybe vitamin and the chocolate chip cookie.

ARIA:

Done!

HANNAH FRY:

Just vitamins anywhere. You know, just force them in. Because I think that when you have had the luxury of really seeing the world through a mathematical lens, you fully understand that almost everything can be viewed through that perspective. I think that it has incredible insights that it can offer you on literally anything. The explosion in productivity, and everything that the world has seen in the last 15 years, has been based on that. Right? It’s like the era of big data was the kind of mathification of industry. When you understand that, then I think that you can start elsewhere, right? You can pick basically any topic you like and then show the insights. I often make programs where I never even say the word maths, I never even mention it, but it’s just about providing these curious, counterintuitive, surprising ways to take something that people feel like they already know, turn it on its head, and show it to them in a completely different way.

ARIA:

I just think math is such a way to explain the world. And to your point, I think it’s very different by country. At least in the U.S., people are math people or not math people. And that’s so ingrained in our heads. There’s no growth mindset. It’s just, “Well, I’m bad at math. I can’t understand it.” And you actually had a colleague say to you, “People are scared of mathematicians. Let’s keep it that way.” Why do you think people feel that way? And how can we not leave math to the pros?

HANNAH FRY:

I mean, I think that you can weaponize it, actually. I think that when you have knowledge, it gives you power, basically. In short, if you are able to create mathematical models that can impact the way that things are run, and you understand them, I think actually it’s very easy to just draw a wall around you and be like, “Sorry, you can’t come in.” But the thing is, at the same time, we are quite literally designing our collective future. Particularly now. Everybody deserves to have a say in that, you know? And I don’t think that people should be excluded from it. I’m not saying for a second that everyone’s going to be a mathematician. Of course they’re not. But what I would like is to move the dial a little bit on people understanding that there is this connection between a subject that they think is numbers and textbooks and whatever, and actually this really living, breathing language that is allowing us to effect change out in the real world. I think I’d like people to understand that this is also maths, right? This is also maths. And be invited in on the conversation. I know you’ve spoken a lot about ethics and doing things safely. I think those kinds of conversations, yeah, you should be drawing people into those.

REID:

In the AI revolution—which I refer to as a cognitive industrial revolution—what are the ways that people should think about engaging with AI that’s mathematically informed?

HANNAH FRY:

So I think a lot of it is about critical thought. Historically, computers have been deterministic machines, right? You put the same sum into a calculator twice, you get the same answer out. And I think that, actually, we are moving away from that and towards a much more stochastic, probabilistic space. And I think that people haven’t yet adjusted their mindset to what that actually means. You see this with hallucinations from large language models, of course, but I think that you also really see it in the ways that people are applying mathematical models, or artificial intelligence models, to determine particular outcomes. So, for example, let’s say that I, somehow or other, came up with an amazing new algorithm that could find your perfect partner in the entire world, right? But it did it with 85% accuracy. I think that people are not very good at understanding what that means and what the wider implications of that are. And there are countless examples of that nature—I mean, more and more and more—where these things fall short of perfection, and will always fall short of perfection, which is fine. But in that gap between perfect and what you end up with, all kinds of potential problems can arise.

REID:

You’re highlighting a particular thing—Aria has heard this from me before: If I had a mathematical wish for humanity, it would be understanding that everything is in probabilities. That it’s almost never zero percent, and almost never a hundred percent. And how you configure your navigation path depends on the intersecting probabilities. And yet people tend to collapse it into a hundred or zero. Which frequently means—like, if you have an 85%, you skate through it fine 85% of the time, but then you’re ambushed by the 15% when you shouldn’t be.

HANNAH FRY:

I mean, I tend to think of almost everything as a sort of spectrum. So I completely agree with you about between zero and a hundred. But I also think about it when you’re having arguments with people; right and wrong also exist along this spectrum. And in the big societal debates that we’ve had, where people feel so completely polarized, there’s a mathematical trick where you deliberately push something to an extreme. So you think, “Okay, what would the case be if this particular variable went to infinity? Or what would it be if this was zero?” And you use that to give you information. And I think that the Jesuits have a similar philosophy, right? When there’s an argument about something, you imagine an extreme version of the same problem to help you understand it a little bit better. That thing of not seeing the world in black and white; it’s not 100% or zero, it’s not yes or no, it’s not true or false. Everything exists along a spectrum. I think that’s a really helpful way to see the world.

ARIA:

I mean, that to me—besides weeping alongside you in your documentary about cancer—that actually was the most illuminating moment. I was talking about it with Reid this morning. So often with health and medicine, especially as women, it’s like, “Well, if you do that, you’re 10 times more likely to get cancer.” Well, the chance before was 0.001, and now it’s point… Like your mind can’t comprehend. And especially for people who don’t have a facility with math. And so, in that documentary, you were talking about how, if a hundred women were diagnosed, 80 of them would’ve been okay without going through chemotherapy, and what are the odds? What was the response that you got to that? How do you think about that? To me, especially with something as important as your life, the math becomes very important, but then also not important. Because you say, “Of course I’m going to do this.” How did you think about that?

HANNAH FRY:

Yeah, so there was one woman in the documentary in particular. This woman Anne. I mean that’s a conversation that will stay with me forever, right? For people who haven’t seen it, she was in her late sixties, and she’d just been diagnosed with breast cancer. And she’d had the lump removed, but the doctor was working out what future treatments she should be given. And they said, if we don’t touch you again, you have an 84% chance of living another 10 years. But if we give you everything we’ve got—so chemotherapy, radiotherapy, you know, hormone therapy, everything—we can increase it to 88. I thought that was a really difficult decision, right? Do you go through it? But I spoke to her outside, and I was like, “What are you going to do?” And she was like, “I have to have the chemotherapy because otherwise I’ll die.”

HANNAH FRY:

I was absolutely astonished that the numbers were not getting through, right? The numbers were not communicating the message. And then I really thought about it. I went back through to the doctor and was like, “She did not understand.” You go to the doctor, they tell you the treatment, you take the treatment, the thing goes away, right? It’s exactly as you said—100%. 100%, 100%, 100%. And the doctor was like, “I think if she did understand, she probably wouldn’t go through the treatment.” And if that happens over and over and over again, more people will die. Which is also completely true. What that started in me and has continued—so the book I’m writing at the moment is about doubt and uncertainty—is that I’m not sure I even know what those numbers mean. I’m not sure I understand the difference between 88% and 84%.

HANNAH FRY:

Is the human brain really capable of that? I don’t think it is. And so then it’s like, well now you’ve got these numbers that are applicable and very useful at the population level and are kind of meaningless when it comes down to you as an individual. And so then, it’s like, well, what does probability actually mean? It really comes down to how you feel about risk. Ultimately, I think you’re quantifying how you feel about risk. And that makes sense at the population level because we can do it that way. But when it comes down to you as an individual, I think things get very, very hazy.
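[For concreteness, a natural-frequency reading of those figures, taking the quoted 84% and 88% at face value: out of 100 women with this diagnosis, 84 survive the next ten years with no further treatment (100 × 0.84 = 84), and 88 survive with everything thrown at them (100 × 0.88 = 88). So the treatment changes the outcome for 88 − 84 = 4 women in 100; 84 would have been fine anyway, and 12 die either way.]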

REID:

Well, there’s a couple things there. One of the things that happens frequently with math and most people’s psychology is that they get false precision. It’s like, really? 84, 88? How do you derive that? That’s, you know, your area of expertise. The next thing is, what’s the actual fitness function? What’s the actual game you’re playing? Because, in this case, I should think the answer is fairly obvious, if 84 and 88 are correct, because it’s the probability of the maximum number of quality-of-life days. And chemotherapy is brutal. So you go, “Okay, actually the higher probability of quality-of-life days is not doing it.” And so frequently, I think one of the things that people mistake is, what’s the game? It’d be interesting figuring out which game—I mean, you mentioned Gödel earlier and the incompleteness theorem—but one of the most important things is figuring out what the game is before you even get to what the thing is. And that’s another part of this.

REID:

And then, the last part of it is to think about, okay—exactly as you’re mentioning—how does this apply to me? Am I a person who is willing to take some risk? And by the way, most people—this is part of the reason I think they get to zero in a hundred—because most people want to imagine they’re not taking risk. They don’t realize that when they get in a taxi cab and go somewhere, there’s a risk! You walk across the street, there is a risk! And they go, it’s just zero, zero, zero, zero, zero. Because, otherwise, brain goes “ZZZ.”

HANNAH FRY:

“I’m immortal.”

REID:

Yes! And so part of the thing that I think is excellent is to get a little bit more fluid in the application of probabilities. One of the reasons I love the mission you’re on. So have you thought about the heuristics? If I were going to say something to one of my godchildren about, “Hey, think about math this way in your life,” what would be a heuristic? A principle? A “need to remember this as I apply it”?

HANNAH FRY:

Yeah. Okay. So, actually, can I say something about the game bit first? Because I thought that was such an interesting point, because I totally agree with you. It’s something I think about a lot. You have to decide what you are actually optimizing for. And I think so often people don’t. And one of the things I’ve been thinking about a lot recently is prisons. If you look at the data, why would you want to send somebody to prison? When somebody commits a crime, you need some way to rehabilitate them, right? You also need to actually take them out of the system. You need some deterrent effect, and you need some sort of sense of retribution for the crime. But when you actually look at the data, on every single one of those—apart from maybe a feeling of retribution—prison is like the worst possible answer.

HANNAH FRY:

It’s like, if you want to reduce crime, is that the game that you are playing? I do think that actually people go through a system without ever stopping to think about what the question is—what the game is exactly as you describe. And then, in terms of how do you think about maths? How do you kind of translate it? So going back to that example about cancer, one of the things that I found really noticeable is when you are diagnosed with cancer, you have somebody who will take you into a room, away from the doctor. And they will sit with you as long as you want. And they will go through the procedure, they will answer every question that you have. I mean, they’re essentially a translator between you and the medical profession, right? And I do sort of wonder that when it comes to questions of risk, because human brains do this black and white thinking.

HANNAH FRY:

I don’t think that when you’re diagnosed with cancer is the time to have a statistics lesson, right? If it didn’t work before, it’s not going to work then. And I also don’t think that we are getting informed consent from people if you just shout numbers at them. But what I think that you could do is you could have somebody who turns it around the other way, right? Who sits down with you and says, “Okay, what is important to you in your life? What are the things that you value? And how can we design a treatment system that does that as well as possible for you?” I spoke once to an intensive care nurse. Because of course, when you’re in intensive care, there are all sorts of situations where these probabilities arise, right? You can resuscitate somebody, but the chance of this repercussion is really high, or whatever it might be—this procedure has these consequences. And she said that what she’d started doing with families, rather than saying, “This has this percent chance,” is instead saying to them, “Okay, this is what we are thinking. What percentage chance of it working would you be willing to tolerate?” So it’s the other way around, right? You’re not taking a number and trying to attach a feeling to it. It’s the opposite. You’re taking a feeling and trying to attach a number to it. And that, I think, is a much, much better way to try and do things. So, in terms of intuitively thinking about mathematics, it’s switching it around. It’s noticing how you feel and interpreting it mathematically, rather than trying to put the numbers on how you feel. So, actually, with my kids, for instance, if we walk into town, I’ll ask them to come up with a route that minimizes the number of road crossings we do. It’s seeing the world in a way where you are systematizing, and critically thinking about things, and noticing that you are doing it, rather than necessarily just trying to put numbers on everything.

ARIA:

Your point about pre-deciding, especially before the emotion takes over, I think is so critical. Most parents go through that during childbirth. You’re faced with all these decisions: Should you get this test? Should you do this? And my husband and I would say, “Well, will we make a different decision based on the information? If we learn there’s a 60% chance of this, or 20%, will it change our mind? Well, then why do we need to know?” One of the things you’ve talked a lot about is that—perhaps because, again, of this sort of innumeracy, or people’s non-facility with math—they tend to trust the outputs of math blindly, or perhaps of LLMs, or computers. And so, I’m obsessed with criminal justice reform and the prison system. One of the things I’m excited about with AI is that it’s hard to wave a wand and fix racism, but maybe we could wave a wand and make an LLM not racist. You know, it’s hard to fix system-wide things, but we can change code, we can reinforce, we can do all these things. And you’ve talked about how, with AI, there could be increased bias, but there are also ways to reduce bias with AI. So how do you think about that?

HANNAH FRY:

Yeah, absolutely. There’s a really interesting paper that came out a couple of years ago called Women Also Snowboard. It was looking at image recognition, where you can’t tell the gender of the people. What was really interesting was that it demonstrated how these are not stable equilibria, right? If you have a bias in the initial data set—which is where the original bias came from, in the labeling, where people were seeing pictures of snowboarders and assuming they were men rather than women—it can be exacerbated once it goes through the algorithms. That’s something that, exactly as you say, you have to be extremely cautious about, and make sure you are putting in the correct safety procedures to minimize and mitigate against it. For me, the crucial point about this is the one that you made.

HANNAH FRY:

That you cannot wave a wand and fix systemic issues or societal problems. And so then I think that the question changes. This is something that doesn’t have a finish line, right? It’s not like, “Oh, well done, you did fairness,” right? Like, “Oh, unbiased, congratulations.” I was writing a book a few years ago, and I once spent like a week trying to research and look for any system in the world that has ever been perfectly fair. Doesn’t exist, right? There’s nothing—forget about algorithms completely for a second. And so then I think if you say this is not something that has a finish line, and instead you accept that there will always be bias in your system, and therefore commit to continually hunting for it and repairing it, I think that’s the way that you have to approach this.

REID:

Your comment just made me think about how there are ways to make perfectly fair systems, but they will be very unjust systems. Because you could go zero percent all the time. A hundred percent all the time. Or 50-50 random. And it’ll be fair, it’ll apply to everybody, but oh my God, would it be an unjust system!

HANNAH FRY:

Yeah. You’re absolutely right.

REID:

So how has this AI revolution made you think about education? Because one of the things you do at Cambridge, obviously, is trying to make math education widespread, impactful, et cetera. But let’s generalize a little bit to education generally. And obviously there’s been a lot of turmoil within the academy when it comes to ChatGPT and everything else. What are your thoughts on it, and how should universities be thinking about AI generally?

HANNAH FRY:

Yeah, I mean, obviously it’s been a massive disruption, so thanks, tech guys.

REID:

Yes, we try.

HANNAH FRY:

But I’m really optimistic. I feel really optimistic about it. And I think part of my optimism actually comes back to the point that you made about the game earlier. Because the thing is, the real disruption has happened in the way that we assess students’ performance. But if you rewind to what the assessment is for in the first place, the real question that we want to answer is: how do we know whether our students are getting a good education? Whether they’re understanding the concepts and are ready for the next step that they go on to. And you can’t ever really answer that question. You have to use a metric in order to get there. And the metric always falls short of precisely the thing. You can’t use numbers to perfectly capture these things.

HANNAH FRY:

I’m fine with the fact that written essays have to be considered in a different way. I’m fine with the fact that we have to move more towards oral examinations, change the types of questions that we’re asking. I’m genuinely fine with that. I’m okay with it. But here’s where I’m really excited: we’ve known for a really long time that different people learn in different ways, but it’s really hard to accommodate that when you are one person standing at the front of a room full of people. And just from my own experience, using AI for research is like research on steroids. It’s incredible how much faster you can accumulate, and assimilate, and critically think about knowledge. I think it’s giving students extra tools, extra learning. I think they’re learning faster and better than they were before. And I’m also looking forward to AI tutors, which will be adapted to individual people’s learning styles, can really clearly identify gaps in their knowledge, and can then construct appropriate questions to reinforce those. I really think there’s a lot of really good stuff coming.

REID:

I agree.

ARIA:

I was thinking about the switch from written to oral examinations, and I used to teach, and my students—no offense—were just horrific writers. And so even if they understood the concepts, I would always ding them for their writing. But some of them were amazing orators. All of these things are coming in and we’re never going to get it perfect. And maybe AI can help us actually get a more holistic picture of the person we’re thinking about.

HANNAH FRY:

One thing, just very quickly on that. Historically, every culture, every society has always really valued wisdom. And I think of wisdom as an ability to take in a wide range of variables and come up with a very particular response to whatever that might be. And I think that historically, we’ve sort of been at the exact opposite end of the spectrum, which is one solution for everybody. And I think we are moving more towards AI, which is much more individual.

ARIA:

Absolutely. I rewatched your TED Talk last night, which, for those of you who haven’t seen it, was all about using math to find love. And so I’d ask you: in the age of AI, people are having AI friends, AI girlfriends. They’re using AI on dating apps to give them the best poem to send to someone, whatever it might be. How would you update your TED Talk in the age of AI?

HANNAH FRY:

Oh my gosh, it’s such a big question. So I did that TED Talk over a decade ago. I should also tell you that when I did it, I was just engaged, or just about to be married, and I was very excited about the world. And I gave all of these bits of advice about how you can use math to optimize your own dating life. And I’m now divorced, everybody. Okay? So basically, don’t listen to anything I say.

REID:

No, no. Understand: it’s probabilities.

HANNAH FRY:

You’re absolutely right. But in terms of updating it for the age of AI, I think that something really interesting has happened with dating apps. And it does, again, come back to what the game is, right? Because the thing is, the hard bit about dating is finding somebody who you can integrate your life with in an effective way. Where you have these shared goals, where you support each other emotionally, in terms of your career, all of that stuff. Finding somebody who you can do that with, and the process of doing it. Communication, ultimately, is the thing that’s difficult about relationships. The thing that is not difficult about relationships is: which of these 2D images of people do you think are better than others? Right? That is not what it is. And yet, somehow or other, that has become what dating is about. And of course, it’s because it’s the bit where you can find profit. This is one of those situations where the technology has started changing human behavior. The way that we have relationships has dramatically changed because of the impact of these apps. So in terms of what I would do with AI, I don’t know. Maybe it would be around what the question should be. It would probably be something about improving communication between individuals.

ARIA:

I mean, my favorite thing about the TED Talk was the part about increasing your odds. To your point about understanding the game, AI can help people with that communication piece of relationships. You can talk to your AI to figure out what you’re supposed to say. You can get advice. There are positive things that it can help you do. But to your point, maybe looking at two static images and deciding which is most attractive is not going to get you to your future husband or wife.

HANNAH FRY:

Absolutely. Yeah. I mean, that whole point about how you think that you’re appealing to the masses, but you’re not. You’re not appealing to the masses. You should pick whatever makes you unique and just go with that.

REID:

So, before we get to your new BBC series—which I have several questions about—I’m curious if you have a heuristic about how people can learn a good mapping between an emotional state and a number or a percentage. Because I agree with you; it also is, what’s your aim? What’s the game? That plays into the emotional state. But it’s like, well, what percentage would be acceptable to you as a way of doing it? And of course, that requires practice and a reasonable mapping function. Have you had any insights about how people can learn that mapping function better?

HANNAH FRY:

I think practice is the key. It’s the superforecaster stuff, where you look at different future events and put down a probability, and then you check back and recalibrate the percentages that you’re giving based on what actually happens. So I think it’s that. You have to continually reassess it. The other thing, actually: the precise situation where you’re like, “What’s the number I would be happy with, and what’s the number that it is?” I don’t think those come up that often. The one that I use more often is regret minimization. I imagine being in the future and looking back, and then choose the path which I would regret the least. That’s the one I use almost all the time.
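[A minimal sketch of that superforecaster-style recalibration loop, in Python. The forecasts and outcomes below are invented purely for illustration; the Brier score is one standard way of keeping score on your own probability estimates.]

from collections import defaultdict

# Record the probability you assigned to each event, and whether it happened.
forecasts = [0.9, 0.7, 0.9, 0.3, 0.7, 0.9]  # invented probabilities you wrote down
outcomes = [1, 1, 0, 0, 1, 1]               # 1 = the event actually happened

# Brier score: mean squared gap between forecast and outcome.
# 0 is perfect; always answering 50% scores 0.25.
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration check: of the times you said 90%, how often were you right?
buckets = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    buckets[p].append(o)
for p, hits in sorted(buckets.items()):
    print(f"said {p:.0%}: right {sum(hits)}/{len(hits)} times")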

REID:

No, no, and again, depending on the game, a very useful tool. So let me move to your new BBC series. I’ll start with a confession, which is, I find it very difficult to watch Black Mirror. Because when I watch Black Mirror, I’m like, “Oh, I know how to fix that.” I know how to not make that be the dystopia. It’s not inevitable. Sure, if you’re dumb, and build it this way, and society somehow orients around making it dystopic, it can be terrible. And your series was described as real-life Black Mirror.

HANNAH FRY:

Yeah. [Laugh]

REID:

So I had a bit of an aversion response. Which surprised me, because I see you as an optimist about how we create the future. So what is this future coming, and what is this series?

HANNAH FRY:

I agree with you that there are dumb ways to design things, but I think the slight problem is there are dumb people designing stuff, right? And I think that there are some really astonishing stories of things that have already happened and are continuing to happen. I went to go and meet this company in California. So now, new cars have to be built with an internal-facing camera, right? And part of that is technology to check when the driver is falling asleep at the wheel, in order to alert them. Which I think is a really positive thing that will reduce deaths on the roads. But what you can also do, if you are a dumb designer, is use an AI to determine the emotional state of the driver. Now, I probably should have put that in air quotes, because the actual science behind it is just junk.

HANNAH FRY:

It’s based on the idea that you smile when you’re happy and you frown when you are sad. Which is just not true at all. And yet, because there’s nothing stopping people selling this stuff, there are companies who will take those feeds and then send it out to insurers—that you are a grumpy driver, or whatever it might be. And their sort of thinking is that it will change your insurance policy, but maybe also, if you’re in a particular emotional state, then your car wouldn’t work. And so there are these stories where people have done things that don’t make sense, that need correction. Part of that correction is shining a bit of a light on them. I think it’s about having those conversations in the public space. But I also agree with you that I am ultimately an optimist.

REID:

One hundred percent. And by the way, I agree with everything you’ve said, and outside commentary and revelation is an important part of this. But one of the things I’m curious about is whether you’re going to do this in the story, because you’d go, “Well, it’s a probability curve.” And I’ll give you an example of something that I encountered last year, which is that there’s a lawsuit against Character.AI for a child—a tragedy, obviously—who was having interactions with a chatbot and committed suicide. And so there’s a lawsuit. Now, as far as I can tell from a perusal of the chat transcript, the chatbot wasn’t doing any of the obvious things that would trigger a lawsuit, which is, “You should consider committing suicide,” or that kind of thing. It had some irregularities in the conversation, but nothing that was persuasive, manipulative, et cetera.

REID:

But the problem that I saw—and this is part of how we as human beings take narration and make bad judgements because of it—is if you asked me to guess right now, would chatbots as they’ve existed and been deployed, increase or decrease suicides across the entire population? My guess is it’d be decrease.

HANNAH FRY:

Decrease. Yeah.

REID:

Right? Because there’s a sympathetic ear to talk to you at two in the morning. I know the vast majority of these chatbots are trained in a way of going, “Oh, are you unhappy? Well, let’s talk about it. Let’s try to help you get back to a thing.” And so when you say, okay, we have this narrative where this bad thing happened, you also have to ask, where does it fit within the probability curve? And given you, and the intelligence around math, I presume that’s also part of this evaluation?

HANNAH FRY:

Yeah, because it’s nuance, right? Yes, I totally agree. But then I also think, in that example, I would probably take the conversation to a higher level. Which is about the ways in which we anthropomorphize these chatbots and consider them to be human. Where people are forming genuine emotional connections, and genuine bonds almost, with these chatbots. And I think that we should be having public conversations about how much we want our chatbots, or whatever it might be, to act as though they’re human, bearing in mind that humans have this real habit of—exactly as you say—putting a story on something. But this reminds me, I think, of what was happening in the earlier days of driverless cars, when, yes, driverless cars were reducing the total number of deaths on the roads, but equally, there were deaths due to driverless cars going wrong. For me, the top-level question with driving is: okay, well, humans are going to be in cars, right? So, what is it that humans are not very good at? Well, we’re not very good at paying attention. We’re not very good at acting well under pressure. And we’re not very good at being aware of our surroundings.

REID:

And self-assessment.

HANNAH FRY:

And self-assessment, absolutely right. And so the design of those systems, in those earlier days, was, “Okay, well, we’ll just get the machine to drive the car, and then the human will step in.” And it’s like, well, hang on a second. Humans aren’t very good at being aware of their surroundings. They’re not very good at performing under pressure. And they’re not very good at paying attention. So you are expecting the human to step in at the moment when their attention is elsewhere? It just doesn’t work. And then I think that there was a reframing of it, which is, “Okay, well, hang on a second then. If we start with what the human can’t do, pay attention, whatever, keep the human in the driving seat and then have the machine fill in the gaps of the stuff that the human can’t do—so the collision avoidance systems that we’ve had. Well, that’s just a much better pairing of the two together.” And I think that, having had that now for a number of years, the technology has developed to where you can go back to the other system. And I wonder what the equivalent is for chatbots—I don’t have an answer for this, by the way—but given that humans have this habit of imposing personalities and characters on things that are inanimate objects, how do we create these systems in order to mitigate against the worst risks that can happen when they do?

ARIA:

No, I think that’s really important. I also think there are all these second-order effects. I was reading about the decision not to require child car seats on planes. Because actually, if you have a six-month-old, they’re safer in a car seat on a plane. Mostly nothing will happen, but they’re twice as likely not to have a neck injury, whatever. But if they required car seats on planes, parents would just drive. And you know what’s dangerous? Driving. So it’s like, oh, actually, we can’t just look narrowly at this math statistic that applies on planes. We have to say, “Oh, bigger picture: let’s look at parental travel and how we keep our children safe.” But I think there’s a natural tension—sort of what Reid was saying—between storytelling and statistics, or storytelling and facts. You see someone every day on Twitter saying, “Oh, silly me. I thought facts and statistics would change someone’s mind.” But we all know it doesn’t. And so in your job, in your professorship, trying to explicate the world through math, trying to make it more intelligible, how do you see that disconnect between stories and statistics? Because people’s brains parse them so differently.

HANNAH FRY:

Oh my gosh, this is such a good question, and it’s so hard. So I did this program during COVID. Essentially, the idea was that we would take seven anti-vaxxers, and for a week I would be with them, and we would have all sorts of conversations. And then over the course of the week, we would see if anybody changed their minds. Now, the thing about this program: I didn’t like the cut that went out. I thought there were some problems with it. And I think the issue was, exactly as you say, that we know statistics don’t change people’s minds. You cannot just throw numbers at people and then be like, “Oh, of course. That’s obvious. I was right.” It doesn’t work. It’s called the deficit model of public communication: if only people knew what we knew, they would see the world in the way that we do.

HANNAH FRY:

And what I wanted to do with that program was actually really sit down and understand where these people were coming from. And over the course of the week, I just found it so interesting, and it changed my mind on so many things. For instance, there was one guy who was a nurse, and I got on very well with him. I think he’d had some issues when he was younger, where he had been put on medication against his will. And so he was like, “Look, I just believe in informed consent, right? Everybody does. And if somebody comes into the hospital and they have gangrene in their leg, or whatever it might be, and they refuse treatment, we have to accept that. And this is a vaccine where there isn’t a societal responsibility, because it doesn’t change the probability of transmission.

HANNAH FRY:

Certainly not after a couple of weeks. And so it’s my decision to make, and this is the stand that I’m making.” And you know what, actually, I agree with him. There was another woman who was pregnant, a Black woman from Lambeth, who had a Black husband. And at the time, the vaccination rates among Black Londoners in particular were really low. And so there’d been targeted campaigns to try and increase their participation. And she was like, “Okay, well, for starters, right? I’m pregnant. I’m not going to take any unnecessary risk.” Which, in the fullness of time, I also really see. But she was also like, “Why, all of a sudden, does the government have something they want to give young Black men in Lambeth?” And she made this really interesting point about how, when you go to vaccine centers, honestly, it just feels like you’re going into a prison. And it genuinely had not even occurred to me that that was a triggering experience for somebody. And so I think that with actually getting people to understand numbers, the first bit of it is that you have to understand them. It’s about listening, and not listening while thinking of the next thing that you want to say. I think you can’t really change people’s minds. People have to change their own minds. And I think that the best way you can do that is to approach a conversation with empathy.

ARIA:

And I think, to your point, you have to understand what their point of view is. What’s the game? What is their reason for not doing it? And then we’ve seen so many studies showing that AI and LLMs are actually the best at combating conspiracy theories, or whatever it might be, because they can understand where the person’s coming from and then give some nuanced, reasoned argument. And so it’s not that people’s minds are totally closed; it’s that everyone has a different reason. And so when you just attack them with your one-size-fits-all response, of course they close down. Because you didn’t know it was about prison, or about this, or whatever.

REID:

Actually, I think the way the LLMs work is less “I’m arguing with you” and more “I am asking you questions.” Because it’s the thing where you don’t change their mind; they change their mind.

ARIA:

Exactly. Right. The Socratic method.

HANNAH FRY:

They changed their own mind. Do you remember that amazing study where they got people—I think this was in America; they were Republicans or Democrats—and they asked them about their feelings about Obamacare? And then they asked them how strongly they felt about it. And then they gave them a new sheet, and they were like, “Oh, do you know how a toilet works?” And everyone was like, “Yes, obviously I know how a toilet works.” It was like, “Okay, how confident are you?” And people were like, “Ten, come on.” And then they were like, “Okay, here’s a diagram. I want you to explain to me how a toilet works and label the parts, and I want you to give me a full rundown of where the water goes, and exactly what happens in the whole thing.” And then suddenly people were like, “Okay, actually, maybe not. Yeah, okay, fine, fine, fine, fine, fine.” But what they found was that when they then asked them how strongly they felt about Obamacare, or whatever it might be, just that act of questioning themselves also made them question themselves more widely on other topics. So you are right, it’s asking people questions, but to find out the answer, not to sort of humiliate them or anything like that, just to find out the answer. I think there’s something in that.

REID:

Going back to the probability thinking and AI: as AI develops, it’ll get a higher and higher probability of accuracy of information. And I think that one of the things that we’re going to need to do is have an assessment of where it will most likely be right, where it might be wrong, and when you want to look at it more. So, for example, there was a research finding that suggested that AI, GPT by itself, was better than AI plus doctor. But I think the reason was because the doctors hadn’t yet learned how to use GPT. They didn’t know where they should go, “Oh, right. That’s different than I think, and it’s probably right,” or, “Wait, this is a case where I actually want to do more investigation.” And I think that’s part of what we’re going to need—part of the reason why I’m excited about your work and why it’s good—is that thinking about, okay, when is this likely to be right, and when is this likely to be wrong, is going to be part of our AI future as it evolves.

REID:

And so I think part of it is we’re going to have to learn heuristics. Heuristics, as opposed to the “I just feel…” For example, one of the heuristics I use with AI is: if you ask it a general-principle thing, like, “What are the seven rules of entrepreneurship?” it will generally be pretty good. But one of the things that I did, because I had early access to GPT-4—I said, “Has Reid Hoffman made a knockoff of Settlers of Catan? And if so, what is it?” Because I have. And it said, “Yes, absolutely.” I was like, wow, it discovered that. There’s very little information on that. And then it said, “What Reid has made is a game called Secret Hitler,” right? And there is a game called Secret Hitler—the Cards Against Humanity people have made a version—and it had created a Wikipedia-style answer that was completely fictitious. And that was when, because I have this evolving set of heuristics, principles, it’s like, “Well, when I ask a specific question or ask for a quotation, I’m always a little bit more suspicious.”

HANNAH FRY:

Absolutely. It’s the edge cases.

REID:

Yes. Or specificity. Because it’s trying to be helpful to you. And it fundamentally still doesn’t recognize that in a lot of these cases, error is extremely expensive.

HANNAH FRY:

Yes, absolutely. I totally agree. One of the things I think is absolutely amazing about AlphaFold—I know you had Demis on your podcast—is the way that it also gives you a confidence level. It doesn’t just tell you, “This is the folding.” So you can see when the model is in its comfortable zone and when it’s struggling. And I think that’s really, really helpful, especially when you’re talking about situations like doctors assessing data and information.

REID:

I think a lot of the training and meta-prompting will be like this. For example, you can get GPT-4, even as it is, to be a much better medical thing when you do a meta-prompt with “How do you do Bayesian reasoning with medicine?” Because then all of a sudden it goes, “Right, I’m going to give you a Bayesian answer,” and then all of a sudden it’s much, much better. Because it’s like, “Well, there’s a 64% chance that it’s this, and there’s a 30% chance…” It’s giving you a list, in order, with probability assessments, if you do the Bayesian meta-prompt the right way. Which is what I would do.
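[The arithmetic behind that kind of ranked, probability-weighted answer is Bayes’ rule. A minimal Python sketch; the prevalence, sensitivity, and false-positive rate here are invented purely for illustration.]

# Bayes' rule for a diagnostic test: what is P(disease | positive test)?
base_rate = 0.01       # invented prevalence: 1% of patients have the condition
sensitivity = 0.90     # invented P(positive test | disease)
false_positive = 0.05  # invented P(positive test | no disease)

# Total probability of a positive test, then the posterior.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = (sensitivity * base_rate) / p_positive
print(f"P(disease | positive test) = {posterior:.1%}")  # about 15.4%, far below most people's guess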

HANNAH FRY:

Yeah, that’s so interesting. I mean, I remember seeing a paper about chain-of-thought reasoning and the difference that it makes. And now, of course, people have started putting reasoning into their models. It sort of feels like magic. You know what I mean? It does feel like magic. But you are right, it’s about just looking again. Having confidence in the places where it’s confident, and going over it again. I think, subconsciously, I’ve been applying a similar heuristic.

ARIA:

So, on many episodes, we have an AI element where we bring AI into the chat. And I think I know that you’re a Jane Austen fan, and you might hate math jokes, but we’re going to bring them in anyway. So we asked Pi, the personal intelligence that Reid co-founded with Mustafa Suleyman, to give us some math-themed Jane Austen jokes.

HANNAH FRY:

Amazing.

PI:

It is a truth universally acknowledged that a young woman in possession of a large number of admirers must be in want of a better statistician to calculate her chances of finding true love.

HANNAH FRY:

I mean, amazing. It’s taken a quite famous line from Jane Austen and just stuck the word statistician in there, which is great. It worked for me.

PI:

Mr. Darcy’s pride and Elizabeth Bennet’s prejudice may have been quite irrational, but it was their common interest in geometry that brought them together in a most acute love triangle.

REID:

Doesn’t quite do the dad joke thing. Yes.

HANNAH FRY:

No, it needs to work on the sort of build-up and release of tension. That one dragged a bit.

ARIA:

Alright, we have one more.

PI:

When Ms. Bates heard of Mr. Elton’s engagement to Augusta Hawkins, she was quite put out. Why? “It’s simply not fair,” she exclaimed, “For everyone knows that three is a crowd and four is a quadrilateral!”

HANNAH FRY:

So bad.

ARIA:

Yeah. Hmm. Okay.

REID:

I see the comedians still have a job.

HANNAH FRY:

Yeah. I think the specificity there is where we’re really at the edge, aren’t we?

ARIA:

Alright, so one out of three.

HANNAH FRY:

One out of three was good.

ARIA:

Decent show.

HANNAH FRY:

It was decent.

REID:

Quasi one out of three. It was one that kind of sufficiently crossed the line. And the other two probably would’ve had tomatoes thrown at the stage.

ARIA:

Okay. We have some work to do. We have some work to do. Okay, great. Well, that was our AI element for today.

REID:

Alright. Should we do rapid fire? Is there a movie, song, or book that fills you with optimism for the future?

HANNAH FRY:

I’m going to go for When We Cease to Understand the World, because it’s just, okay, so beautiful, and it just really captures how exhilarating it is to be at the brink of new knowledge, I think. It really makes me feel great. I’ve read it about six times.

REID:

And by the way, since we just did the comedic thing, Stephen Fry read that book and reached out to Labatut. Because he also shares all of our passion. It is a great book.

HANNAH FRY:

Yeah, it’s really amazing. Really amazing. Because they did the Hay Festival together, didn’t they?

REID:

Yes, that’s right. And that came out from that reach out.

HANNAH FRY:

Wow. Amazing. Amazing.

ARIA:

Alright. What is a question that you wish people would ask you more often?

HANNAH FRY:

Do you want another drink? No. [Laugh]

ARIA:

[Laugh] Done. [Laugh]

HANNAH FRY:

I don’t know. I don’t know. What do people answer for this one?

REID:

Perhaps the funnest one was I asked a friend of mine’s kid, and the kid looked at me and said, “Do I want to be here?”

HANNAH FRY:

Amazing. I’m sticking with my first answer then.

REID:

Right. That’s better. Alright, so, where do you see progress or momentum, outside of your industry, that inspires you?

HANNAH FRY:

Oh, I think the stuff that’s happening in biological spaces is really incredible. Physics is lucky that it has equations to discover. You can look at all of that data on the galaxies, and then you can come up with the “E equals mc squared.” I mean, almost impossibly simple, right? And biology doesn’t have that luxury, but I think that we are now at the situation, or almost, where you can take the unimaginable complexities of biology and extract a working model for how it fits together. And I think that is really, really exciting.

ARIA:

Awesome. Alright, can you leave us with a final thought on what you think is possible to achieve in the next 15 years if everything breaks humanity’s way, and what’s the first step to get there?

HANNAH FRY:

Let’s go crazy optimism for a minute here.

ARIA:

Please.

HANNAH FRY:

Because I think that the history of humanity has always been a story about scarcity, right? It’s been about resources being divided. And I do think that there is a way for science to make a gigantic difference for everybody. There are so many different areas where, if science makes a breakthrough—desalination, nuclear energy, good battery design, whatever it might be—these are the kinds of things where we just need a little bit of a breakthrough. And I think everything, everything, everything can potentially change.

ARIA:

Thank you so much for being here.

HANNAH FRY:

Thank you, that was so fun. Thank you.

REID:

It was great.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Taylor Forster-Cornes, George Kingston, Irenia Alvarez, Jenna Antonic, Ai Kanno, Natascha Mainz, Joshua Balogun, KJ Arthur, and Sophieclaire Armitage.