This transcript is generated with the help of AI and is lightly edited for clarity.
///
REID
A lot of people can’t afford to go to a therapist!
ARIA
Absolutely.
REID
Right? And so you’re like, well, how do we get something that’s available to everyone? Perfectly fine thing for society one to say, “Hey, look, if I’m going to be adopting society two’s social media platform, I need to understand more about how that algorithm is affecting my society.” The more that we defer this—this kind of future—the more that you cause a lot of human suffering.
///
ARIA
Reid, as AI is progressing and heating up, lawmakers are paying attention. So, the states of Utah and California have enacted rules requiring businesses and government agencies to disclose when they are using AI systems. Utah’s regulation, via its Department of Commerce, mandates that state-regulated businesses inform consumers when an AI system is used in communication or service delivery. Consumers in Utah must be able to ask whether they are interacting with a human or an AI, and if AI is involved in any way, the chatbot or system must truthfully indicate that AI is being used.
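[Note: for a concrete sense of what this kind of disclosure rule can look like in software, here is a minimal, hypothetical sketch of a chatbot wrapper that guarantees a truthful answer whenever a user asks whether they’re talking to a human. The patterns, wording, and function names are illustrative assumptions, not taken from the Utah statute or any real product.]

```python
import re

# Hypothetical sketch of a Utah-style disclosure guard. The regex and
# disclosure wording below are illustrative, not statutory language.
DISCLOSURE_PATTERNS = re.compile(
    r"\bare you (a |an )?(ai|bot|robot|human|real person)\b"
    r"|\bis this (a |an )?(ai|bot|human)\b",
    re.IGNORECASE,
)

DISCLOSURE_TEXT = (
    "You are interacting with an AI system, not a human. "
    "A human representative is available on request."
)

def respond(user_message: str, model_reply: str) -> str:
    """Pass through the model's reply, but never let an
    'are you human?'-style question go by without an accurate disclosure."""
    if DISCLOSURE_PATTERNS.search(user_message):
        # Force the truthful answer ahead of whatever the model said.
        return f"{DISCLOSURE_TEXT}\n\n{model_reply}"
    return model_reply

# Example:
# respond("Are you a real person?", "Happy to help with your order!")
# -> "You are interacting with an AI system, not a human. ..."
```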
ARIA
Also, California expanded its earlier chatbot disclosure law, which was first passed in 2019, to cover law enforcement agencies, requiring disclosures when AI assistants are drafting incident reports. So, these California and Utah laws come on the heels of Illinois, which passed the Wellness and Oversight for Psychological Resources Act. That act prohibits using AI for therapeutic decision-making in mental health care. The law allows AI to be used for administration and supplementary support for licensed professionals, but it’s actually illegal for AI to provide mental health therapy itself. So with three states coming out with prominent AI legislation, where do you think legislation like this is coming from? And what do you think the impact is going to be?
REID
Well, the most general answer, though it’s a specific one, is that people are saying, “Hey, we’re taking this seriously—this AI thing is going to be really important.” And, you know, it’s usually one of the reasons why people tend to be negative on the future of technology. You’ve got, say, 100 people, and two of them are creating the technology. The other 98 aren’t, but it’s affecting the way those 98 people work. They all go, “I don’t want any of this change. I’m perfectly happy with where I am. I can advocate reasons for it—it works just fine. You know, horses are a much better way of doing transport than the car, blah, blah, blah.” And so you tend to get that, and that tends to get reflected in politics.
REID
And so, you know, there’s different cases in these things. I have, generally speaking, been a pretty strong advocate of and positive on transparency. So I think it’s transparency, you know, like the Utah law—like, am I interacting with an AI? The answer should not be deceptive; e.g., it should be yes or no and accurate, or yes in some ways and no in others, but it should be an accurate statement. I think it’s also useful to say, hey, there are certain kinds of reports on usage and how it’s used and what’s going on, either from the tech companies—you know, Copilot, OpenAI, Gemini, Claude, et cetera—or others, saying, here’s where AI is at. And some of that may require a consumer or public disclosure. Some of that may require a kind of audited internal document that a government can request if it needs to, or that is filed with the government. And I think transparency is, generally speaking, very good. That also includes the California law saying, hey, there’s got to be clarity about where and how it’s being used. Now, that being said, I think the tendency will be to say, well, this is the reason why we’ve got to have, you know, licensing programs for hairdressers, for hair cutters, right? And you cannot offer a service cutting hair unless you have a license.
REID
And it’s like, well, it’s unclear why you shouldn’t be able to offer cutting hair without a license. You may have to say, I am licensed or I’m not licensed, right? But it’s kind of the question about regulatory capture and all the rest of that. So there’s the corporations or, you know, trade associations or other things mixed into this. Not knowing the specifics, if I were to hazard a guess at what’s behind Illinois, it’s the associations going, oh, the AIs are much worse than human beings for medical things. So sure, you can use them the way we want to use them, for, like, scheduling our appointments, but no, you can’t use them for talking to people, because it’s just not as good as us in talking to people.
ARIA
Well, Reid, let me actually push on that thread, because I think most people actually would agree that they want therapists to be licensed. Forget hairdressers—I want all mental health therapists to be licensed. And that seems like, oh, that’s probably reasonable. So the Illinois law: what if, instead of banning AI therapy, it just said you had to be licensed? You have to pass a certain standard of care that, you know, a regular therapist has to clear as well. Would that be okay?
REID
That would definitely be okay. Look, I do think that there are cases where licensing is important. But I do think the problem with a licensing regime tends to be this: say there’s an option where you must be licensed, an option where licensing is irrelevant, and also an option where you must disclose whether you’re licensed or not, right? And, you know, too often it goes all the way to the license bucket, accreting over time—just like laws get longer and more complex rather than getting refactored. And I think it’s useful to have them refactored.
REID
So… but I certainly have no objection to, for example, saying, look, we think the following kinds of therapeutic mental care must be licensed in the following ways, and then we’re going to develop a regime for whether or not your chatbot is licensed for this, and we give it some clear standard. Now, the tendency will be a little bit like with autonomous vehicles, which is, well, you should only release the autonomous vehicle when it has zero chance of accidents, even on a road being driven on by other human beings, some of whom are tired, some of whom are drunk, et cetera, et cetera. And you’re like, look, what you want is for it to be much better than the average driver. Because once it’s much better than the average driver, the more you’re trading the average driver for the AV, the more you’re actually improving safety overall on the road—and that should be the criterion.
REID
And so the similar thing would need to hold here: the equivalent of licensing or certifying an AI would be that it’s better than the average therapist. It’s a little bit like, for example, your radiology screen today. If you had to pick between an unknown but licensed doctor and an AI, you’d pick the AI on today’s capabilities, on getting it right. Now, what you’d actually pick would be either the absolute best radiologist or, like, a really good radiologist, together with the AI. That’s what you’d want to have. And so that’s what we should be driving toward. And we shouldn’t be kind of trying to… because, you know, the more that we defer this kind of future, the more that you cause a lot of human suffering. Because, for example, take the mental health thing: currently a lot of people can’t afford to go to a therapist—can’t afford it, right? And so you’re like, well, how do we get something that’s available to everyone? Available to the college student who, you know, maybe has access to college facilities or maybe not, or it’s 11 p.m. on a Thursday, and so on—like, you want to be able to navigate these kinds of things. And that’s really important.
ARIA
We now have this patchwork of AI laws, some of which you think are good and reasonable and some of which you think are harmful. So, Reid, my question is: where do you think state-level legislation like this is coming from? And what do you think the impact is going to be?
REID
Well, generally speaking, state by state tends to be inefficient, because it tends to accrete—like, you’ve got 50 different legal codes you need to get to, you know, or even more as you get to other countries and all the rest. And that tends to impede progress and create weird liabilities. The states might even bounce off each other in various ways and do contradictory things. Anyway, so there’s a stack of stuff there. So I tend to be, like, hesitant on it. But, on the other hand, there’s a feature of the state-to-state thing too, which is, you know, part of what we benefit too little from in the U.S. is, hey, let’s have Petri dishes—which ones are really working? Let’s learn from that and then go. And so some of that I think is good. So it’s not categorically all state good, all state bad. You ultimately want to get to kind of simple things that allow innovation, allow business operation, allow iterative deployment—so you’re learning and getting better and doing that without, like, undue liability issues—in order to improve through it. That’s the target you want, but some state-based legislation can be good within that target.
ARIA
Absolutely. All right, Reid, we’ve all seen this headline: ChatGPT will no longer give health advice or legal advice. OpenAI has introduced new restrictions that prevent ChatGPT from giving personalized medical, legal, or financial advice. They’re repositioning it as more of an educational tool, and the change, effective October 29th, aims to limit liability risks and comply with tightening regulations around AI-generated guidance. I have to admit, I absolutely use ChatGPT specifically for health advice, so I’m not sure how I feel about this change, because ChatGPT will now only explain general concepts and principles, not prescribe medications—I don’t know if it could ever prescribe medications; it could suggest them—write legal documents, or give investment recommendations.
ARIA
Similar limits are going to apply to emotional, legal, and financial questions. So ChatGPT will be able to define terms or processes, but it must refer users to qualified professionals. I think privacy concerns also play a role here, about people sharing personal or sensitive details with AI—could that expose you to data risks if this data is then entering bigger data sets? And so again, this is something I absolutely use. I would never use it as a replacement for a medical professional, but you and I have talked a lot about how second opinions are so important, and that’s actually a great use case for ChatGPT. So I’d say to you: what do you think about OpenAI self-censoring, making this decision proactively? You know, there isn’t federal legislation about this, but OpenAI did this on its own.
REID
Well, I mean, currently what’s happening is there’s kind of a bonanza for a certain sort of trial lawyer, you know, kind of attacking the AI companies, because the presumption, absent legal guidance on this topic, is that they’re infinitely liable for any possible misuse. So, like, for example, if I went to ChatGPT and ordered up a recipe for arsenic and made it and then swallowed it, it would be like, that was ChatGPT’s fault, right? Because maybe I wouldn’t have been swallowing arsenic otherwise—even though, like, I went and asked for it and I made it and I consumed it, et cetera. And so this is a natural way for a business to protect itself from liability.
REID
And it’s naturally then bad for society. Because if you think about it, you know, in the U.S., with 340 million people, the number of people who have access to immediate medical interaction is very few—probably fewer than a million, maybe a million—who can go, I can get my doctor on the cell phone right now and ask her or him about it. And then there’s a larger number that has availability of emergency rooms and clinics and, you know, other kinds of things. But that still probably doesn’t cover, call it, 200 to 250 million of the entire U.S. population. And sometimes that might be a two-hour drive, or sometimes that may be too expensive or not available, and certainly not immediate, et cetera.
REID
And, you know, I myself know people who have saved their friends’ lives because they got an opinion, the opinion was questionable, they asked ChatGPT, and ChatGPT said, take your friend to a different hospital. And the different hospital’s like, you know, glad you got here; you would have been dead two hours later, and we wouldn’t have been able to do anything to help you. And so I think it’s a huge suffering and loss to society, but it’s a natural thing for the companies to do—unless, as I’ve been trying to encourage various governments to do, mostly national governments, the U.S. and Europe and others, you give a kind of safe harbor line.
REID
So, for example, what if the safe harbor line was: say, look, I’m not a doctor, and you should consult with a doctor if you can, right? But if you can’t, or one isn’t immediately available, here’s what I would think. And as long as it said that, then it’s okay, say. I mean, that’d be overly simplistic, but something like that—to give some channel for it for medical, which I think is essential, because you can get help for your parents, your kids, or your partner, or your friend, or your family member, and all the rest. And there are a lot of communities that do not have access to medical care.
REID
The same thing is true for legal or financial—and even educational, you know, could be added in. Though I guess with education there’s no legal certification, so you can’t be sued for it by trial attorneys trying to cash in by claiming, you know, absolute liability for an individual misfire. And so I think a safe harbor channel would be good to create, and it will be essential to elevating what could be great for people in medical, legal, financial, et cetera. And I think it’s, you know, sad that OpenAI has updated its policy, but it’s understandable relative to, you know, how our legal system works.
GEMINI AD
This podcast is sponsored by Google. Hey folks, I’m Amar, product and design lead at Google DeepMind. We just launched a revamped vibe coding experience in AI Studio that lets you mix and match AI capabilities to turn your ideas into reality faster than ever. Just describe your app and Gemini will automatically wire up the right models and APIs for you. And if you need a spark, hit “I’m feeling lucky,” and we’ll help you get started. Head to ai.studio/build to create your first app.
ARIA
I think all of those are examples where, you know, if you need a lawyer, most people don’t have one. They don’t even know who to call, let alone have, whatever, $1,000 an hour to pay someone. Financial advice—many wealthy people have a financial advisor who tells them what to do, but most people who are middle income or low income don’t have a financial advisor. So ChatGPT is sort of the only answer. Could you see a world where, in the future, the medical advisor, legal advisor, financial advisor becomes more like a public utility that’s provisioned by the federal government? I mean, obviously we want the companies doing the tech, but the federal government is ensuring that it’s free for all, or there’s some sort of regulation giving them safe harbor, or something like that?
REID
Well, it’s definitely the kind of thing where I think we should give some channels to allow these tech companies to iterate and deploy and to learn from it, and to potentially make it available within a safe harbor parameter. The medical one is the one I’ve given the most direct thought to, which is, like, I could easily imagine a kind of, hey, look, I’m not a doctor—or maybe even, to the earlier question, I passed the following kind of exam, right? But, you know, I can’t offer medical advice and assume any medical liability. So you should talk to a doctor if you want someone to assume medical liability for talking to you. But if you want the advice independent of that, I am happy to give it to you.
REID
And, by the way, there may still be legal liability. But the legal liability is, say, for example, someone trained an AI to deliberately give people of a race they don’t like bad advice to try to harm them, or something—then you go, okay, that doesn’t fall under the safe harbor, right? Even if you said those words, that doesn’t matter, right? But otherwise there’s a presumption that it will create a lot of value for society and for humanity. And so I think it’s very important to do that. And, by the way, same thing on legal. Most people cannot afford any serious legal counsel. One of the asymmetries frequently—part of the reason why I think cities tend to do more for renters than for owners—is because, well, owners can afford legal help more than renters can. And so you go, well, what’s the issue? Well, if you’re actually giving people more capability through a legal AI assistant, then it’s like, okay, well, what are my rights here? And you might not have to unbalance the scale so severely to try to make it a more human process. Anyway—that, or any number of other things in legal, and then obviously similar with financial.
ARIA
So, we’ve talked a lot on this podcast about the need for global cooperation as it relates to AI, and about really pushing the Western values that we’re interested in. But just recently, China’s Xi pushed for a global AI body at APEC to counter the United States. Xi Jinping used the APEC leaders meeting to push a new global body to govern AI, positioning China as an alternative to the U.S. on AI and trade. He proposed a World Artificial Intelligence Cooperation Organization to set governance rules and make AI a public good, with officials indicating it could be based in Shanghai. The U.S. has rejected efforts to regulate AI in international bodies, and so this sets up a contrast with Beijing’s multilateral pitch. So the question is: if China creates a global AI governance body, does that redefine who sets the moral and technical standards for artificial intelligence? And should we have a global body? This seems like a good idea, and it doesn’t seem like a good idea for the U.S. not to be a part of it. What do you think?
REID
Well, one of the things I think is going quite wrong with U.S. foreign policy right now is that, as opposed to, you know, keeping good alliances, partnerships, and friendships, we’ve got this hostile tariffs negotiation. And if you were trying to drive more of the world into the Chinese trade ecosystem, the Chinese tech ecosystem, it’s unclear whether you could do any better a job of it than we’re doing right now. So I think we have a kind of serious problem there. And this is just an instance of it, because from the things that I read, part of what Xi’s trying to do is say, let’s create an anti-American alliance. And obviously we saw this in, like, the picture with Putin, Xi, and Modi a couple of months ago, and these kinds of things. And I was like, no, no—what, unfortunately, too many Americans don’t understand is that our global position is part of what helped enable American prosperity. And it isn’t just, hey, our soybean farmers used to be able to ship, and now we’re not selling product, and we’re just paying them with public funds versus the economic industry going—which is obviously a terrible thing for the American taxpayer, when the money could be going to other things, like, for example, healthcare or other sorts of things that would be good for the American taxpayer. This is the kind of thing that is creating a problem for future American prosperity. And America should be in a leadership role here. As I punned at the beginning of the year, AI should be American intelligence in terms of how this is operating. And so I think that this is an instance of something that’s going quite wrong with the general policy guidance of the current administration.
REID
Now, should there be a global body? Maybe, if it’s sufficiently lightweight. The problem, of course, when you create regulatory agencies is that they tend to just keep expanding their purview. It’s one of the reasons why, even when things like the U.N. have been created, or things like the International Criminal Court, there end up being places where you’ve got clashes with them, because they’re expanding their purview. And I think that creates some problems. But that being said, if you said, hey, we’re going to have international laws on some transparency about what’s going on, we’re going to have international laws on bioterrorism and security, and we’re going to have international laws on cybercrime—those would all be very good things.
REID
Now, it’s a little bit of if wishes were fishes, because, for example, we should absolutely have laws about how, you know, cybercrime works. We’re literally in a global cyber warfare scenario, and we’re kind of la la la, not paying any attention to it—which, you know, causes all kinds of impact. Like, for example, hospitals or electricity systems being held hostage by, you know, ransomware groups from North Korea looking for payments in cryptocurrency. And so we have that kind of thing going on, and attacks on companies—all of our companies are being attacked by other state actors, including China, in terms of how this operates. And so you go, well, shoot, we should have a global set of agreements on this and a body, and you can’t even do it for cybercrime. So I would love it to happen on AI, but the people who are advocating on AI don’t realize, well, we haven’t even got it for the state of cyber warfare—which is one of the things that AI could accelerate—let alone AI itself. So it’s kind of like, I would like there to be some, but it’s challenging to see how it’d be made.
ARIA
And so, do you think—obviously, nations are racing to control AI’s future, and the U.S. is the same—are we watching the early stages of a new kind of global alignment, one built on algorithms, data sovereignty, and which AI models people use?
REID
Well, I think it would be good if it were being done thoughtfully, with that global alignment. In fact, what you tend to have is a couple of areas in the world that have the strongest global tech development—most notably the U.S. and China—and you tend to have the European Union trying to exert some shaping of it by imposing fines, kind of proxying tax revenue, but without really growing the industry. And having the industry grow would be much better, not just for Europe, but for the world in terms of how it operates.
REID
One of the things is, I go and spend time in Europe, you know, most notably the U.K. and France, and some in Italy. I’ve yet to have… oh, actually, no, I’ve now had one conversation with a German minister who’s like, how do we learn from software in Silicon Valley in order to improve it? So, like, maybe the Germans are finally getting into the game. And, you know, that kind of thing I think is super important and would be good to have happen. And so I think it’s a good thing if we have this kind of general tech development and maybe some, you know, kind of collaboration on principles.
REID
But also, for example, if you said, what would be one simple thing that you wish would happen? Global collaboration on all forms of technology to minimize bioterrorism risk and cyber warfare, right? That would be very simple and kind of a beginning of something to do. And, like, get that done. And then, by the way, we could see what works and doesn’t work, and whether we’d need to iterate to anything else. Now, transparency of algorithms? I think that’s, generally speaking, a good thing. And I think it’s a perfectly fine thing for society one to say, hey, look, if I’m going to be adopting society two’s social media platform, I need to understand more about—and you need to be more accountable for—how that algorithm is affecting my society. And I think that’s a perfectly reasonable thing. But, you know, we’re a ways away from that.
ARIA
Yeah. Again, if wishes were fishes. So I want to take us from the global, geopolitical level to the very interpersonal. A new Penn State study found that ChatGPT-4o performed better on multiple-choice questions when given rude prompts rather than polite ones. And I have to admit, this is personal for me, because my husband and I always say to our kids, the number one most important thing is to be kind. And our kids are going to be interacting with AI, learning from AI. But the researchers tested over 250 unique prompts ranging from very polite to very rude, and they found that rude commands produced 84.8% accuracy, which was four points higher than polite ones. So not a huge spread, but four points can matter when you’re trying to get it right.
ARIA
And so examples of effective rude prompts included blunt phrases like, “Hey, gopher, figure this out,” compared to polite versions such as, “Would you be so kind as to solve the following question?” Related research shows that AI chatbots can mimic human persuasion and emotional manipulation, and may degrade when exposed to poor-quality data—a phenomenon so eloquently labeled “brain rot.” And Penn State professor Akhil Kumar said the findings highlight both the nuance of conversational interfaces and the continued importance of structured input, like APIs, for consistent performance. So what does it say—like, everyone talks about whether AI is human and how we stay human in the age of technology—what does it say about machines, or about us, that they respond better to rudeness than politeness?
REID
Well, it probably has something to do with training regimes that could be improved, roughly speaking. Because, I mean, it is the case right now that if, for example, you put certain parts of a prompt in all caps, like shouting, it responds more to that. There is a thing where AI, generally speaking, is trained to be a little soft; you say, be brutally honest in your answer, et cetera, and you actually get something that’s crisper in terms of its utility. But I do think—even before this, call it baby AI, which is Alexa—I was worried about this, because kids can learn “stop!” and “shut up, Alexa!” as ways of doing this, versus a dialogue, because we do pattern off dialogue. And even though it’s good for us to understand the difference between an AI and a person—like, AIs currently aren’t in the shape that they can be friends, et cetera, something we talked about in Reid Riffs, you know, a few weeks back.
REID
But I think we should have regimes by which the training has good interactions—like, models the kind of things we want with human beings. That doesn’t mean that if you say, “stop, Alexa,” it says, “fuck off.” That’s not the thing. It’s more like, “hey, I would really appreciate it if you were, you know, kind of more sociable and civil in your interactions,” as a way of doing it and staying there—and then producing responses based on the quality of human interaction that we’d want. So instead of, “hey, gopher, be my slave in doing this—oh, it gives me a better answer,” it’s, “hey, I really appreciate the help you’re giving me. Can you help me with this?”
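[Note: for listeners who want to poke at this kind of result themselves, here is a minimal, hypothetical sketch of the study’s setup as described above—the same multiple-choice questions rephrased at different politeness levels and scored for accuracy. The tone prefixes echo the quotes in the conversation, but the questions and the `ask_model` stub are illustrative assumptions, not the study’s actual materials; you would wire `ask_model` to whatever chat API you use.]

```python
from collections import defaultdict

# Tone prefixes modeled on the examples quoted above; the study's actual
# wording and question set are not reproduced here.
TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to solve the following question? ",
    "neutral": "Solve the following question. ",
    "very_rude": "Hey, gopher, figure this out: ",
}

# Toy multiple-choice items: (question, correct letter).
QUESTIONS = [
    ("What is 2 + 2? (A) 3 (B) 4 (C) 5", "B"),
    ("Which planet is third from the sun? (A) Mars (B) Earth (C) Venus", "B"),
]

def ask_model(prompt: str) -> str:
    """Placeholder for a real chat-completion call.

    Always answers 'B' so the script runs end to end; swap in your API.
    """
    return "B"

def run_experiment() -> None:
    # Score each tone variant on the same question set.
    correct = defaultdict(int)
    for tone, prefix in TONE_PREFIXES.items():
        for question, answer in QUESTIONS:
            reply = ask_model(prefix + question + " Answer with one letter.")
            if reply.strip().upper().startswith(answer):
                correct[tone] += 1
    for tone in TONE_PREFIXES:
        print(f"{tone}: {correct[tone] / len(QUESTIONS):.0%} accuracy")

if __name__ == "__main__":
    run_experiment()
```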
ARIA
But it’s obvious the companies didn’t train them to respond to rudeness. So, do you think it’s possible that they can be trained to respond better to kindness?
REID
Oh, that’s easy. I don’t think they were.
ARIA
But why aren’t we doing it then?
REID
Well, look, I think it’s just because it was unclear that you needed to put some of that stuff into the training. By the way, it may take an extra, you know, X hours—like a thousand hours—of different training to make that generally possible, and maybe there is some stuff in the data set that plays out that way. And on the “hey, gopher” thing—I actually think one of the things is that taking different angles in the prompts can produce differences in quality. And so, you know, what they were experimenting with was rudeness as a way of doing that, but it may just be a certain amount of creativity as a way of doing it. Like, I definitely have been, you know, kind of indexing the set of cognitive tools you’d use when you’re talking to an agent. Like, do you say, “hey, take the role of a historian of technology who is skeptical about, you know, technology’s near-term impact on society. Now critique my essay and be as decisive in your critique as possible”? Then you get something better. And that’s actually a good thing. But maybe you need to, in the training, say, “okay, that’ll help,” while softening some of the, you know, “hey, dumbass, give me a better answer to my question.”
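[Note: as a concrete, purely illustrative version of the persona technique Reid describes, here is a small sketch contrasting a blunt request with the role-framed one. The template strings and function are assumptions for illustration, not any particular product’s API.]

```python
# A blunt framing versus the persona framing described above. The point is
# that specificity of role and stakes, not rudeness, sharpens the output.
BLUNT_TEMPLATE = "Critique my essay:\n\n{essay}"

PERSONA_TEMPLATE = (
    "Take the role of a historian of technology who is skeptical about "
    "technology's near-term impact on society. Critique the following "
    "essay, and be as decisive in your critique as possible:\n\n{essay}"
)

def build_critique_prompt(essay: str, use_persona: bool = True) -> str:
    """Return a prompt string for whatever chat model you use downstream."""
    template = PERSONA_TEMPLATE if use_persona else BLUNT_TEMPLATE
    return template.format(essay=essay)

# Example:
# print(build_critique_prompt("AI will change everything by 2030...", True))
```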
ARIA
I’m not going to lie: I can’t wait for the next round of parental controls to be about this, because my seven-year-old only uses voice. And he’s always like, “Please, Mom,” and, “Thank you.” So it’s like, the next round of AI is going to be trained so that if you don’t ask in a nice voice and a nice tone, you don’t get it. And that would be fantastic for teaching kids. That’s, like, the under-18 mode—you have to be kind, with a nice tone, to get what you want. So I’m here for that.
REID
So, in addition to being voice-pilled, you should be civility-pilled.
ARIA
Yes, absolutely. I mean, again, my seven-year-old’s already voice-pilled. He’s on his way to civility-pilled. We’re not quite there yet. Reid, always a delight. Thank you so much.
REID
Always a pleasure.
REID
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.
ARIA
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.
GEMINI AD
This podcast is supported by Google. Hey folks, Steven Johnson here, co-founder of NotebookLM. As an author, I’ve always been obsessed with how software could help organize ideas and make connections. So we built NotebookLM as an AI-first tool for anyone trying to make sense of complex information. Upload your documents and NotebookLM instantly becomes your personal expert, uncovering insights and helping you brainstorm. Try it at notebooklm.google.com.

