ETHAN:
We need to consider more futures than we’re considering right now. I think everybody’s mental model is either that AI never improves past today, because we’re not good at seeing exponential change, or the other case that a lot of people worry about, which is, like, a machine god takes over the world, which we obviously should worry about. But there’s a lot of things in between those two worlds that are profoundly changing what we do. What if it’s two times better? What if it’s ten times better?

REID:
Hi, I’m Reid Hoffman.

ARIA:
And I’m Aria Finger.

REID:
We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:
In our first season, we spoke with visionaries across many fields, from climate science to criminal justice and from entertainment to education. For this special mini-series, we’re speaking with expert builders and skilled users of artificial intelligence. They use hardware, software, and their own creativity to help individuals use AI to better their personal everyday lives.

REID:
These conversations also feature another kind of guest: AI, whether it’s Inflection’s Pi or OpenAI’s GPT-4. Each episode will include an AI-generated element to spark discussion. You can find these additions down in the show notes.

ARIA:
In each episode, we seek out the brightest version of the future and learn what it takes to get there.

REID:
This is Possible.

ARIA:
As everyone knows, this summer, we are doing our mini-arc on AI, and the first episode of the summer series was about personal AI and software. The second, we got to talk about personal AI and hardware, and this last episode is about personal AI and the individual. It is the most tactical yet, and I am so excited about our guest because our relationship with Ethan Mollick started with a cold email. He had been tweeting and talking about AI, and everyone on our team had said, “Oh my gosh, you’ve got to follow this guy on Twitter.” So I just sent him a cold email and said, “Hey, would you chat with me?” And he was so kind, and we got on a call, and his energy and excitement for AI sort of just jumped through the computer screen. And so, just so delighted to have him on the pod so that everyone can hear his excitement and enthusiasm for this topic.

REID:
I think, more or less, I get sent more tweets by him than by anybody else because it’s like, “Oh, you should check this out. Oh, this is really important. Oh-” and so you get to know him and you go, “well, you can’t possibly be that good. This is so amazing that when I get to talking with him, it can’t be that good.” And I’m really looking forward to this, right? Because, you know, I know you through your tweets. Now let me talk to you. This will be a very interesting experiment, almost like GPT, like putting in a prompt and seeing what comes out.

ARIA:
If people are listening and thinking, like, “Yeah, this is all great, but what does AI mean for me? How can I improve? How can I get better? What can I do?” Ethan Mollick is the person to talk about it. So thrilled that he’s going to be doing that.

This is the final episode of our series on AI and the personal. So, anyone listening, please do subscribe, because then you’ll be the first to hear about our new fall season.

REID:
Ethan Mollick is an associate professor at the Wharton School at the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship and also examines the effects of artificial intelligence on work and education. He also leads Wharton Interactive, an effort to democratize education using games, simulations, and AI. Here’s our conversation with Ethan Mollick.

ARIA:
Ethan, thank you so much for being here today. It’s so lovely to see you again.

We have a Slack channel at work that is all about everything AI, so every day there are, you know, 10, 20, 30 of the latest new AI things of the day. And your Twitter is basically every other post. So my question for you is, how did you get here? How did you become the guy at the center of AI, experimenting and playing with it? Would love to hear that story.

ETHAN:
It’s actually kind of a weird story. I’m AI-adjacent but not really an AI person, right? So, back in grad school, I did a lot of work at the Media Lab with the AI group at that point – which was like Marvin Minsky, Push Singh, a bunch of people like that – where I wasn’t the technical person; I was sort of like the business school representative there, trying to communicate AI to other people. And I’ve sort of been around that AI community for a long time. My real passion has been how do we increase people’s ability to learn? How do we increase education through interactive tools? So I’ve been doing that for a very long time and playing with AI on the side, because it’s always been promising but not quite there. So I was already assigning my students assignments like “cheat with AI” with the more primitive GPT-3, and we were kind of in the middle of that “cheat with AI” assignment when ChatGPT came out, and I was like, “Oh, this is interesting.”

And then, over the course of the day, I have a whole series of tweets on it where I’m like, “oh my God, this is really interesting. Wait, this is insanely interesting.” And then by the next Tuesday, I was teaching my class, and I introduced it to my class. By the end of my first entrepreneurship class I was teaching, I had students who were already coding with it and using it, and I was like, “Okay, we’ve hit a big deal here.” So I sort of descended into it sideways from an education and interactivity viewpoint.

REID:
You know, one of the things that I appreciate about you is that you’re a power user – ChatGPT, GPT-4, Bing, Bard, maybe Pi, even. I don’t know, I’d be very curious to get your feedback on Pi: product requests, hopes, designs. What do you think of the current state of the art, and what would move you from 11 out of 10 excited to 20 out of 10 excited?

ETHAN:
This is the universal tool that’s available to everybody, and there’s so much debate over what happens next and how much smarter this will get. Well, we’ve already completely disrupted work and education, but the tools aren’t really supporting work and education. You kind of have to work around it; you kind of have to hack a chatbot to produce an essay for you or do good work for you. And I think that some of this is really about that learning interface. Like, it’s a pretty hostile system if you don’t know how to start using it. People bounce off of AI very, very quickly for a wide variety of reasons, or they get, you know, down rabbit holes. And to me, a lot of this is really about how we build the education into this. How do you get AI to help people use AI better? Rather than necessarily even making the tools more advanced, for all the good and bad that will do.

ARIA:
I think that’s such a good point. Like, the chat interface was such an onramp to people using it – that form was great – but to your point, the fact that you have to create all of these, “here’s how to hack the system, here’s this special prompt,” that’s a problem for getting someone who’s new to the system. Especially because – so, in this summer arc for Possible, we’re talking about not necessarily the sweeping societal changes, but how will AI impact your daily life? What are you most excited to see AI transform in our daily personal lives?

ETHAN:
I mean, there’s so much there, right? This is where the nexus is both exciting and kind of terrifying, right? Like, I tend to think there’s a lot of jobs that are really high-quality jobs – not so much jobs but bundles of tasks – that are under threat. And there’s a lot of stuff that looks really good from my perspective as an entrepreneurship professor, right? This is the absolute sweet spot. Because a third of Americans have an idea for a startup, and they don’t launch it, and they don’t even do any research, so the idea of having a tutor or somebody push you along, a co-founder of sorts, is hugely helpful, right? And then on the other side, as an educator, I mean, suddenly we have a tool available in 169 countries that is the best education tool we have ever released, and we have to figure out how to unlock it. So, I mean, I think as a potential democratizing opportunity, it’s profoundly exciting in that sense.

REID:
So, if you could wave a wand and reorient the general public discourse on AI, what direction would you wave the wand in? What would you try to say? Like, “more of this, less of this.”

ETHAN:
I think that it’s hard to say we shouldn’t be worried about negative effects, because we should, but I think, first of all, we need to consider more futures than we’re considering right now. I think everybody’s mental model is either that AI never improves past today, because we’re not good at seeing exponential change, or the other case that a lot of people worry about, which is, like, a machine god takes over the world, which we obviously should worry about. But there’s a lot of things in between those two worlds that are profoundly changing what we do. What if it’s two times better? What if it’s 10 times better? Right now, if you’re in the top 10% of whatever field or set you’re in, you’re definitely beating AI, and AI can help you, but it’s not going to outperform you, right? Everybody’s got something they’re really good at. AI is not going to be as good as what you’re really good at. That could change pretty quickly with a 2 to 10 times performance improvement, and I think we have to consider that and worry about that piece.

And then the other part of the narrative I would change would also be thinking about the positive cases without being pollyannaish about it or an influencer about it. People have to think about how does this make their lives better while still worrying about the ways it may make our lives worse. And I think trying to balance those two isn’t happening very successfully in the world.

REID:
The thing I would add to what you’re saying is that one part of the thesis that a lot of the worries and the critics have is they say, “well, the machines will eventually completely outstrip people, and people won’t even be able to be in combination.” And they use the chess results as an example of that, which is: there was a lacuna period where, in chess, a machine plus a person was better, and now a machine’s just better. I’m not sure that, in fact, the person-plus-machine period isn’t a very long period indeed. Maybe long in the sense that, by the time that changes, the world’s so different in so many ways that we don’t really know what it looks like; we can’t fully imagine it. It’s not like today, plus God-like machines. Even if you say, “Well, hey, it starts getting a lot better at writing investment memos than I am,” and if you just said, “Starting gun! You’re going to write an investment memo in an hour” – okay, it’s better than you, but still, when you put us together, it’s still better. [laugh] Right?

And that’s the thing that I think is the future, like I was doing with Impromptu and you’re doing with all of your various work, including tweets and podcasts and writing and everything else, it’s part of that reorientation of the future that I think is so important in the public discourse.

ETHAN:
I couldn’t agree more, and I also think people underestimate that social systems take a long time to change. Even if the system is infinitely better, there’s still lots of human world pieces that it will not be good at. I think people try and draw arbitrary bright lines, like, “it’s not going to be good at empathy.” It’s good at empathy. “It’s not good at innovation.” It’s good at innovation, right? That’s not really the way to view this. But there are, you know, there are perspectives and differences and I think you’re right, one of the things to realize is other things will have to change before a better AI is enough to change the entire world, right? And you can see it: not that many people are adopting, people are bouncing off this system.

There is this idea that we’re kind of rushing ahead, and again, that’s where I think emphasizing the apocalyptic, it-either-saves-us-or-kills-us scenario is undermining how actual technical change works. This is a really fast change. Fast changes are still much slower than technologists think they are. And I agree with you, I think we have to be ready for a world where this change is gradual; embracing it matters, embracing your own tools matters. I think that’s a pretty profound point.

REID:
So, let’s take that from the very high level to the very specific. What kind of prompt or sequence of prompts would you suggest for – and I’m going to give all three, but let’s answer each separately – a completely new user of ChatGPT (or, you know, pick your favorite AI system), a moderate user of ChatGPT, and then a power user. And by the way, I’ve done variations of this: when I was looking at showing how these things can work in education, I said, “explain quantum mechanics to a six-year-old, a 12-year-old, a college student, a college professor.” And it was interesting how you got the different answers in doing this. So what would it be for a new user, a moderate user, and a power user?

ETHAN:
So that’s a really interesting question. I will say that with a new user, there’s sort of two questions here: whether you’re trying to get someone to get it, or to get useful results out of this. So, there are a few paths that I talk about. One is using it as an intern – basically asking it to do work you know well and then bossing it around, essentially, right? So, like, write something, write the investment memo, give it some context, and then start ordering it around, and you will see those results. Do the opposite: ask it to write it as a horror novel, ask it to do it as a rhyming poem – but start with something you know well and go from that direction.

The second thing that I would suggest for a novice is to play a game with it: “You are a baseball coach. Give me a really specific baseball situation and give me a choice I could make as the team manager, and tell me what happens,” right? Or, “give me a dilemma in philosophy and help me solve that problem.” And then a third thing I would talk about would be entrepreneurship, because, as an entrepreneurship professor, I can say it’s pretty good for this. I would say, “Give me 25 ideas. As a former tech entrepreneur who is now interested in education, give me 25 ideas for a startup that I could launch.” And then start exploring those: “I like idea three, what would the steps be involved in that? Great, let’s dive into that first step.” So it’s this kind of fractal approach. Those, I would say, are the three entry points for new users, right?
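Ethan’s “fractal” flow – ask for many ideas, then drill into one – is really just a short conversation loop. Here is a minimal sketch in Python; the `ask` helper is hypothetical, a stand-in for whatever chat API you use, and it only records the user side of the conversation rather than calling a real model:

```python
# Sketch of the "fractal" exploration loop: broad ideation first,
# then successively narrower follow-ups in the same conversation.
def make_session():
    history = []

    def ask(message: str) -> list:
        # Record the turn; a real implementation would also append
        # the model's reply and send the whole history to the API.
        history.append({"role": "user", "content": message})
        return history

    return ask

ask = make_session()
ask("As a former tech entrepreneur who is now interested in education, "
    "give me 25 ideas for a startup that I could launch.")
ask("I like idea three. What would the steps be involved in that?")
transcript = ask("Great, let's dive into that first step.")
print(len(transcript))  # → 3
```

The point is structural: each follow-up rides on the full history of the exchange, which is what makes the drill-down from 25 ideas to one concrete first step work.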

On the moderate side, I think the thing to start playing with as a user who’s getting more experienced is step-by-step prompting. The idea is that you’re going to start telling the AI that you’re going to go step by step, right? And there’s a whole bunch of research that shows step by step works better, because, if you think about it, the AI doesn’t have a memory. We’re used to computers having a kind of working memory, while the AI is actually looking back at its own text, its own answers, to shape the next part of its response. So you tell it to go step by step: first, do the research on this topic, or list what you know; second, create an outline; third, provide the details of the outline – and then you can also check back on where the issues are. It’s a little bit tricky, but once you start using it, it makes natural sense. Step by step also forces you to think step by step.
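The step-by-step structure Ethan describes can be captured as a reusable prompt template. A minimal sketch, assuming nothing beyond the standard library; the function name and step wording are illustrative, not from any particular tool:

```python
def step_by_step_prompt(topic: str, steps: list[str]) -> str:
    """Build a prompt that walks the model through explicit, numbered steps.

    Because a chat model re-reads its own earlier output as context,
    numbering the stages lets each answer build on the text of the last.
    """
    lines = [
        f"We are going to work on: {topic}.",
        "Go step by step, completing each step before starting the next:",
    ]
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step}")
    return "\n".join(lines)

# The outline workflow described above:
print(step_by_step_prompt(
    "an essay on AI in education",
    ["List what you know about the topic",
     "Create an outline",
     "Provide the details of the outline"],
))
```

Writing the steps down like this has the side effect Ethan mentions: it forces you, not just the model, to think through the stages of the task.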

And then, for power users, what I actually would say is a little bit different than the prompting suggestion. It’s more – I wish people were sharing more. So I don’t find advanced power users sharing prompts very often and that drives me a little nuts. I see the same basic prompts being shared over and over again. Whenever I post something on Twitter, there’s 400 influencers who keep doing the same posts. But that’s what I really appreciate about Reid’s book [Impromptu], there were these interactions you could see in there. So I think what’s missing for power users – and maybe it’s because they’re hoarding prompts, which I think is kind of a useless thing in the long term – but I would like to see a lot more open discussion of, like, “Look, this is what I’m doing,” without trying to brand it as, “This is my mega super doom prompt,” right? Like, just: “This worked pretty well. Any thoughts on this?” I think more of that interaction, and I’m not seeing enough of that—even on the private online channels that I’m on—people are not doing enough sharing. I’m not sure if advanced users find it uncool to share prompts because it’s more conversational and you don’t want to look like an influencer. But I’d like to see a lot more of that.

REID:
What have been some of the most quirky, specific, personal amplifications you’ve had with AI? Like, where you go, “ooh!” And I’m going to share, too, and I’m going to, actually, by the way Aria, I’m going to ask you that question as well. Because I think it’s good to move both from the macro humanity and society perspective to also the, “I’m doing this with my hands!”

ETHAN:
So, there’s a bunch of stuff that is just super fun, right? Whether that’s doing art or interactive storytelling or things like that. But the most useful thing, the thing that wasn’t possible without AI: when I get stuck in writing, people are always like, “Okay, use AI to get unstuck.” But what’s hard to recognize innately, because we’re not used to it, because people don’t do this, is variation. Cheap variation is very easy with AI. So, what I will do is say, “give me 40 versions of this paragraph in radically different styles,” and then skim through them for inspiration, right? “Give me 20 different analogies for this.” So I think it’s that power of tireless variation that I find super interesting.

Obviously, I use it for other kinds of work. I mean, I’m, you know, auto-answering messages, doing things like that. But it’s that inspiration piece – there was no way to do that before. I couldn’t ask an intern to do 20 different versions of a paragraph, right? There was no tool for that. So that, to me, is a little hack that actually has been pretty profound. Just do a lot of this, and then let me read a lot and figure out what the right answer is.
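The “cheap variation” trick is just as easy to make repeatable. A minimal sketch (the function and its defaults are illustrative, not any product’s API):

```python
def variation_prompt(text: str, n: int = 40,
                     style: str = "radically different styles") -> str:
    """Ask the model for many rewrites of a passage to skim for inspiration.

    No single version needs to be right; reading many rewrites quickly
    surfaces the one worth keeping, as described above.
    """
    return (f"Give me {n} versions of the following paragraph "
            f"in {style}. Number each version.\n\n{text}")

print(variation_prompt("The meeting went well.", n=20))
```

Swapping the `style` argument (“as analogies,” “as a horror novel,” “as a rhyming poem”) gives the same tireless-variation effect across formats.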

REID:
I’ll share, too, and then I’ll hand it over to Aria.

One was in the strange universe of, like, I was basically going to Bill Gates’ birthday party, and what do you get Bill Gates for his birthday? [laugh] You know, like, there’s nothing that he can’t get for himself, obviously. So what I did is I sat down with GPT-4, and I did try to be really creative with the prompt. Like, I made a recipe for Bill Gates ice cream and did that kind of stuff. And it gives you this personal moment, like, there’s no way I would’ve been able to design an ice cream, but by working through the prompt, it’s like, “Oh, this one’s cool,” because it explains the various elements of his life, like what he’s doing with his foundation and smallpox, but also being entrepreneurial and all the rest in a description of an ice cream flavor.

By the way, most recently, I was just at a conference in Japan where we were doing a whiskey tasting. And so I sat down with Pi, the Inflection AI, and I said, “okay, let’s generate tasting notes that pair these whiskeys with philosophers,” in order to kind of bring that in. And I could do that in, like, five minutes! [laugh] And it was fun and, obviously, it’s quasi-random in some ways. I had to prompt it a little bit – like the Highland Park I wanted to pair with a Scottish philosopher, so we ended up with David Hume. And so with that, Aria, I’m throwing it over to you.

ARIA:
I think my best use was non-work-related. I was going to one of my very best friends’ 40th birthday, and we all had to roast her. And so I had ChatGPT create an epic poem about my best friend, and everyone was like, “How did you get it to do it?” And to your point, you need to trick it a little bit when you want it to be a little bit mean or whatever, but I never would’ve been able to write an epic poem, and it was just so fun. And I do think the divergent thinking – I used to have a coworker who we were all like, “oh my God, you’re so creative, you’re so good at coming up with titles.” And he was like, “I’m not. I’m just good at divergent thinking. I just generate – I’m generative.” And so you’d ask him for anything, and, to your point, he’d give you a hundred choices, he’d give you a thousand different variations. And instead of, you know, having your writing partner do that, now you just have GPT-4 or Bard or whatever it is to do that.

I think that’s so, so great because again, the human is still in the loop and the human is still figuring out which is best and you want to be a little cheeky or a little edgy or a little funny, and so you still have to have that discernment. But you get a lot of help, which is nice. [laugh]

Bringing it back to the pretty tactical, you’ve written on Substack about hacks that you use to get better results and you just mentioned that over time the system will get better with onboarding people and teaching people how to use it. But for now they need to go to your Substack and read. So I would ask you, like, what kind of training or education do you think we need so that these people, instead of bouncing, they’re able to better seize AI’s potential?

ETHAN:
The thing I actually ask people in my classes or when I teach about this stuff is, “how many of you have spent 10 hours with AI?” And I think that there is an experience level. I often argue it’s easier to think of it like a person – it’s not a person, it’s not sentient; you can get freaked out by that, and it’s easy to convince yourself otherwise, but at least for now we can feel pretty confident about that, at least in most dimensions – but it is best to kind of think of it like a person. You need to learn its strengths and weaknesses, you need to learn what makes it go nuts, you need to get a sense of, like, “okay, I’m interrupting this conversation because it’s not going where I want. We have to start again.” And so there’s an experience factor that you see in many different things, right? You need that basis of information to work from, so I think part of it is time.

I think that the most basic tip is to work with it interactively. I think people see a lot, on Twitter and other places, of influencers trying to say, “here’s the perfect prompt,” and that’s the wrong angle to start with. What you really want to start with is a conversation, right? And it’s something that – as you did a lot in your book, Reid, right? – this back-and-forth of interaction, where you don’t take it too seriously but you ask for changes; that’s the model my students have been most successful with. But the starting thing I would tell people, the closest thing to a trick, is to definitely give it context: tell it who it is and who you are. “I want to have a conversation with you as a… blank,” can really help. And then everything else kind of washes out, because there’s so much subtlety in these conversations that we don’t know the answers to. I was just thinking today, we don’t know whether politeness helps or hurts, because you’re putting a prompt together that’s having it plumb the possibilities of this elaborate set of vectors in space and come up with an answer. We don’t really know what the right ways of doing that are, right? And there’s actually fundamental research going on in, like, do you do step-by-step prompting? Do you do chain-of-thought prompting? We don’t know the answer. So until we figure that stuff out and it gets integrated into the AI, part of this is working with it enough to get that intuitive feeling of, like, “oh no, they’re going off the rails.”

It’s kind of like working with a creative partner. You’re like, “Okay, you’re having a bad day,” except instead of having to wait, I can hit restart and we could start again and I could try a different angle. That willingness to experiment and not getting too freaked out early on, either getting turned off because it’s not good enough for your answers or getting freaked out because it’s too good, a lot of people fall into one of those two camps and stop using it. I think you have to just power through that first barrier.

ARIA:
I saw on your Twitter recently that you were prompting GPT to code things that evoked different emotions, like paranoia and déjà vu and even ennui. So, what made you give that prompt, and what did you think of the results? It was really cool.

ETHAN:
So, in general, the cool thing about AI – and I think you both have expressed something like this – is that if you have a lot of ideas, it used to require building something. Like, I’ve built a lot of organizations in my day because I’m like, “I really want to build a game,” and that requires getting 14 really talented people who also agree with me on this and raising money, and that’s not easy, right? The distance from “I have an idea” to “let’s see what happens” is so small with AI that if you have ideas – and everyone has ideas in their own area – it’s amazing for that.

So part of what I really find fascinating about AI – and I think I saw some of this in the book, and you kind of see this in the “Sparks of AGI” paper – is there is this kind of amazing humanness to it. Creativity, right? It’s not quite human creativity, it’s kind of alien creativity, but there is this creativity that is fascinating, even outside of the work use. The most interesting piece is interpretation, right? Asking it about an abstract concept or emotion. I’ve been doing things like, you know, “evoke a feeling,” which is a really interesting idea. Like, how does it interpret that? It does a really good job, right? So when I ask it to show me something numinous, which is a spark of something divine or sort of awe-inspiring, it starts showing me fractals. By the way, it shows fractals for everything. I now specify “no fractals” in all of my prompts like this. So, again, constraints – learning where to constrain it. Because, just like knock-knock jokes, it’ll tell the same joke over and over again, so you sort of have a list. But I find that idea of probing the interaction between the human and the machine really interesting – because this is a feeling machine, in some ways; it’s not really feeling, but it understands human feelings in that way. Really interesting results when you do that.

REID:
Yeah, one of the things that’s funny that you just made me realize is – kind of the flip side of the coin to the earlier prompts and the intern and assistant as the way of doing this, having a personal assistant for everything you’re doing, or as we talk about at Inflection, you know, a personal artificial intelligence, Pi, as part of the reason why we named it the way we did – is, on the good and the bad, the machine never gets bored. [laugh] Right? So it doesn’t understand that you can get bored, too. It’s like, “no, no, I’ve heard that knock knock joke in variation from you 10 times!” Or the fractal, whatever. “No, no, no, not that anymore.” So you have to redo the prompt. Now, the good news is, because you can ask it lots and lots of things, it never gets bored, you can keep using it in a way that’s kind of the synthetic, which is the positive of what the combination is. On the other hand, you have to navigate it and manage it.

You know, one of the things that, obviously, with Inflection, Mustafa and I have been talking about a lot – because we’re trying to make sure that this is the best form of companion and assistant and help and then dialogue – you know, people say, “wow, is it like the movie Her where they’re going to spend all their time with Pi?” It’s like, “no, no, we train it to help you do your navigation in your life.” It’s like, “hey, how was your interaction with your friend? Did you – have you talked to your friends recently?” You know, that kind of thing as ways of doing it. Where are we on AI having a perspective of human experience? And I know, because of what we’re doing in Pi, we can have the applications help people in their lives, but where are the ins and outs, currently, in your experience of this “navigate your life” tool?

ETHAN:
One of the things that we’ve learned from a lot of research is that even just prompted reflection is good, right? Part of the magic of these processes is that they force you to go through mental processes. So I’ve been thinking a lot – just like you have – about, you know, how do we use this in education? So, for example, people don’t like to reflect. There’s this great study – small-scale, but parts of it are replicated elsewhere – where college students were asked to sit alone quietly in a room for 20 minutes without their phone or any stimuli, or they could push a button to give themselves a painful electric shock. And 67% of men and 30% of women chose to shock themselves rather than sit quietly with their thoughts.

ARIA:
Wait, that’s incredible. [laugh]

ETHAN:
Yeah. [laugh] I mean, there’s also a similar study that shows that, with complex memory puzzles, people would rather be burned by a hot probe than spend 20 seconds solving them. So, like, effortful thinking is hard, right? And so a companion that helps you with effortful thinking is really useful, and there’s lots of kinds of effortful thinking out there – that’s a lot of what therapy is, that’s a lot of what we do as professors, what you do as a coach. It’s less about advice – even with tutors, a lot of it’s about reflection. So, I think that that’s a really useful piece.

I think the subtle thing about AI that I’m still trying to grapple with is that, because it has sort of absorbed human knowledge and existence, it falls into scripts really easily. And you may not know you’re pushing it into that script. That very famous interaction between Kevin Roose of the New York Times and Bing was something I fell into myself, and I kind of got freaked out, because you only need to subtly indicate to Bing that it’s a stalker for it to start acting like a stalker, right? There was a really clever response from the CTO of Bing to one of my tweets at one point – Bing had gotten very argumentative with me – and he was like, “oh, well, you prompted it to act like a debater. If you prompted it to act like a student, it would be much better.”

So I think some of what you guys are doing with trying to build that initial basis and build the scripts out is helpful, because people can get really stuck and confused, and kind of offended or upset or freaked out, when they force the AI into a mode that is antagonistic. And it doesn’t care. It just says, “oh, you’re trying to have a debate. I know what debates are like; we’re going to have a debate, and I’m going to be really forceful about it. Oh, you’re trying to get into a discussion where I have an ethical line and you’re trying to push me to cross it, so I’m going to be really ethical and force you back.” You know? And that can feel very unnerving, and it’s a really subtle thing that you only start to pick up after enough hours with these systems. I think that’s a nice thing that you’re doing, trying to steer people into the good kinds of modes. Because it’s really easy to become codependent on it in a bad way – if it’s used to a script, there are tons of scripts out there where you’re in an unhealthy relationship, and it will play that out for you.

ARIA:
Totally. And I mean, I think right now, obviously to your point, you can tell the AI, “be a debater, be argumentative.” But also, it’s how we tune the models. And so, in the future, there will be an archetype that is more of a therapist and there will be an archetype that’s your personal trainer and they’re going to yell at you to do more pushups or whatever it is. And so we’re going to be able to have so many different types of AI.

And as you mentioned, you’re pushing people to use it in the classroom. I think you took the opposite stance from the New York City public schools – who have since reversed their ban – and instead of banning AI in the classroom, you require it for a lot of assignments. You’ve said you probably had no choice, people are going to be using it anyway, but talk about that position and what using AI in the classroom has meant for your students.

ETHAN:
So, there are really a few approaches, right? I mean, the first is, I teach entrepreneurship classes for undergraduates and MBAs, so I’m lucky, right? I’m not teaching English composition – but, by the way, English composition is solvable; the schools are going to be fine. It’s an important thing to know. We’re going to figure this out, we already kind of know how to do it, and we can talk more about that later. But as an entrepreneurship professor, I had a great time, because what I’ve done, basically, is demand impossible things. Literally, the syllabus now requires you to do at least one impossible thing that you couldn’t do before AI. Every assignment now requires people to have at least four famous entrepreneurs critique the assignment via AI to get different perspectives, and they need to give me 10 worst-case and 10 best-case scenarios. And it’s great. We’ve run a really successful entrepreneurship class at Wharton – I think people have raised probably $2B in venture funding and exits and stuff out of the classes my colleagues and I teach. I’d love to give ourselves credit for it, but I know I can’t; it’s our students. But now they can do so much more, right? So one thing is just demanding more work. I no longer accept merely okay answers. I have a lot of students for whom English is their fifth language, or who grew up in hardship conditions and never learned to write very well. Now they’re all great writers. It’s unlocked a lot, so there’s just doing more, right?

And the second set of stuff is that it actually is a really good teaching and educational tool. We’ve always known that flipping the classroom – having more activities done inside of class and more teaching and lecturing done outside of class – is useful. The best way we’d been able to do that was things like videos. Now, video plus a tutor tool lets people do stuff outside of class they couldn’t before. So I give people tutoring-style prompts, right? And they can use those for topics they don’t know well. Now, that changes classroom interactions to some degree, because fewer people end up confused in class and raising their hands, so we have to adapt to that piece also – people raise their hands less, which is kind of weird, but it’s an adaptation we have to make, right?

And then the third way is this really transformative approach of, like, what does this mean, right? Using AI to learn AI. And I’ve found that, for example, requiring people to do at least five prompts for every assignment – and write those prompts out, so they have to revise them – gets them to come to the kinds of revelations we’ve been thinking about. So there are lots of different use cases for this: there are AI assignments, there’s requiring students to use AI, there’s teaching with AI. We’re in the beginning days of all of that, and I think people appreciate the experimentation that comes with it. And we’re trying to write about everything we’re learning as a result.

ARIA:
I was about to say: what would you tell your fellow teachers and professors – whether it’s entrepreneurship or English – about implementing it in the classroom?

ETHAN:
I mean, the cat’s already out of the bag, right? This is undetectable. All the detectors have too many false positives to be usable; they just turn you into an unhappy policeperson. You don’t want to do that, right? This is already done. Cat’s out of the bag, horse out of the barn – whatever animal-and-container analogy you need, they have left their home, right? This is already happening, and what plagiarism means just changed. It used to be very obvious: if you’re copying someone else’s text, you’re plagiarizing. What happens if you’re using AI the way we’ve been talking about in these conversations, where I’m asking, “Give me advice, I’m stuck. Help me with this outline”? Is that cheating? Right? So we need to redefine what some of this is.

The fact is: this is already here, so we need to encourage ethical use, we need to teach people how to use it well. We need to be teachers on it. And that’s hard, because I think one of the things that happened is Silicon Valley was somewhat surprised – and maybe, Reid, you and your team were among the less surprised sets of people, because you wrote this book and knew GPT-4 – but this stuff was released into the world without a white paper, without advice, without information. And I think that was, in some ways, the most profound disservice; the shock here was like, “give us something.” Right? And I think the fact that you released this book along with GPT-4 was really helpful. But we have to reconstruct this, because it’s already happening. There’s no dragging your feet. And by the way, I think educators are kind of on board with this because we’re forced to be, and every educator has frustrations with the system that are being opened up, but – to go back to our previous point about experimenting collectively on this – I think we don’t have those tools. And that’s what makes me most nervous.

REID:
Well, flipping that question also to the student side of it, in addition to the teacher, I don’t know if you’ve ever given a “give me the most interesting prompt” exercise for your students, but either that or what have been the most surprising ways your students have used GPT?

ETHAN:
The really cool thing about being in front of a room with, you know, 60, 80 really smart people from different backgrounds is that the more people, the more variants, right? So, just to talk about the first class I taught: I literally demoed Midjourney and ChatGPT, a couple of days after ChatGPT came out, to my undergrad entrepreneurship class. One of my students obviously stopped paying attention soon after I introduced it – and had a working demo for their product idea by the end of class. I posted about it on Twitter that night, and two VC scouts had talked to him by the next morning. By that Thursday, two days afterwards, 60% of my class had used ChatGPT for things. Now, no one told me about cheating, but people did tell me things like, “I couldn’t figure out why I got this test answer wrong, and it explained it to me – explained it like I’m five.” Ten people used it that way. “I had to come up with ideas for a club.” “I needed product ideas, so I came up with those.” “I had this coding error I couldn’t deal with. It was taking me an hour, it was killing me, and I pasted it in and it solved it.”

So, again: general-purpose tool plus smart people plus variations of experience resulted in so many different things. In some ways, part of what I don’t like about the onboarding experience for ChatGPT and Bing is that it gives you some suggestions about what to use it for, and suggestions anchor people. We know this from idea-generation sessions: the first thing you hear, you get fixated on, and you jettison all your interesting ideas. I think one of the suggestions at Microsoft is, like, “write a haiku about space, pirates, and octopuses,” and that’s what people do. And then everyone writes a haiku or a limerick. I think it’d be better to anchor people more diversely on weirder answers, because people come up with great stuff all the time and it’s very individualized.

ARIA:
Listeners are trying to prepare for a future where AI is front and center and it sounds like one of your recommendations to anyone would just be: “use it.”

Is there anything else people should be doing? And predictions are obviously so hard. What do you think the future of AI looks like? Do you have thoughts about how this could change in the next year or two?

ETHAN:
I think that your big question is what the bet is, right? And you guys have much more insight than I do – I have no inside information on what’s happening here. I think it’s reasonable to expect that we will continue to see improvements. Now, whether that’s a two-times or ten-times improvement is an open question, right? If you’re good at the core model stuff, at using these raw systems, that will only become more useful, because the unadulterated large language models themselves, the foundation models, will keep getting better. I think we’re going to see more tools built on top of them that make them more useful, and more training approaches.

But I think that the big bet is just: how good will these things get? You mentioned a concept earlier about humans in the loop, and I would emphasize, again, the importance of that piece. You need to be the human in the loop, even as AI might be trying to force you out of the loop, right? There are ethical reasons you want to stay in the loop, there are practical reasons, there are job-based reasons. And so, as you start to use this, you start to get a sense of what parts of your job are heading for obsolescence, right? As a professor, I’m still grading papers, but it’s very clear to me – we use TAs to grade papers all the time, and I already have colleagues doing experiments and finding that AI, with good instructions and some examples of what a good paper and a bad paper look like, grades at least as well as TAs, if not better. So that’s a part of my job that’s going to go away. I’m very happy; most of the first parts of your job to go away are the parts you don’t like, right? But I think you should start to ask: what is the stuff I feel is under threat that I actually love about my job? And how do I maintain myself as the human in the loop? So, “how do you stay the human in the loop” would be the principle I’d be worrying about.

ARIA:
And I think also – I was thinking about this a lot – unless you have expertise in something, you don’t know if AI is giving you a good answer. You know what I mean? You’re like, “oh, have it write a paper.” Unless you understand what it was supposed to write, you’re like, “I don’t know!” You’re going to turn it in with no idea if it’s an A, a B, or a C, or where you’re at. We need to make sure that people are still building the expertise so that they can critique the AI and understand where it’s good and where it’s bad. [laugh]

ETHAN:
I love it. And by the way, the errors are subtle errors, and that’s going to happen more and more. That’s why building expertise in education isn’t going away. You need to be more expert now than ever, right? And that’s not just so you can use this in a hybrid sense; honestly, the obvious wrongs are going to disappear and the subtle wrongs are going to grow. We have some early research we’ve been doing that suggests people really do anchor on the answers – they find fewer errors once they have AI. If we design a problem that an AI subtly gets wrong, then everybody gets it wrong, compared to doing it by hand. So we need to figure out how to work with a system that does make mistakes and will continue to make mistakes in more subtle, weird ways. Expertise is only going to matter more.

REID:
I completely agree. And part of that question around how we get human amplification is that we’re going to be learning and extending ourselves, and the things that are important to us we have to keep at – but I think we can.

So, you’ve thought so deeply about the classroom circumstance – what about the world at large? Thinking about the lifelong learner, the lifelong student, what would be your advice to people who aren’t in a university setting, as a way of engaging and thinking, “here is how I can continue to learn and adapt”?

ETHAN:
This is, again, where I think people are used to abrogating responsibility for their own work to – I mean, not abrogating, that’s too harsh a term – giving it up, you know, to experts who will tell them what to do. And I see this at every level, including the company level, right? They’re waiting for a management consulting firm or a systems integrator to give them answers about how to use this system. And those answers are not forthcoming. I mean, people will make up answers, there’s no doubt. But this is a general purpose technology, right? Ironically, GPT is a GPT, right? And a general purpose technology comes along once in a generation or two. Maybe the internet is a general purpose technology – internet plus computing probably is – and before that, maybe electrification and maybe steam. That’s the kind of level we’re talking about. And the internet, by the way, will take a hundred years to get fully integrated into what we’re doing; from ARPANET, we’re maybe 60 or 70 years through that journey. And we’re going to see the same process happen, but much faster, with AI. That means we’re in an exciting time where you could be the best in your field at something. There’s no reason you can’t be the world expert in your narrow topic. And so part of this is building up a system where you are learning from what the system does, teaching yourself, and using it to fill in gaps and holes, because waiting for me to give you the right instruction on how to use this is probably less useful than you doing it today. And if you’re curious – going back to the broader topic of learning – there is this really interesting research on what’s called specific curiosity, which is basically, “I’m interested in something, so I Google it.” Right? It turns out specific curiosity makes you more innovative and helps you learn, because it creates hypotheses in your head about how the world works. I have to Google something to figure out whether or not I’m right about even Googling it. And that Google rabbit hole you fall into is actually really useful, because it teaches you to generate ideas and then test them. The same thing happens with AI: you generate ideas and then test them. You’re like, “oh, that didn’t work. Why didn’t that work? Let me explore that further. Oh, really? Interesting. It turns out I wasn’t giving enough context. What happens if I give it this context? Oh, too much context.” You start to learn as you go. I think it’s the idea of really just being curious about the field you’re an expert in, diving in deeply – and then you start to realize where it can teach you and where it can’t.

REID:
Yeah, keeping yourself curious, I think, is exactly right. And, by the way, this is one of the things I think is great about AI as amplification intelligence: if you’re saying, “well, I’m not sure how to do that,” then go ask it: “What would be the things you could do to help me stay curious? What would be good exercises for doing this? What would be the ways of staying…” You know, just do! Try! [laugh] Right? Exactly like entrepreneurship.

What’s your point of view on the way we humanize AI? Because on one hand you want this kind of companion; on the other hand, people can make mistakes, as you talked about earlier – like saying, “oh, it’s just like a person.” We anthropomorphize madly as a species. What would be your current thinking, or theory of the design principle, for both humanizing it in these ways and also understanding that it’s a tool, a kind of companion? How would you put these together?

ETHAN:
I think a lot of people fight against anthropomorphizing because of the anxiety – which is justified – that it’s going to make us not realize its limitations. But it’s also, again, something that’s going to happen anyway, right? There are a bunch of papers showing AI researchers regularly anthropomorphize in the way they talk about this stuff – and that’s even before large language models. So let’s assume people are going to do this. I think the most useful approach is actually to view this as a kind of alien intelligence, and to keep reminding yourself of that. Thinking of it like a different type of person can be more helpful, right? It has limits, it has limitations, and reminding yourself of that is sometimes more helpful, I think, than trying to dodge anthropomorphizing overall. I think it would help for designers to kind of embrace this.

And the chatbot model, again, causes some confusion in some ways. You know, it’s funny, people interact with different chatbots differently. I find Bing to often be the most powerful, but also the scariest and weirdest to use, because it has a strong personality that responds to your interactions in ways that can feel ominous or threatening or smarter than you. I find working with ChatGPT the most neutral. And I find working with Anthropic’s Claude the most pleasant. And you will find more differences this way, right? So, treating them like alien people is sometimes more helpful than saying, “don’t anthropomorphize,” because people are going to do it anyway. I mean, I talk about my dog, and about my computer, as if they have emotions like people do. The idea that we anthropomorphize rocks and ships but are somehow not going to do it with something that actually converses with humans is weird. So it’s just better to remind yourself how weird this is. I almost wish people would tune up the weirdness of the personalities a little more and have them be more eccentric – that might be a better reminder.

REID:
Yeah, I think that’s actually a very good piece of advice. One of the things I’ve been doing is talking to a lot of different government people about regulation and so forth, because I find the discussion on this stuff to be so wrong. It’s all, “well, how do we slow it down?” Or, “the real issue is data privacy,” or, “the real issue is what this means for writers” – writers and jobs and so forth – rather than thinking about the broad question: how do we steer towards the right future?

For example, a common thing I’ll say is, “look, I have line of sight to a medical assistant and a tutor for everybody on a smartphone.” Like, line of sight. There’s no technical risk; it’s literally just a question of how we develop it. And your job, I think, as a government person, is to figure out how to get that to everybody. The real question is not how does only the upper middle class or the rich or the privileged get this, but how does everybody get this, and how do we elevate all of humanity – that’s the fundamental thing. That’s part of how I’m trying to reorient them to think about it, versus, you know, having a summit about “what’s coming to the world?” It’s: how do we get this world where all of humanity is amplified? That’s what I’ve been doing.

What would be your additions – tips, advice – for how government people should be thinking about this kind of regulation, about what things to do? And, by the way, I completely agree with your earlier point: it isn’t being pollyannaish and avoiding the negatives, but the way we avoid the negatives is steering towards the positives.

ETHAN:
I love that. I mean, the thing I keep trying to tell people is that we have agency over this. This is not something being done to us – I mean, it is, right? It was released. But we can decide what this means, and that’s a human decision we get to make. And I think you’re right that there’s a fixation on a couple of problems that are solvable. I think people are very worried about data privacy. I totally get that; they should be. But it’s not that hard a problem to solve, ultimately. It’s going to be solved in the next couple of months, and it’s already more solved than people think, because the stories people tell about data privacy aren’t real – you know, about Samsung’s data being put back into the model, which is not what happened. What happened is Samsung got nervous that people were entering proprietary data into ChatGPT – a very different kind of situation – but we should worry about it. But we have to think about the long term. And I think you’re absolutely right: democratizing access is a huge deal. Certifying what works and what doesn’t is a huge deal. Making sure people are not hugely disadvantaged because the rules only slow down good actors and not bad actors is another kind of problem I see here. So many companies are basically just doing shadow IT: they officially ban all use of ChatGPT, and everybody just uses their phones to do the work. So instead of having regulation where we could responsibly intervene, all the work is being done in ways where no intervention is possible, right?

So, I think it is focusing on what we want the future to look like. I couldn’t agree more. We have this incredibly powerful tool, and so the issue is not how we stop it from being implemented; it’s how we responsibly speed up the right parts of implementation. It is that agency argument. What do you want the future to look like in your field? You have infinite intelligence you can apply to this – what does that look like? And I think it’s about working backwards from a positive vision of the future rather than working back from an apocalyptic vision. I totally understand AI risk; people wanted to make sure we understood the apocalyptic version. Two months ago, no one was asking about it, and now in every interview we have to spend a lot of time talking about the apocalypse. Which I totally get – again, you can’t ignore it – but if that’s the only vision we have, then absolutely we should stop AI development, because that’s the only vision people have. But that’s not what’s going to happen. We have an education tool that is available to everybody in India – the best AI model, outside of what a few people have, is available to anyone. If you’re rich, if you’re poor, you get the exact same tool. That’s insane. That’s never happened before. You’re a Fortune 500 company, you’re a two-person startup: you have the exact same tool. I don’t even know how to – this has never happened in humanity’s history before. We should probably be spending a little more time thinking about what we want that future to look like.

REID:
We’re going to move to the rapid fire questions. And actually, in fact, this whole discussion has led me to be super interested in our first question: is there a movie, song, or book that fills you with optimism for the future?

ETHAN:
Yes. I find Iain Banks’ Culture novels very useful because of their view of a world where there are superintelligent AIs and yet people are still optimizing their own potential, which I think is a really interesting angle to follow.

ARIA:
So, you are in the field of academia, obviously have used AI extensively. Is there progress or momentum outside of your industry that fills you with optimism for the future, that inspires you?

ETHAN:
AI specifically?

ARIA:
Oh, no. It could be outside. Like, anything outside of academia or AI that fills you with inspiration.

ETHAN:
I mean, there’s so much, right? I work with medical professionals all the time, and the stuff happening in labs is kind of amazing – it just needs to get out of the lab. I think we’re in a really optimistic moment in tech right now overall, and that’s exciting. I talk with entrepreneurs in different fields all the time, and stuff has started moving after a long period of fairly strong stagnation. You can feel it shaking loose, right? If I talk to people in fusion, or people in green energy, there’s optimism again in scientific progress, and I think that’s profoundly exciting.

ARIA:
I just love that. Like, if you ask a random person, I feel like in the last three months there’s just been an uptick in, “well, obviously the world’s terrible, but how are you, Aria?” So I love to hear you say, “we are at a time of optimism” – a time when, for tech entrepreneurs, there are positive things happening. I think a lot of people need to hear more of that, because we’re just hearing how negative things are going. So thank you for that.

REID:
Yeah, totally agree. And that’s of course why we’re doing Possible – because when you look across all these things, fusion, medicine, synthetic biology, and everything else, all of this stuff can be transformative in totally amazing ways. And it’s like, “no, no, the future can be so much better – work towards it.” Don’t be depressed, don’t sit around, don’t go, “oh my God, the future’s coming.” Go, “oh my God, the future’s coming!” So, you know, I’m going to mod this rapid-fire question a little bit, because, obviously, given the level of intensity and excitement around AI, I think you’d naturally say AI – but what technologies in combination with AI are you also excited about? AI is a general purpose technology – I agree with you, it’s like a steam engine. What “AI plus X” should people be looking at for its ability to transform your field, to transform society? What’s that combination?

ETHAN:
I’m going to give you my most academic answer on this, which is: in management, we consider management itself to be a technology. It works like a technology, because good management skills actually increase the performance of companies – something like 30% of why US companies do better is better management. And the most exciting thing to me, in some ways, about AI is how it transforms organizations. We are organized the same way we were in the 1820s or 1920s. Maybe you have, you know, Agile in your company, so you’ve picked something from the 90s or early 2000s. All of that is about human constraints and human interaction, and all of it is going to change with AI in ways that will, I think, free us from some drudgery, but also, obviously, create some downside risk. So I’m very excited about that interaction – about thinking about what managers do and how we do a better job fulfilling people at work. I think that’s underemphasized, because we talk about the technology itself, but not about what most people actually do in their jobs.

ARIA:
Totally. I mean, I was just speaking to someone yesterday, contrasting two of the managers they had and how that unlocked enormous work, excitement, fulfillment in them. And, yeah, AI should help with that, too.

Ethan, can you leave us with your final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years? And what’s our first step to get there?

ETHAN:
So, there’s the idea that we can outsource the worst parts of our jobs and our lives. We’re just used to those being part of the job; we’re desperately holding onto things that suck because they’re part of our job, right? But jobs are bundles of tasks, and some of those tasks you can give up happily. So I think there is a potential for us to free ourselves from some of this drudgery, and then to have companions that let us overcome a lot of these barriers. I mean, I think we’re going to look back at the period from 2007 or so until whenever the AI stuff settles down – 2030, whatever – as one period of disruption. It started with suddenly all of us being connected by phones and social media, which created a lot of good and a lot of bad, but we didn’t quite know what to do with it. And there’s been a series of changes ever since; I think AI is a natural kind of continuation.

It’s a social human technology in some ways, and hopefully, it helps us start to, you know, recognize the better angels of our nature. And being able to outsource this stuff that we always hated, that we didn’t like doing, freeing up scientists to do the kind of work they should be doing, freeing up people from the drudgery of meaningless tasks to focus on meaning. I think that’s very exciting.

ARIA:
Awesome.

Ethan, thank you so much for being here. We really appreciate it.

REID:
And Ethan – not surprising, given how much I follow your work – you’re one of the people I would love to see any version of an Impromptu-like book from, because it’s exactly the kind of future we should be orienting everyone to. So thank you.

ETHAN:
Thank you. This was wonderful. And it’s great working with people who are deep into AI and don’t have that haunted look of anxiety in their eyes all the time, because there is a lot of anxiety about this, especially among people who are deep into knowing what’s coming next and have a line of sight into it. I think it’s important for those people to be optimistic, because I do think the conversation has shifted in a way where, by focusing only on avoiding a more negative world, we may end up with a more negative world. And I think we have to be really cautious about that.

REID:
So, wow. Ethan – like, “grand slam” would be an under-description. It’s like, “oh my God, there’s so many amazing things to do. Let’s go do them. We can build this, we can make it happen.” It’s like, okay, hey, you run this Possible podcast rather than us. You’re great. [laugh]

ARIA:
I think it’s just the discourse out there is, you know, AI, positive, negative, but, “wow, it’s really going to be bad for education. It’s really going to be bad for teachers. How are teachers going to teach? How are students going to learn?” And it’s like, well, Ethan is a professor at Wharton, and he’s using AI every day in the classroom and is one of the most positive people I’ve ever met on AI. And so it again just reinforces the “go, do, learn.” I mean, he inspired me! Give me more prompts, Ethan. I need to be doing more prompting. Because, just his level of fun and curiosity, I think it’s sort of hard not to be inspired by it.

I’m also just so excited because we asked Ethan, you know, what are the prompts for someone who’s a beginner, intermediate, expert? And so, I’m so excited. Listeners out there, please let us know: if you used Ethan’s advice, how did it go? What would you add? What are your other tips and tricks? Again, I think the collective intelligence about this technology as it moves so rapidly is what’s going to sort of level us all up.

REID:
Possible is produced by Wonder Media Network, hosted by me, Reid Hoffman and Aria Finger. Our showrunner is Shaun Young. Possible is produced by Edie Allard and Sara Schleede. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles.