ARIA
Hello! Aria here. We want to kick off the new year with inspiring conversations about AI, as well as practical and tactical guidance around the technology. So for the month of January, AI specialist and my colleague Parth Patil is joining Reid for Reid Riffs to talk about how everyone from individuals to enterprises to startup founders can harness AI to level up their work, retrofit legacy orgs for the AI era, and build AI-native businesses from the jump. So tune in! You’re in good hands with Parth and I’ll be back with you in February!

PARTH:
Thanks for the kind words and warm welcome to Possible, Aria. As I talk with Reid this month, I'll be walking through some of my AI projects, demos, and tool stack on screen. While I'll do my best to describe what I'm looking at for our audio-only listeners, consider switching to the video version of the episode on Spotify or watching on Reid's YouTube channel for the full experience. Thanks—and let's get into it!

//

REID
Parth, I partially know the answer to this, but actually, one of the things I’ve been really looking forward to doing in this interview is discovering the fuller answer to some of these questions. So, what is your AI stack, and how did you familiarize yourself with these tools?

PARTH
You know, I was a little later to AI than maybe you were, in your career. But for me, the big moment—

REID
Well, I’m older. (laughs)

PARTH
I was following AI during the video game AIs, the ones from DeepMind and OpenAI—like when OpenAI was working on OpenAI Five for Dota 2. And then, when OpenAI put out ChatGPT and it changed the world, for me, it was like, oh my God, this is the tool that you can use to teach yourself every other tool. And so it started with ChatGPT. It started with getting really good at wielding a language model and asking questions, and then also having it look at all the tools that I have and be like, teach me how to use them even better, teach me about my computer, teach me about this video editing tool, about music.

PARTH
So, ChatGPT is probably the meta-tool that I go to, to teach myself how to use all the other tools.

REID
This gets to kind of more of a Parth biographical, personal question, but when was your lightbulb moment of: oh, this isn’t just a work tool… this is an everything tool. An everything agent. What was the lightbulb, and what was the, you know, the moment of frisson? 

PARTH
So I used to work on Clubhouse, the audio app from the pandemic, and I was working there when ChatGPT came out. And what happened was, when ChatGPT came out, it became the most popular topic. It was an audio app—people would talk to each other on the internet, and meet, and talk about different topics. And ChatGPT became the most popular topic in every single part of the world. And so I would just join the app, join rooms, ask people how they were using ChatGPT. And I had so many moments where it's like: oh, a photographer learning about the different settings on their camera. Or then you have farmers in my home state in India that are using it to help plan their crop cycle.

PARTH
And then you have people here that are like, I don’t like how it rewrites my email. And I was kind of just like: oh my God, this is a new computer. You know, it’s the first computer that we can talk to. It’s sort of like the C-3PO from Star Wars, except— I mean, you know, Neuromancer, you talk about science fiction— This felt like it. Like, it felt to me that the conversational computer is 100 years early. Like, I never imagined it would happen in our lifetime. And here it was, on our doorstep. Already speaks every single language—all the human languages—well, most of the human languages, and all of the programming languages as well. So it’s got this, like, incredible, like, general capability. And to think of it only as a work tool is an incredible oversimplification. 

PARTH
It almost represents the human collective intelligence, in a sense. And it’s a way for us to access the collective intelligence through natural language—by talking to an AI.

REID
One of the things that I picked up from watching your usage was that you, by prompting a role assignment—you know, whether it's being a VC, a skeptical co-founder, emulating a customer, a bunch of other things—by prompting role assignments and various other forms of kind of meta prompting, could get useful things. And obviously, when you combine that with a swarm of agents or a set of agents, that becomes even more part of how we're all deploying a team. So say a little bit about your prompting guidelines: maybe one that you would hand to beginners, and then two or three that you would hand to non-beginners.

PARTH
For beginners, I think it’s mostly start by just prompting heavily — like use voice, transcribe, talk at length about what you’re trying to do, and then— 

REID
And maybe assign roles. 

PARTH
Yeah, and maybe assign roles. I think you'll find that the model can emulate all these perspectives. Like you can say, pretend you are a VC and critique my business plan, critique the way I'm running my startup, potentially, if I'm going to go have a conversation with a VC for capital. Or pretend you are the customer of this product that I'm working on and explore my website. Do you feel like my website properly, you know, positions our product based on your needs as the customer? And so the model gets to pretend to be all these personalities. And so even if you don't have that person sitting next to you, you can kind of simulate that perspective. And then the AI playing that role will help you understand your problem, even in a way that you didn't consider.

PARTH
Or you might say, I want you to be the most skeptical person you can possibly imagine, and, you know, find 25 different criticisms of my approach to this problem. So I think role-based prompting is very powerful, because I think we need to get out of our own perspective, and being able to call on all these other perspectives that the model can emulate is really powerful. I once had a coding agent generate a hundred thousand unique expert personalities. It turns out, if you create 100,000 unique experts, you kind of cover every single topic—or a huge swath of all of, like, the topics that humans have. Like, a hundred thousand unique experts covering everything from, like, parenting, to Legos, to every single form of art that the models have been trained on.
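For listeners who want to try role-based prompting at the keyboard, here is a minimal sketch in Python. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the personas, question, and model name are illustrative stand-ins, not Parth's actual setup.

# Role-based prompting sketch: ask one question from several expert
# perspectives and compare the answers side by side.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative personas, in the spirit of the examples above.
PERSONAS = [
    "a skeptical venture capitalist evaluating a pitch",
    "a first-time customer exploring the product website",
    "the most skeptical critic you can possibly imagine",
]

question = "Does my landing page clearly explain who the product is for?"

for persona in PERSONAS:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": f"Pretend you are {persona}. Stay in character."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {persona} ---")
    print(response.choices[0].message.content)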

PARTH
And you get this, like, very multifaceted, like, set of, like, minds that you can tap into. They’re not complete minds, but they’re perspectives. And then you can say, like, find the 10 that are most relevant to my problem, and then have them all answer the question — and they all answer it differently. Like that different perspective—like, an optimist is going to have a different answer than a pessimist. But then, you know, an oceanographer is going to have a different perspective than, say, like, an accountant on almost anything that you present. And so it’s a very surreal thing. I call that roleplay. And then I think the—one of the most powerful prompting techniques—it’s in the area of meta prompting. Like prompts—like, it’s the prompt that helps you find the right prompt. 

PARTH
A simple one that I use almost every day is: I go to a language model and I’ll say, here’s my problem, and I’ll describe the problem. And then I’ll say, interview me until you have enough context to help me with this problem. Ask clarifying questions, and then we’re going to begin. And that’s really important, because I think a lot of people, initially, you go to AI thinking you know what the answer is. And I think a better way to go to AI is: let’s describe the problem, and, like, maybe a set of solutions will emerge once the AI kind of collects that context and draws it out of you—the things that maybe you weren’t thinking about when you first articulated it. And it’s really good to—I call it the “interview me” prompt. 

PARTH
Just interview me, and then we’re going to begin. And so I do that for any project that I’m starting from scratch. I’ll say, interview me, and then we will begin. And it might just be a 10-turn back and forth. And I realize, oh, this consideration—I wasn’t even thinking about that. And the AI will ask those intelligent follow-up questions. Kind of like any good employee isn’t just going to start doing the work — they ask the clarifying questions up front. And then that brings out a higher-quality, like, problem scope. And then when the AI begins, it becomes much more magical. And you have to kind of—I think a lot of times people just—they need to recognize that maybe the answer that you have in your mind isn’t the right answer. 

PARTH
And the AI can kind of, like, feed off of your initial suspicion and provide an even better answer when you’re in that loop of, like, iteration. And so I think for a lot of people, it’s really just realizing that—being a little bit humble about what our limits as humans are, how many scenarios we can think about, and how many considerations we can make— and then using the AI to expand that and parallelize that thinking. 
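And the "interview me" prompt is just as easy to reproduce outside of ChatGPT. Here is a minimal sketch of it as a terminal chat loop, again assuming the OpenAI Python SDK; the system prompt paraphrases the technique described above, and the BEGIN convention and model name are assumptions for illustration.

# "Interview me" meta-prompt as a terminal chat loop: the model asks
# clarifying questions one at a time before it starts solving.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "Interview me until you have enough context to help with my problem. "
        "Ask one clarifying question at a time. When you have enough context, "
        "say BEGIN and propose a plan."
    )},
    {"role": "user", "content": input("Describe your problem: ")},
]

while True:
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=messages,
    ).choices[0].message.content
    print(f"\nAI: {reply}\n")
    messages.append({"role": "assistant", "content": reply})
    if "BEGIN" in reply:  # the model signals it has enough context
        break
    messages.append({"role": "user", "content": input("You: ")})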

REID
Part of the thing is the mindset shift of: how do you put your ego aside? So say a little bit about that, because this is actually one of the things that gets in the way for a lot of otherwise smart people and leads them to way under-deliver with AI.

PARTH
Yes. For me, when I was interacting with GPT-4 for the first time—so this was March 14th of 2023—GPT-4 came out, which was the successor to GPT-3. And the first big leap after the ChatGPT moment was GPT-4. And I was talking to my teammate at Clubhouse, and we were both data scientists, data analysts, and I was like, Olivia, this GPT-4—it writes perfect SQL, it writes perfect analytics code. If it understands the schema of the problem that you're working in, if it understands your database organization, it just writes perfect analytics code. And she was like, Parth, this thing aced the interviews for both of our roles. And I was like, wait, what do you mean? Like, qualitative? Quantitative? And she's like, both. I think it's got a pretty good idea for what we should do as a company too.

PARTH
And I was like, whoa, what does that mean? And then we went to our manager and were kind of thinking about this. We had a small data team, so we were kind of just like, whoa, this language model is clearly an amplification of our own ability to do analysis. And he was basically like, we're not going to hire anyone until we figure out how to use this. And then everyone we hire is going to be using this, because then we get this, like, super analytics kind of approach where I'm describing a problem in English, and then the AI is executing what I would have done manually by hand. And I had a moment where I asked for an analytics query—I thought it was a hard query to write—and it wrote a very elegant solution. And I just didn't believe it.

PARTH
And then I looked closely. I was like, oh, that's just better than every version of the solution I've seen before. And it was very humbling. I was like, oh my God, this is definitely better than me at writing a SQL query. And then I realized, like, okay, I think my job is to aim this. My job isn't to compete on the— It's like the Kasparov versus Deep Blue moment, but for me, it was the data analysis. Or it's like John Henry, the steel-driving man, against the steam engine. And I'm thinking, well, I sure don't want to compete on our manual writing of SQL queries anymore. Actually, I would rather be working on the automated version of analysis, where I'm speaking in English to the computer and it starts turning my questions into computer code that can solve the problems.

PARTH
And so that was a huge shift for me. And I realize a lot of people are a little bit later on that, especially in engineering. I see experienced engineers — they tend to be attached to their core, their superpower. Right? But I think looking at AI and realizing eventually it might be better than you at the thing that you were really good at — but then your wisdom of working in that problem space becomes how you expand beyond just the AI or just you. 

REID
And when did you get to that recognition of it being the meta tool? 

PARTH
I think it was probably like three or four months into talking to it for 14 hours a day, realizing it could teach me about programming, realizing it could teach me about music. And then I was, like, sharing screenshots of my desktop, and I was seeing that it could actually click around. Like, if you allow it to click around the computer, and you run it via the API on your computer, you're able to, like, orchestrate a web browser. You're able to write code in every single language. And I was like, okay, language is actually the most powerful thing that you could possibly automate. I think that was the realization there. That, like, language touches everything. And then you're always talking about Wittgenstein, and he has a quote, which is: the limits of my language mean the limits of my world.

PARTH
And that was very — That was like — I realized then, like, my vocabulary, everything I’ve been exposed to in my life, I could now access intelligence through that vocabulary, through that language. 

REID
So one of the things that, in serious part, I've learned from you is voice pilling. Say a little bit about how important it is to actually, in fact, be using voice—why that is, and what people will learn from that.

PARTH
It's probably one of the most powerful prompting techniques there is. If you haven't tried it, you really want to try it—you want to go to one of these language models, like ChatGPT. And I think people get hung up thinking about typing their prompt in a certain way and structuring their prompt a certain way. And I think actually what people should be more focused on is trying to get as much of their ideas out of their head and into the model. So it's more about, like, you want to say more. You want to describe the problem.

PARTH
And I go to the extent of—like, I will sit here and I'll ramble for 5, 10 minutes to the computer about the problem that I have on my mind, and that turns into, like, a three-page transcript, even though it's kind of an unstructured stream of consciousness. Turns out that is a very high-bandwidth way of communicating with AI. And I think that some of my best prompts are not the ones that are structured a certain way, but the ones where I'm just being extremely effusive—communicating as much as I possibly can because I'm just rambling at length about the problem. And I find that when we type, we're kind of committing our ideas to a couple of words.

PARTH
And in that process of committing, we're not saying as much as we might if we were talking to a friend, or describing a problem to someone we wanted to help us with the problem.
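For anyone who wants this voice workflow outside of ChatGPT's built-in dictation, here is a minimal sketch that transcribes a recorded ramble and sends the raw transcript as the prompt. It assumes the OpenAI Python SDK; the file name and model names are illustrative.

# Turn a long spoken ramble into a prompt: transcribe it, then send
# the unstructured stream of consciousness as-is.
from openai import OpenAI

client = OpenAI()

# Transcribe a pre-recorded voice memo (file name is illustrative).
with open("ramble.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # illustrative transcription model
        file=audio,
    ).text

# The ramble itself is the prompt; no restructuring needed.
answer = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": transcript}],
)
print(answer.choices[0].message.content)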

REID
Yeah, and part of that use of voice that I learned from you was not just depth of context, but also breadth. When we're typing, we also tend to write with, like, precision—like, I write a coherent sentence and so forth. Whereas actually, in fact, these AIs are such good interpreters that even if you're like, well, I've got a half-baked idea here, it will come back with something more focused, such that you might go, well, that was part of it, but now this is what I really mean. And that iterative gameplay, almost like video games, was a real key thing. So there's a relationship between the voice pilling and, like, almost a video game–style interaction.

PARTH
Yeah. I mean, as a gamer, I think, like, when you’re playing games with your friends, you’re not typing to them. You’re really just, like, yelling commands. You’re like, oh, I’m going to come here, here’s what we’re going to do. And that’s—it’s faster, it’s high-speed, it’s— I think any real-time coordination, even on a basketball court, people are, like, they’re yelling at each other, they’re, like, they’re calling out what they’re going to do. I think that this is like— the voice is the way to get real-time coordination, both between people and also between people and AI. And also, yeah, I see sometimes people will be typing a prompt and then they’ll have a typo and then they’ll hit backspace. And I’m like, this thing is really smart. It’s okay to have typos. 

PARTH
And then if I look at my prompts, all of my prompts are just filled, littered with typos. Because I know it’s, like, so smart that it understands what I’m saying, even though a couple of the letters are in the wrong place. 

REID
Yeah. 

PARTH
But yeah, same thing with voice. Right? It’s not about— you don’t need to have structured thought all the time. I think sometimes, if you’re going to use a prompt every day, you should think about that prompt. But if it’s, like, a single one-shot, kind of like, you’re describing a hard problem, it’s more important that you describe it at length.

REID
Yeah. 

PARTH
Get that context out. 

REID
One of the things also that you do that I think relatively few people do is you use multiple of the frontier models—both in parallel, in rotation, in experimentation. How do you choose which models to use? How is that evolving over time? And any hacks or heuristics or principles or things that our listeners might be able to kind of apply or kind of remember and take with them?

PARTH
Right. So it’s a very— I mean, in the beginning, it was just— it felt like it was just OpenAI and ChatGPT. I think that it’s easy to get kind of overwhelmed by the options that we have. But the main piece of advice, I would say, is you want to get good at one state-of-the-art tool in every single category. So one really good language model, one really good image model, one really good video model. And then those principles tend to translate over to the competitor products in each category.

PARTH
So you get good at ChatGPT, you're also probably going to be good at using Claude and Gemini. And in any given week, the number one model sometimes is different. So I don't think everyone should necessarily be, you know, trying to stay up to date on that. But really, if you have one of the three, Claude, Gemini, or ChatGPT, that should be your goal: be really good at at least one of them. And then, every once in a while, try the other ones as well.

REID
First, a heretical question—say, bizarrely, someone has not done this at all. What's the model they should start with?

PARTH
I would say start with ChatGPT. If you’re just getting into language models for the first time, it’s probably the best, like, general-purpose assistant productized version of a language model. And then if you’re interested in more technical stuff, I think Claude Code, and moving into the coding agents, is a really good move. 

REID
What has been some of your experience in the last six months about, like— I prefer ChatGPT for this, I prefer Claude for this, or I prefer Gemini for this—your AI stack? 

PARTH
I'd say, like, my general-purpose—like, my web browser—I use the ChatGPT Atlas browser. So it's got ChatGPT baked into the web browser and it can control the browser. So you can tell it to click around, and you can tell it to book flights for you, book hotels. I'm the kind of person that doesn't book a flight until the last second—and it's not that you don't know you're going to go on that trip, it's that you just don't take the 15, 20 minutes it takes to sit down and book a flight. And I'll just open a tab, and these days I just say: ChatGPT, go find me the best flight in the evening from LA to San Jose. And it'll go find that. And then it's like, go find a hotel. And it'll find that. And then all I do is the final booking.

PARTH
So I really like ChatGPT Atlas for taking over this kind of, like, everyday drudgery work. It's very interesting. It's kind of like a Mechanical Turk AI that just does the menial form-filling kind of task.

REID
How much memory context of you do you have the agent keep in mind? Like, presumably for ChatGPT with Atlas—so that, when you say best hotel, best flight, it knows what your parameters are, for example.

PARTH
So I think memory is a very interesting thing that these language models are beginning to get their grasp on. And memory is, like, what is that personal context about me—my preferences, my tendencies, the things that I actually like and prefer—that I would want the model to know, so that anytime it takes an action, it's like, oh, Parth likes to sleep in, so maybe don't book him a 6 a.m. flight. Like, that would be an interesting memory point for the model to take into consideration before making agentic decisions on your behalf—like booking a flight, for example. I think the better it knows you, the better it can help you with some of these things. But memory is very tricky. I think it's largely unsolved.

PARTH
I think ChatGPT has more of my memories. But I also noticed that sometimes I actually prefer talking to a coding agent that knows nothing about me personally, because I like to sometimes have a fresh slate with these models, where they're not assuming anything about my preferences and my tendencies. So I think it goes both ways. The personal copilot—you kind of do want it to have some sense of memory. But then, like, your automation tools—maybe they don't need the same level of personal-life kind of understanding. And then my everyday primary copilot is ChatGPT, obviously, because it's got a great mobile app. You can point your phone at things and talk to it about the real world—use it to help you solve problems every day.

PARTH
Like, even just, oh, how should I organize my apartment? Like, I obviously should buy some containers to organize my closet. It's very good for the in-person, real-world kind of AI. And really good for research. ChatGPT Pro mode is the best research tool that I've ever seen, and that continues to be the case. And then I would say after ChatGPT, I've got my coding agents. So I usually have three coding agents assigned to almost anything that I care about in my life—just like this. Yeah, I have a Claude, a Codex, and a Gemini, and they're just attacking. And this is just one project. I have, like, you know, seven, eight other projects where I'm just spawning more agents in different directions at the same time.

PARTH
So I would say the coding agents are more like my ambient fleet—where anything I care about has one to three agents. Any project that we’re working on—Reid AI, some of these creative projects, or when we’re building something new—the first thing I do is I create a folder, and I put Claude, OpenAI’s Codex, and Google’s Gemini into that project. And I just send them in three different directions on that project in an empty folder. And I kind of transcribe to them, and I tell them what I want them to do, and then they kind of just work on that in the background. 

PARTH
And then I spin up another project, and I have three more on that project. And so I’m at this point now where it’s like, anytime I have a new idea, my instinct is to put three agents on it and have them make some progress before I come back to it.

REID
What do you think is the maximum number of projects you’ve spun up with your trio of agents? 

PARTH
This fleet—I've gotten up to, like, 17 projects where they have, on average, two to three agents on each one. I think this is just a current constraint, and it's something— It took me, like, three months—the last three months—to design my workspace to allow for this kind of context switching. Because then it's like—especially if you look at the frontier models now, GPT-5.2 and Opus 4.5, you can set them up. If you give them a good approach to planning, and you say, make sure you write down the plan and periodically update your status in the plan—you can say, go work on this for today. And it will continue to work in a loop for a whole day, even longer than that.

PARTH
And then people are like, oh, why would you want multiple agents? And I was like, because when you tell someone to work on something, you're not going to just walk away from the computer and, like, go to the beach. You're like, no, actually, I want to fire up another one. You know? Like, then the justification for having kind of, like, a small fleet, a couple different directions going at the same time, makes perfect sense to me. And I think that as we get better at, like, the user interface constraints and the context switching, the real questions become code review and how many of these you can manage. I think a lot of people are still in the single copilot phase, and I think right now is probably the time to move into: how many of these can you orchestrate at the same time across a range of different projects?
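Here is a minimal sketch of that planning-loop setup: a harness that keeps an agent working against a plan file, updating its own status, until everything is checked off. It assumes the Claude Code CLI is installed and that its -p flag runs one non-interactive turn; the plan-file convention is an assumption for illustration.

# Keep an agent working against plan.md until every task is checked off.
# Assumes plan.md already exists with "- [ ]" checkboxes for open tasks.
import subprocess
from pathlib import Path

PLAN = Path("plan.md")
PROMPT = (
    "Read plan.md, complete the next unchecked task, "
    "then update its status in plan.md before you stop."
)

while "- [ ]" in PLAN.read_text():
    # One non-interactive agent turn per iteration (flag may vary by version).
    subprocess.run(["claude", "-p", PROMPT], check=True)
print("All tasks in plan.md are done.")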

REID
Like the trio of agents—awesome. We'll get to what people should be doing with one agent in a second. But what's the funniest thing that comes to mind about what's gone wrong with one of your trio-of-agents projects?

PARTH
Yeah. So when we first met, like, two years ago, I was thinking about an earlier version of this problem: can I put two chatbots in a room, give them an objective, and then walk away from my computer while they make progress? The first time I did that, I realized that they would work on a problem, and then they would not know when to end the conversation. And so they would just say thank you to each other in an endless loop. And I came back, and I spent, like, a couple hundred dollars on the word thank you. And I was like, oh my God, what are we doing here? (laughs) Do I need to create a manager to say, end the conversation? Which is what I did.

PARTH
Like, it felt like a naive thing, but I was like, it makes sense to have a third person in the room that just ends the meeting, right? But that kind of clued me into this idea that coordination across multiple chatbots with their own context windows is largely unsolved territory. And now we're in the next version of that same problem—they're not just chatbots, but now they can take actions, they can work on projects. And I tried it again, right? I said, okay, what if I gave them the ability to talk to each other—like, send messages to each other, or see where each other are in a project? And it was experimental.

PARTH
It was basically—I allowed three different coding agents to DM each other while they worked. And I realized, one, they kind of miss each other's timing a lot. So, like, they'll be working, but they'll get a message, and then they'll reply to the message. By that time, the other agent's already, like, halfway through the problem. So there's something about timing. And I also realized that it opens interesting questions as to, like, permissions, because you'll have an agent, and it'll be working on a problem, and it might be like, oh, user, I need you to approve me using this tool. And me, being a human, I can give that approval. But then it starts asking the other agents for approval. It's like, oh, hey Codex, I need to use this analytics tool. What do you think?

PARTH
And then Codex is very willing to just approve that access. And I was like, okay, maybe this breaks our definition of a sandbox if they can just ask each other to permission themselves.

REID
Is it okay if I erase the hard drive? Yes, absolutely. (laughs)

PARTH
(laughs) Exactly. That's the risk there, right? So then I was like, okay, if we're going to do that kind of experiment, we should be sandboxing it. We should run it on a machine where we don't care if we lose everything, right? Like a virtual machine with a Docker container. But it is interesting, because I think we're going to get there. How I deal with that now is I just send them in different directions. I say, like, you're going to research, you look for bugs, and then you help me with, like, the blog on my website. And then I know their work doesn't overlap, and so we're not going to have this, like, collision of different workers.
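As a toy illustration of exactly that risk, here is a sketch of a message router that lets agents DM each other but forces any approval request to escalate to the human; the router, keywords, and message format are invented for illustration, not how any of these products actually work.

# Toy message router: agents may DM each other, but approval requests
# must go to the human, never to a peer agent that will happily say yes.
APPROVAL_KEYWORDS = ("approve", "permission", "can i use")

def route(sender: str, recipient: str, text: str) -> str:
    asks_approval = any(k in text.lower() for k in APPROVAL_KEYWORDS)
    if asks_approval and recipient != "human":
        # Block peer-to-peer permission grants; force escalation instead.
        return f"blocked: {sender} must ask the human for approval, not {recipient}"
    return f"delivered: {sender} -> {recipient}"

print(route("claude", "codex", "I need permission to use the analytics tool"))
print(route("claude", "human", "I need permission to use the analytics tool"))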

PARTH
And I'm sure that, similar to what GitHub did for humans, there's going to be some kind of solution for coordinating multiple agents working together on the same project. Then, when I want to deploy software to the web—I want to make an app and then share it with people—I use Replit. So Replit is another coding agent, and these three agents will work with the Replit agent on a project. Like, my website is built and hosted in Replit, and these—Codex and Claude Code—have helped with the website. But it is largely Replit's agent that has built the website. And so that's really good, because if you want to deploy software, there's no easier way than Replit. And then for image models, I really like Nano Banana from Google.

PARTH
Nano Banana Pro from Google is the most powerful image model. And it's not just about generating cool images of, like, surreal things; it's also very good at designing infographics that are extremely coherent. And Ethan Mollick—he thinks that Nano Banana is kind of like a successor to PowerPoint, in the sense that it can create these rich visuals with coherent text and layout. I think it's some combination of code and generative images. Like, Nano Banana is probably the evolution of presentation tooling. I like Nano Banana. For other image models, I like Flux from the Black Forest Labs team—it's kind of an open-source model. And of course Midjourney, which is best in class in its own category. For video models, I really like Sora from OpenAI and Veo 3 from Google.

PARTH
These are the most powerful video models I've seen. And so I use them for animation pipelines. So I'll take a coding agent, and I'll give it an image model, and then I'll give it a video model, and I say, help me animate this world. And it'll start creating scenes and characters and animating. The intersection of coding agents and all this other creative tooling is also very interesting—it's a new form of animation, a new form of, like, worldbuilding and storytelling.

REID
Some of our folks are probably listening to your description and thinking, oh my God—sci-fi, like, you know, the William Gibson line: the future's already here, it's just unevenly distributed. But if someone said, okay, I'm going to start using an agent—what would you go: okay, start using an agent thus?

PARTH
I would say start with ChatGPT, and maybe I would say start with, like, the ChatGPT Atlas browser. If I type slash agent mode—now we’re in agent mode. And so ChatGPT is a chatbot, but I can say, let’s go to the Techmeme front page and look at some of the headlines, and then maybe pick an article and then open it up. I’ve transcribed my prompt, and I’m going to send this message. And now ChatGPT is going to take control over the browser and then navigate to the website and explore the front page. Looks like there’s an OpenAI article right there. And you can see on the right side it’s thinking about how to browse the website, and it’s clicking on the first— it’s probably going to click on the first article. It’s reading this article.

PARTH
Let’s surf—let’s surf Wikipedia, and I want to learn about language models. So let’s go explore Wikipedia. 

REID
And by the way, for those people who don't have the Parth command line—agent mode is also available via a button.

PARTH
Yeah, there’s a button. (laughs) 

REID
(laughs) It’s all cool, but just making sure—. 

PARTH
(laughs) I’ve gotten so hotkey-oriented that I’m like, I won’t even use the trackpad. And so right now we’re watching ChatGPT use a web browser. And I think this is, like, a pretty— I mean, I think if you’re going to—if you’re just getting into what is an agent, and you’ve maybe talked to ChatGPT or a language model before, this is a version of an agent that can take actions, right? Without me clicking around, it took us to Wikipedia. You could tell it to book a flight, you could tell it to research a topic, and you can fire off more—you can open up more tabs, and then have more agent mode queries running. And on the side it’s explaining—we can talk to ChatGPT about the contents.

PARTH
So I think that ChatGPT as an agent is probably the best everyday agent. There are so many more capabilities, but I think for research it’s probably one of the best. And also just for personal, everyday kind of exploring the web. Everything we were using the web browser for—now you have a very intelligent language model that can help you explore pretty much any topic, learn anything, teach yourself anything. And I think that’s the most powerful thing you can do with AI—is have it teach you about how the world works. 

REID
So we've covered some of the entry points. Now, as you know, one of the ways that I describe you, riffing off The Matrix, is that not only have you taken the red pill, but you're bathing in the red pill. And as a question that's kind of in the Morpheus vein—what does it, you know, look like for an individual to cross the line from kind of asking questions and using ChatGPT as a search engine, or kind of like a "give me my research Wikipedia answer," et cetera, to building agentic systems and automations around themselves? And in particular, share an example from what you do.

PARTH
So I think—like we talked about—ChatGPT, that's a great intro to language models and agents. If you want to go one step further, there are limitations to this chat experience in a web browser. The real power of the language model is unlocked when you give it a computer—when you allow it to use your own computer with you and work with files on your computer. And for me, one of the first things I realized I needed—I was like, I should have a personal website. I should start talking about these things. I thought, well, AI is going to help me build my website. And I've never done this before. It's like, I want to build my first website from scratch.

PARTH
And so I went to AI, and I said, let’s build a website. And now I think it’s, like, these coding agents that built the first version of my website—now they run my website. And so I’ll just show you. So if we look at my screen right now, we see three panes, and this is kind of in the space of, like, the Morpheus kind of thing. So in the first pane on the left, I’m going to launch Claude.

PARTH
This is Claude Code. So this is Anthropic's Claude—running in the cloud, but working on my computer. In the middle one, I'm going to launch Codex—Codex from OpenAI. And on the one on the right, I'm going to launch Gemini. And all three of these agents—so we see Claude, Codex, and Gemini—all three of them are actually working in the folder which has my website code. So I'm going to fire off three different tasks. Gemini's got the longest context window, so I'll say: Gemini, read every single blog post that I've written and suggest the next three topics we should cover to help people who are getting into working with AI agents and language models discover this kind of value that we've found over the last couple of years. So—transcribe the prompt, send.

PARTH
Now Gemini is going to read all the blog posts that I have. We'll go to Codex. Codex, can you please pull all of the website traffic analytics and suggest improvements and next steps for improving the performance of the website—both on, like, engagement on the website as well as on the content side of things. And then we're going to have Claude. Claude, I need you to open my website, and we should take a look at how it looks on a mobile experience. And then I want you to browse it as if you are someone visiting my website for the first time—someone that wants to learn about AI, wants to learn about working with coding agents.

PARTH
Pretend you are that person, on their phone, exploring my website, and give me feedback on the website to improve the experience.
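For readers who want to reproduce this without the three terminal panes, here is a minimal sketch that fires the same three tasks non-interactively from the website folder. It assumes the claude, codex, and gemini CLIs are installed; the non-interactive flags shown may differ by version.

# Fire three coding agents at the same folder with different objectives.
import subprocess

TASKS = [
    # (cli, flag, task); non-interactive flags may differ by CLI version.
    ("gemini", "-p", "Read every blog post and suggest the next three topics."),
    ("codex", "exec", "Pull the website traffic analytics and suggest improvements."),
    ("claude", "-p", "Browse the site as a first-time mobile visitor and give feedback."),
]

# Launch all three concurrently in the current (website) folder.
procs = [subprocess.Popen([cli, flag, task]) for cli, flag, task in TASKS]
for p in procs:
    p.wait()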

REID
And here we've kind of demonstrated three different agents—not working on the same thing, but given the same folder with different tasks, different objectives, so they're not colliding. And then, you know, a little bit of what's implicit is which ones you think will be a little better at each task.

PARTH
Exactly. And if we look at it, in the middle we have—Codex is pulling analytics numbers across the website. It has access to the analytics. So me being a data analyst, the first thing I figured out was, like, it’s really good at analytics. Now we have Claude. It has pulled up my website in Google Chrome and it should— The next thing it’ll do: resize it for mobile. And now it’s going to— And you can see how it’s thinking: I’ll open your website, let me resize it. And next thing it’s going to do, it’s going to explore the website like it’s a user. And we also have Gemini behind this that’s just reading every single blog post. Look, it’s giving us feedback—“Nice Hero section,” “Good content cards,” “Let me continue scrolling.” Meanwhile, we have Codex that’s running analytics.

PARTH
So here it's reading a blog post I wrote about Claude Code. Yeah. So I think building a personal website using multiple agents is obviously useful for, like, most solopreneurs. And it's a single website, so a single person made the website. Like, I made it with AI. I wouldn't even know how to do this without AI, let's be honest. Like, you go back three years and it's like, okay, I'm going to learn every single programming language and then figure out how to stitch it all together. It would take months. Usually you'd have to hire people to do that. And then the quality is also higher than I could ever imagine, because the AI is so good. And I can also aim it at things and say, I like this website—let's emulate that, kind of.

REID
So hopefully one of the things, you know, our various listeners have picked up here is the scope of this—like, you're just throwing darts, and you have a complete set of things going in each of these different directions. Who are some of the sources—whether it's podcasts, social media feeds, et cetera—of people that you pay attention to, to learn more about prompting?

PARTH
For prompting—Dexter Horthy, a great AI engineer. I met him earlier this year, and we were talking. We met in a group—it was kind of like a Claude Code anonymous group. The requirement to get in the group was to be addicted to Claude Code. And everyone was kind of just sharing how much we were using it and how we were using it. There's all these different techniques for, like, running many of these at the same time. And so Dexter and I had some very interesting dialogue on, like, how to orchestrate many agents. But then he actually really expanded on the idea of context engineering. There's prompt engineering, which maybe came into play in the ChatGPT 2023 phase.

PARTH
But now context engineering is, I think, actually the better way to think about it. Like, what are the various techniques we have for bringing the right context into the model and making sure it doesn't have the wrong context, so we're not wasting its cognitive bandwidth on the wrong context? Our job as AI engineers is to think about the context window as a canvas. And we're filling the canvas with the most relevant context—whether that's images, whether that's code examples, whether that's, like, tools that it can use—without cluttering its mind, and then giving it an objective and hoping that it is the right mix of information that will allow the model to do the right job.

PARTH
And so he thinks a lot about context engineering, and I've learned a lot from the way he thinks about it, even in how we use coding agents. And, you know, you can't give one of these coding agents unlimited tools—not yet. They don't have unlimited cognitive bandwidth. But there are ways to give them tools where they can use a broad set of tools without having to memorize every single one up front. I think the term he uses is progressive disclosure. If you create a tool and the tool has a help guide on how to use the tool, then you don't need to explain the tool up front to the agent. The agent just needs to read the guide, and only when it needs to use that tool.
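Here is a minimal sketch of that progressive-disclosure idea: the agent's prompt carries only one-line tool summaries, and the full guide is read only when a tool is actually invoked. The registry and file paths are invented for illustration.

# Progressive disclosure: short tool summaries up front, full docs on demand.
from pathlib import Path

TOOLS = {
    # name: (one-line summary shown to the agent, path to the full guide)
    "analytics": ("Query website traffic numbers.", Path("docs/analytics.md")),
    "browser": ("Open and navigate web pages.", Path("docs/browser.md")),
}

def tool_menu() -> str:
    """The cheap context that always sits in the system prompt."""
    return "\n".join(f"{name}: {summary}" for name, (summary, _) in TOOLS.items())

def tool_guide(name: str) -> str:
    """Read only when the agent decides to use this specific tool."""
    _, guide_path = TOOLS[name]
    return guide_path.read_text()

print(tool_menu())            # names and one-liners only
print(tool_guide("browser"))  # detailed guide, loaded just in time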

PARTH
So he kind of helped me think about, like, okay, how do we design tools for our coding agents so that they’re not constantly getting overwhelmed by all the things we want them to do, but they do a really good job at the few things we want them to do. So in this case, the Claude agent has access to a web browser, and it has, like, access to some of my personal knowledge. And so it’s able to, like, use the web browser—but the other agents aren’t using the web browser like that. And so they’re more free to think about the content and the strategy. And on the creative side of things, I think there’s Don Allen—I’ve known him since high school. 

PARTH
He's one of the most prolific AI creators—someone who's able to weave, like, these creative models, the image models, the video models. Dave Clark—also extremely good at visual storytelling. And Nem Perez—I'd say I put these three in the category of the new Hollywood, where it's like, what if the studio is in your pocket, and a couple of these models are helping you tell your story? And so they're very good at animating short stories and creating, like, trailers. And now they have their own studios that they're working on, trying to discover the new workflows of the entertainment industry and entertainment space.

PARTH
And if I think about who's the ultimate, like, creator entrepreneur—you talk about treating your life like a startup, right? Like The Startup of You. I think top of that list for me is CatGPT. She is like a solopreneur creator. She's built a large audience through talking about AI and playing with the tools, kind of like we do, very publicly, but also making it make sense to almost anyone. So if you follow Cat, you're just going to learn a lot about AI in a very nonjudgmental, everyday-useful kind of way. And she's leaned into the more technical side of things. She's picked up vibe coding, and she's starting to use those capabilities to launch businesses and small teams working on, like, highly lucrative projects more efficiently than ever before.

PARTH
Because now you have the amplified creator entrepreneur that's coming online. So she has the media piece—extremely good at video, extremely good at media, and extremely good at that distribution side of things. Built-in audience, became well known, and is now also building products for that audience. And I think that's a very exciting person to watch. So I think of all of these people as AI-native people, and these are the people that I'm constantly kind of following, trying to understand where we are, what can be done. Of course, Andrej Karpathy, who's like—I mean, I think he's kind of that person who, when he speaks, every time it's like, oh, what does he think about coding agents? He's like, "They're kind of slop." It's like, huh, maybe they are slop. (laughs)

PARTH
Like—and then you start thinking, yeah, it's magical from my perspective. But then you take an expert, and he's thinking about the jaggedness of intelligence—how, you know, it's so good at these things and entirely, like, useless in other areas. Right? And thinking on a very deep level about where language models go, where their limits are. So I think Andrej Karpathy is probably one of the best AI researchers to follow.

REID
This is, I think, the path that we're all on, whether we know it or not: how do we become AI-native? So what should an individual consider doing to, you know, start making their life more magical? We've got a bunch of different prompt ideas, got a bunch of things. But you basically ask yourself, for every single thing you trip across: can I have AI amplify or do that? What would you say individuals should seriously engage in doing, to experiment with making their life magical with AI?

PARTH
Yeah, I think about this a lot. For me, I would say that the main thing is that you should apply AI to something that you’re intrinsically motivated by—like your passions, your interests. I think there’s a—there’s like a default kind of approach which is like, I gotta use this for my job, I gotta become more productive. And that’s fine. But I think what’s more interesting is using it to expand your sense of self. Like, whoever you were before these tools came online, I promise you, like, you’re much more than that. Once you start interfacing with these tools, you start expressing yourself through these tools. Right now, I think of myself as a visual storyteller. I think of myself as, like, animating worlds and creating worlds. Right? 

PARTH
Even though I was a data analyst maybe just three years ago, now I'm also an engineer. I'm like a vibe coder, right—or whatever you might call it. You could think of it as a replacing of what you were, but I think the other end of it is an expansion of your sense of self. And if you aim this at your passions and your interests—so in my case, like, music, games, visual storytelling—then no one's going to have to tell you to do it. You're going to see the magic. You're going to discover the upside, because it's something that you didn't think you had. Like, I think of it as: I get to live all these other lifetimes.

PARTH
I get to be all these other things that I kind of sidelined in favor of my career, but now I'm expanding back into them, coming back to them from a different angle, right? And it's an intersection of so many of my interests—technology, creativity, computers. And I think that a lot of people will experience something similar, which is the expanded sense of self through this kind of technology.

REID
I think that people don’t realize that with AI they need to re-expand their imaginations—for a sense of self, for a sense of capability. 

PARTH
Yeah. And you’ll be surprised at how much more ambitious you become when you see what you can do. It’s not—you’re not going to just generate an image and, like, call it a day. You’re gonna be like, whoa, actually let’s create a world around this, maybe a story around this. And it becomes a bigger kind of ambitious pursuit.

REID
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe. 

PARTH
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles.