This transcript is generated with the help of AI and is lightly edited for clarity.
//
Aria:
Hey everyone, Aria here. We want to kick off the new year with inspiring conversations about AI, as well as practical and tactical guidance around technology. So for the month of January, AI specialist and my dear colleague Parth Patil is joining Reid for Reid Riffs to talk about how everyone, from individuals to enterprises to startup founders, can harness AI to level up their work, retrofit legacy orgs for the AI era, and build AI-native businesses right from the jump. So tune in, you’re in very good hands with Parth, and I will be back putting Reid in the hot seat come this February. Thanks so much.
Parth:
Thanks for the kind words and warm welcome to Possible, Aria. As I talk with Reid this month, I’ll be walking through some of my AI projects, demos, and tools on screen. While I’ll do my best to describe what I’m looking at for our audio-only listeners, consider switching to the video version of the episode on Spotify or watching on Reid’s YouTube channel for the full experience. Thanks, and let’s get into it.
//
Reid:
An area close to both of our hearts, which is entrepreneurship, starting companies, starting products. So everyone is going into AI today. What does it mean to actually be AI-native? I think most people misunderstand this, but as a matter of fact, of all the people I know, you are the most AI-native person. So say a little bit about what being AI-native is, and how does it change the kinds of problems entrepreneurs might choose, or the way they might go after, kind of, starting a company, going after a market, thinking about how they operate from the very earliest days.
Parth:
This is, this is something—and actually, I think I’m one of the early AI natives, but I have a feeling the next generation will be even more AI-native. There are people I’ve met that have never even used a keyboard, and for them that’s like an alien experience. A mouse and a keyboard, they’re used to the trackpad-laptop kind of experience. But I think, even when I think about AI-native, I think, pretty much every—I have this like, realization multiple times a week where I see something that a model can do. I see that, oh, Codex can now work for two days straight if you allow it to plan very deeply. Same thing with Opus, Claude Code Opus, you can give it a planning framework.
Parth:
If you allow it to take notes on its own progress, it’s able to work for two days straight. And then I read that, I read the paper, and then I fire it up on a project, and I see it working, and it’s like three hours deep doing like, productive work for me. And I’m just like, oh, my God. Then I go for a walk, and I’m just thinking, I’m like, like, I’m walking right now, but I’m also being productive, right? I have these processes running, and then my mind starts going, and I’m thinking, like, where do we apply this new capability? But this happens every other week, right? Some new capability comes online, and then I have to go for a walk and think, where does this apply?
Parth:
And a lot of times I think I take—like, okay, let’s just describe the workflow as it is, how we normally do it, the normal problem that we’re trying to solve. And then I go to a language model. I say, given we now have X, Y, Z capabilities, how might we reimagine this workflow so that we can do it in a more parallelized way, in a more extensible, modular way? How can we reduce the drudgery of the human experience in this work, and then maybe write the first version of that? And I usually allow it to think for a whole day and work on that for a whole day. And it is something that I try to get a pattern of, where I describe the problem, I take it to the smartest model I know.
Parth:
I tell it to think about a couple different plans, and then I tell agents to start working on those plans, executing those directions. A lot of times you don’t get, like, something that works, you know, on the first try, but you get a lot of promising directions, or you get like 80% of the way there. And all of a sudden you’ve eliminated like 100 hours of work a week. And now the few people that are working on this can go way further. There’s the parallelization of cognition now, right, where it used to be that every human was the bottleneck in any job, but now you take a process that someone does, you equip them with agents, and then they’re parallelized across many different parallel streams in that process.
Parth:
And that’s something that’s amazing. Thinking of yourself as being able to atomize a task, decompose it, and then parallelize subcomponents of it, and then lean on the computer for those pieces: it just feels like a superpower.
Reid:
So give me a real life example.
Parth:
So one thing that I’m known for in my own friend group is being this guy, like, the guy that’s just, like, deep in the coding agents, deep on the frontier of, like, what came out last week, what’s the new superpower that we have. And a lot of my friends, they still work at normal companies, they have normal jobs, they’re building their own companies, and a lot of them are programmers. And so for me, it’s like I get to experience the first version of the superpower, but my friends are better engineers than me. And so I’m kind of like, is it because I’m a noob, or is it because I’m kind of just teaching myself everything, that it all feels magical? Or does someone who’s much more experienced than me experience a different kind of amplification?
Parth:
So I’ll take—so for example, earlier this year we got—Claude Code came out, and it started taking off like wildfire within my workflows. And then I was like, well, is it just me? So then I took it to—I called all my best programmer friends in SF, we got dinner, and we were sitting there at dinner, and I was like, guys, Claude Code, this feels like the step forward in coding automation that I’ve been waiting for. Like, looking at it from different angles, and here’s what it does for me. And these are the same guys that saw me when I first interacted with Cursor. So they were like, if he’s right, we should get in early on this. And so I had my buddy Amila—Amila is a startup founder, and he’s working on a company called Palette.
Parth:
And it’s just two of them, it’s just two engineers working on this company. And the company, what they do is JavaScript optimization. So they’re trying to make web interfaces faster. So tools like Notion, how do you make them responsive? Low-latency experiences. And his entire thing is how do we make the web faster? Like, there’s a lot of JavaScript, how do we make it all faster? And so I showed—we were sitting there at dinner, and I was talking to him about the cutting edge of coding agents, and I was like, here’s why I feel like it’s amplifying me. Here are the problems I can solve, and I think you should be using it. I mean, you’re a startup founder, right? Like, you should be using Claude Code before you even think about hiring anyone.
Parth:
And at first he was a little dismissive. And this, I think, is because he’s a really good JavaScript programmer. I think he’s, like, probably one of the best that I know. And that comes with this pride and also a very high expectation. So yes, in some cases AI-generated code can be slop, but that doesn’t make it useless. We work with people that are not perfect, and we are not perfect, right? But to get zero value would be shocking. And so I give him Claude Code, and then a couple months later I’m like, oh, Codex is also very good. Now OpenAI’s Codex is competing on a similar plane. And so we get dinner four months later, and this is just like three months ago now.
Parth:
We get dinner again, and he pulls me aside, and he says, “Parth, I’m so glad you showed me these coding agents because now we are launching an enterprise partnership. It’s still two of us, and we’re just—like, it’s nailing extremely hard migrations. Codex is able to understand and solve an arcane problem in 20 minutes that would have otherwise taken me a week with all of my time.” And he’s like—if you think about yourself as a solo founder or a two-person team, you have many other responsibilities other than programming. And the fact that he can delegate to these intelligent copilot systems that can actually solve some of these multi-day problems for him means he really feels like he can’t imagine hiring people that don’t interact with these tools.
Parth:
And so it’s a totally new playstyle. And my thing is okay, cool, I need to figure out the next part of that game. The orchestration of multiple of these, and everyone’s at a different level. I think a lot of people are interfacing with ChatGPT, or they install Claude Code, they have one agent, and I’m kind of at that point of like, okay, what if we have many of these, a fleet that’s kind of on deck, some of them active, some idle, and some of them working continuously. And then can I put these tools in front of the best builders that I know? And what impact does it have on them? And it is staggering how much of an effect it has on them. They become much more ambitious. They’re, like, reaching milestones earlier than they would have imagined.
Parth:
And then they reimagine the team that they’re building around this kind of new playstyle. This is why I love working with startups, because if you think about startups as opposed to enterprises, like, you have no baggage, you’re actually just already dead, like, you’re default dead. Like, you don’t have—you don’t exist, and you don’t have this calcification of a bureaucracy and a thousand employees. You don’t even have enough employees to do what you’re trying to do to stay alive. And so then they lean into the new technology. And so my job, I view it as like, scout the new technology and put it in front of the right person. And then it reveals to me—like, I get more validation. I’m like, oh, it truly is that important of a technology. It ends up in their daily workflow.
Parth:
It ends up being something their whole team is, like, collectively contributing to. And I get a lot of feedback then, because I can use it myself and learn, but if I infuse my network with it, then I get a lot of feedback, and people are coming back to me six months later, and they’re like, I learned this new trick. Here’s this crazy new ability that we have, and we’d love to show it to you.
Reid:
Yeah, the collective learning, the network, the allies, friends, and learning. The iteration and learning is super key. So walk us through what a modern example would be. Something concrete of like, a product or a feature that a frontier model in the loop from day zero could do. Like, you know, what does that process look like in the kind of the—in a few steps?
Parth:
Well, actually, you know about this one. So we’ve been working on the Possible podcast, and you’re obviously the host of Possible, and I’ve been kind of supporting the team behind the scenes. The big kind of push for the last couple months has been, can we internationalize this podcast? And internationalize—not just like, release the podcast and then translate the transcripts. But, what if we were to recreate the same conversation but natively in many different languages? So your voice, your co-host, Aria, you and Aria both are the hosts of Possible, but can we rerelease the podcast using your voices in Chinese, in French, in Hindi? And how many languages can we do that in? And how quickly can we expand into many different languages?
Parth:
And this was an interesting project because, when we mentioned we were interested in translating, you had already been working with translation of your content for a long time. But I was like, oh, this is perfect for agents, because it is a coding problem. It is a problem that can be sliced up into a bunch of different small chunks, and then it can be largely parallelized. And so you’re kind of cheating time, and you’re cheating the, like, you’re reaching into the general capabilities of language models and the increasingly general capabilities of voice models from ElevenLabs. And also, you know, at my last company, Clubhouse, when I was working there, we had an internationalization team. It was like 20 people, 25 people. And it took a long time to just launch in one new market.
Parth:
And that was in the pre-language model world. But now the language model speaks every language. The voice agents can generate speech in something like 68 to 70 of the most popular languages on the planet. So you can almost think of a new kind of creator that emerges that’s natively localized all over the planet, where, yeah, you might be an English-first creator, but imagine if everyone could experience you in their first language. And so that’s always been on my mind. But the second piece is like, can we do it with a very small team? And so I took this problem as I described it to you, and I went to Codex, I pulled up Codex, and I just talked about this problem for 10 minutes.
Parth:
And then I said, let’s build an agentic workflow that breaks down, atomizes this problem, and then reanimates the podcast in, say, five different languages. And a combination of Codex and Claude Code built the first version of that in one day. At the end of that day, we had this—I mean, let’s see. I run the app locally so I can show it to Reid. Here we go. So we’ll pull up the pipeline. But I went to this coding agent, described the problem, and I was like, we need to atomize this, strip away how it was done, look at every new technology I show you that is now on our table, that we have access to, and then resolve this problem using AI agents. And so we end up with this pipeline. Basically, think of it like this: we have a transcript.
Parth:
We have a transcript with two speakers, and the first step is to parse the transcript into turns. So we take each person’s turns. Then we need to translate into a different language and transcreate. So we need to preserve the meaning of the original conversation. You don’t want to do a literal translation, because then you have cultural idioms that come into play. And a lot of this is what we learned when we partnered with the human experts on each language. And it’s been a very interesting journey because one technical person paired with a few language experts can actually localize an app, or an experience, or a podcast very quickly. And when I had the first version of the pipeline, it was like, great, we have the French translation pipeline working. And then Codex was like, would you like me to enable the other 68 languages?
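For readers who want to see roughly what that first step could look like in code, here is a minimal sketch in Python of splitting a speaker-labeled transcript into turns. The speaker labels, the Turn dataclass, and the parse_turns helper are illustrative assumptions for this sketch, not the production pipeline Parth describes.

```python
# Minimal sketch (assumption: a plain-text transcript where a speaker label like
# "Reid:" sits on its own line, followed by that speaker's text).
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    text: str

SPEAKER_LABELS = {"Reid:", "Aria:"}  # hypothetical label set for this example

def parse_turns(transcript: str) -> list[Turn]:
    """Split a speaker-labeled transcript into per-speaker turns."""
    turns: list[Turn] = []
    current: Turn | None = None
    for line in transcript.splitlines():
        stripped = line.strip()
        if stripped in SPEAKER_LABELS:
            # A new speaker label starts a new turn.
            current = Turn(speaker=stripped.rstrip(":"), text="")
            turns.append(current)
        elif stripped and current is not None:
            current.text = (current.text + " " + stripped).strip()
    return turns

if __name__ == "__main__":
    sample = "Reid:\nSo give me a real life example.\nAria:\nHappy to.\n"
    for turn in parse_turns(sample):
        print(turn.speaker, "->", turn.text)
```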
Parth:
And that’s when I was like, yes. I mean, we’re not ready yet, but let’s do it. I mean, I want to see that—we don’t have all the human reviewers that we want in every language. But I was like, this is the awesome thing about thinking about it as an AI-native approach. It’s like your agents are today, right now they’re in English mode. But then they could be—you could just change one word, switch the language, and now they’re transcreating the content into French, into Chinese, into Portuguese, into every single language, even, like my mother tongue, Marathi, right? And I showed, I showed like, a sample of the podcast to my parents, and they were like, whoa, this is, this is—it feels like NPR, but like, from Maharashtra, in India.
Parth:
It sounds—it feels—The quality was so—their jaws dropped. And so I think about this as like, we did the first version in a day, and the agents were just ready to enable the next level of scale. And, like, we just need to get enough experts around it so we can raise the quality bar up to our expectations. But it’s something that I could not imagine. I mean, we tried to do it, and it took like 25 people and several months to do it before. And now it’s like, it’s pretty much like—
Reid:
Next couple days.
Parth:
Next couple days. Yeah.
Reid:
Well, and one of the things actually was particularly funny, and this is part of your general point that’s important here, is look, there’s a huge amplification that comes from the agents. But so we did French, and we did French early because I’ve been spending some time in the French ecosystem trying to help various things. And so we released French as the very first Reid Riffs. And then went to some of my French friends, and they said, well, that sounds like Canadian French.
Parth:
That’s right.
Reid:
Right. And I was like, oh. And we didn’t know enough to know, but that was the reason why it’s still worth cross-checking. And so then we redid it, again using agents, to be, you know, Parisian French.
Parth:
To delineate between all the—Yeah, it’s—the naive approach is that everyone who speaks French speaks the same. But no, actually French is spoken differently in different parts of the world.
Reid:
Yes.
Parth:
And then I went back to the agents, and I was like, guys. I was like, guys, we have to actually localize this. This isn’t about ‘every language is one version’, but actually every locality gets its own unique version. And then it was like, well, actually we need to retrain the voices, so we need to create a French Reid. We need to create like a Parisian French Reid. We need to create a Parisian French Aria. And then realizing that we could do that, ElevenLabs has some very cool voice remixing tools. And realizing we could do that, I was like, well, it seems like this same approach might work for every locality in other languages as well. So the idea that you’re, like, solving this problem and the next problem and the next problem at the same time is very interesting.
Parth:
And also realizing that the models, like, the models are getting way better. And when we started, we were using an ElevenLabs model that didn’t have intonation. And then now we’re using the V3 model, which you can actually prompt to inject emotional context, and we can create something more animated—it’s not just a robotic kind of recitation of the podcast. It’s more like talking to someone that’s very animated. And so the models—I think that’s the huge thing there is, the models are getting better. And it was a leap in Codex’s capabilities that showed me that I could do it in one day. But this is something that every week, every two weeks, there’s some leap in capabilities. And I sit down on a fresh project and I’m just like, I aim it at a very hard problem, and I just say, hey, let’s see where we can go.
Parth:
And I’m shocked at where you can go in just 1-2 hours of iteration, 8 hours of it thinking. And then that first version, you’re just like, this used to be 25 people in six months, and now it’s a day to the first version. And now we’re like, okay, let’s become more ambitious. Let’s see how quickly we can get this out there.
Reid:
By the way, one of the things, again, is it rebroadens your imagination for stuff. For example, this conversation hadn’t occurred to me, but one of the fun things we might want to try with Reid Riffs, and probably using your agents in order to do this, is to essentially say, well, let’s try Scottish English, Northern English, Welsh English, classic English, and then release four versions of it with that kind of locality tuned. Because that would be fascinating for people. We should try that.
Parth:
Yes, I agree. It’s like, what’s the extent of this? I think of it as hyperlocal. I even cloned my own voice and then went very local into, like, India. I recreated my own voice as, like, local, in, like, six different languages in India. And I was like, this is incredible. Like, now we can reach everyone in a way where they feel, like, natively heard.
Reid:
Yeah, exactly. Do you want to show something with the tool?
Parth:
Yeah, well, I guess I could show you.
Reid:
Yeah, it’s up to you. You got a launch now!
Parth:
Let’s go to Possible.fm and find a transcript, podcast transcripts. And then let’s go with “R.I.P. Computer Keyboard”. Oh, that one has Tanay. We don’t have his voice. Let’s go with Reid and Aria. So here we’re going to take two paragraphs of the Possible podcast and paste it into our translation tool.
Reid:
“Paris custom version”.
Parth:
Yeah, so we have the custom voices that are Parisian French. We also have Beijing, Shanghai, which we’re working on, and we have a couple other markets we’re looking at. Let’s do Paris. And for French, I had to delineate between Canada and France because it was a point of feedback that we got. So we have a podcast transcript. We have you and Aria, the hosts of Possible. And I’m going to click run, and I’ll explain what’s happening. So the first thing the system does is break it down into turns. And so this is your turn. This is Aria’s turn. And then it’s going to tag each turn with the emotional context appropriate for that moment in the conversation. It’s as if they’re—it’s like these agents are basically role playing you guys in the conversation. Now it’s tagging the conversation.
Parth:
And so we’ll see these same turns of conversation where it’s going to infuse emotional context. And that’s the cool thing about the new ElevenLabs V3 model, which is an extremely realistic voice model. So here we have frustrated, serious, and then emphasizing. And then like, you’re making a very strong point in this, in this turn of conversation. And so the AI is starting to assign that, that emotional context, smiling. So hopefully, like, Aria’s response is going to be very, like, positive. Curious when she asks the question, serious for her final point. And now it’s actually translating the conversation into French. And so the next column will appear soon. And this is going to be the French translation column. We’re almost done.
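As a rough illustration of the tagging and transcreation steps being described here, the sketch below asks an OpenAI model to label one turn with an emotional cue and then transcreate it into a target locale. The model name, prompts, and helper functions are assumptions for this example, not the actual system.

```python
# Rough sketch of per-turn emotion tagging and transcreation (assumed prompts and model).
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4.1"   # the conversation mentions GPT-4.1; swap in whatever model you use

def tag_emotion(turn_text: str) -> str:
    """Ask the model for a one-word emotional cue, e.g. 'curious' or 'serious'."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Label the speaker's tone in one lowercase word."},
            {"role": "user", "content": turn_text},
        ],
    )
    return response.choices[0].message.content.strip()

def transcreate(turn_text: str, emotion: str, target_locale: str = "fr-FR") -> str:
    """Translate while preserving meaning, idioms, and the tagged tone."""
    system_prompt = (
        f"Transcreate this podcast turn into {target_locale}. Preserve the meaning and "
        f"cultural idioms rather than translating literally; the speaker's tone is '{emotion}'."
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": turn_text},
        ],
    )
    return response.choices[0].message.content.strip()
```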
Reid:
One other version we should try this for fun at some point is Klingon.
Parth:
Oh, Klingon! Yeah. We could release the podcast in Klingon. (laughs)
Reid:
(laughs) Just to kind of show the fact that the future is here.
Parth:
Yeah. So now here, this cell right here has just come up. And this is the first draft French translation of this conversation so far. And so now it’s in French. And what’s happening is ElevenLabs is generating the audio. It’s basically reassembling the conversation using your voice clones but now in French.
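For the audio step, a minimal sketch of rendering one translated turn with a cloned voice through the ElevenLabs text-to-speech REST endpoint might look like this. The voice ID is a placeholder, and the model_id value is an assumption; check the current ElevenLabs documentation for the exact identifiers.

```python
# Minimal sketch: render one translated turn as audio with a cloned voice.
import os
import requests

ELEVEN_API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "YOUR_CLONED_VOICE_ID"   # placeholder: e.g. the Parisian-French Reid voice clone
MODEL_ID = "eleven_v3"              # assumption: verify the current v3 model id in the docs

def synthesize(text: str, out_path: str) -> None:
    """Call ElevenLabs text-to-speech and write the returned audio bytes to disk."""
    response = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVEN_API_KEY, "Content-Type": "application/json"},
        json={"text": text, "model_id": MODEL_ID},
        timeout=120,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # the endpoint returns raw audio (mp3 by default)

if __name__ == "__main__":
    synthesize("C'est un vrai plaisir d'être ici avec toi aujourd'hui.", "turn_01.mp3")
```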
Reid:
Kind of as we’re doing this, pop up a level. This is like, an example of our workflow, where something that was previously a massive stretch, maybe too expensive to do, then becomes something easy to start prototyping. And actually, even for Reid Riffs, we’ve deployed it in French, with a huge amount of acceleration through agents, but then selective, intelligent use of humans in the loop.
Parth:
Exactly.
Reid:
For getting the product right, et cetera. And this is the parallel to, for example, a founder who might be thinking about, like, okay, what’s the way—we’ll just start doing it? But where are the things where you use the AI to accelerate you and what you’re doing? And then what are the places you bring in experts, feedback, potential customers, et cetera?
Parth:
So it looks like we have a French translation. We’ll play a couple seconds of it.
AI French Reid:
Tu voudrais que le plus possible soit géré sur le territoire. Faire de l’IA—American intelligence—c’est aussi avoir toute la chaîne complète. On se fait complètement dépasser parce que notre administration a l’air de croire que c’est une course sur les réseaux sociaux à qui va poster sur Truth Social au lieu de vraiment bosser dur là-dessus.
AI French Aria:
C’est un vrai plaisir d’être ici avec toi aujourd’hui. On a vu pas mal d’articles récemment qui parlent du boom des data centers, qui provoque déjà une grosse vague d’embauches dans la construction et les métiers techniques.
Parth:
And so that’s Aria’s voice.
Reid:
All the way back to when we did this with the Perugia speech. It’s just, it’s so mind-blowing to hear your own voice speaking a language. Like, you know, with the other Indian dialects, you’re like, I don’t speak that language, and yet that is my voice speaking that language.
Parth:
It’s kind of like accessing the multiverse.
Reid:
Yes.
Parth:
It’s like, imagine if you were French. (laughs)
Reid:
Yes.
Parth:
Here’s a, here’s a glimpse into that.
Reid:
And this is all, like, kind of, a very concrete dive for kind of saying, look, this is how to operate, how to do quick internal tooling, how to explore various versions of product market fit, all of the things that, you know, I think basically, frankly, any credible founder today has to be doing to show they’re AI-native. If you’re not, then you basically shouldn’t be doing a company.
Parth:
Yeah. And we should be thinking, like, what parts of this workflow do we absolutely want to start aiming AI at? Even just to get a baseline of its performance, even before we are like, oh, it’s good enough. And I just asked our coding agent to tell us, how many agents are we using in this system? Because—okay, here we go. We’re using six agents. So the first agent tags each turn of conversation with emotions to guide the delivery of the voice. Then there’s an agent for single-turn tagging. Then there’s an agent that translates it into the target language. Then there’s an agent that validates it, making sure that the transcript is still holding the conversational tags. Then there’s an agent that listens to the generated audio. So we generate the audio, then it transcribes it, and then it listens.
Parth:
It’s like, yep, yep, that looks like what we’re aiming for. And there’s a second agent that’s verifying language on the other end. Part of this is like, thinking like, how much of this can we double check and triple check and generate and regenerate before the person has to come back in and do the final approval? And I’m pretty excited about how much of this we can do before our experts come in, because our experts are then like, let’s focus on this part, the idioms, and especially the tech references: how do you preserve the meaning of something that is hard to explain in a different language? You have to have an expert in that culture, in the idioms of the culture. Then you get into transcreation. All of our agents are using GPT-4.1, and it runs on the Agents SDK.
Parth:
But the reason this system is possible is because I said, use the Agents SDK to solve this workflow. So I went to the smartest model, and I said, we’re going to use agents and then create the next version of this pipeline.
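To make the agent roles Parth lists a bit more concrete, here is a minimal sketch of chaining a few of them with the OpenAI Agents SDK. The agent names, instructions, and the simple sequential chaining are assumptions for illustration, not the production pipeline.

```python
# Rough sketch of a few of the pipeline roles, chained sequentially with the OpenAI Agents SDK.
from agents import Agent, Runner  # pip install openai-agents

MODEL = "gpt-4.1"  # the conversation mentions GPT-4.1

emotion_tagger = Agent(
    name="Emotion tagger",
    instructions="Tag each conversational turn with a short emotional cue to guide voice delivery.",
    model=MODEL,
)

transcreator = Agent(
    name="Transcreator",
    instructions="Transcreate the tagged turns into Parisian French, preserving meaning and idioms.",
    model=MODEL,
)

validator = Agent(
    name="Validator",
    instructions="Check that the translation keeps the conversational tags and the original meaning.",
    model=MODEL,
)

def run_pipeline(turn_text: str) -> str:
    """Tag, transcreate, then validate a single turn; returns the validator's report."""
    tagged = Runner.run_sync(emotion_tagger, turn_text).final_output
    translated = Runner.run_sync(transcreator, tagged).final_output
    return Runner.run_sync(validator, translated).final_output

if __name__ == "__main__":
    print(run_pipeline("So give me a real life example."))
```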
Reid:
So, you know, part of this is, amazing amounts are now doable by individuals. But so how does that reconceptualize possibilities in founding teams? Right, so what might now be possible in terms of founding teams? What they should look like, what might be different from a founding team five years ago?
Parth:
I think the biggest difference is, of course, you want to embrace the technical velocity that we have accessible. So, you know, your CTO, the first technical person, should be learning how to use Claude Code or Codex, maybe both, and then very quickly moving to a level where they can orchestrate a small fleet of, say, 20 of those at the same time. There is a slight learning curve there, but it is so much more worth it for the most senior, first technical person attacking anything to adopt that mindset. And you should be willing to spend the money on these tools because actually, like, it cascades through the rest of the hires that you make, the rest of the people that you bring on. You’re going to want each person to be individually amplified.
Parth:
And so what’s different is that maybe each person has some kind of compute spend, which is maybe even equivalent to a contractor hire in Claude Code spend and coding agent spend in aggregate. It is almost like, you think about that as a first-party kind of approach to the problem. And I think that the person who embraces these workflows is going to see at least a 50 to 70% increase in productivity. And then looking for generalists, people that quickly adapt into multiple roles. So I think the PM that can vibe code, that can quickly convince you of a new design choice, right? It’s like, oh, maybe it’s not production grade, but it gets the team thinking about a new way that the product could be designed.
Reid:
Potentially weeks faster.
Parth:
Yeah, weeks faster. Yeah, exactly. And I think it’s about starting with that expectation of speed, because, you know, as companies grow, we tend to get slower as the coordination tax builds up. But starting very quickly, quickly unpacking the hypotheses early on, you’ll reach these realizations before you even have to raise money. Or when you do raise money, you raise for different reasons. And what I see in the startups that I’m advising is that they can go much further with, you know, with a very small team and a strong core of, like, agentic tools.
Reid:
The thing I would add is, you know, kind of classic, you know, call it two decades ago, three decades ago, was you have a business person, a technical person, you know, as the kind of co-founders doing something. And if you had only a business person, they would hire a technical person. If you had only a technical person, they’d hire a business person. And I agree with you about that. But I also think that one of the first jobs for the technical person is to similarly make sure that the business person is amplified, right? Like, it’s not just, okay, this is the way I’m doing my workflow for DevOps and for experimenting with product design, for product market fit and the rest. Yes, yes. But also amplify.
Parth:
Yeah, that’s right, that’s right. Yeah. And I think they should be using state-of-the-art models as well. Like, everyone should, in your small team, should be using state-of-the-art models that help them with all aspects of their work.
Reid:
Right.
Parth:
Your general copilot, using the best one available.
Reid:
So one of the things that, you know, there’s a—partially because we live in a media environment, and obviously Hollywood’s tied itself into knots about AI and all the rest of this stuff, and you live down there, so you see a lot of it. Like the ‘we’re using it, but we’re not telling anybody because it’s kind of unpopular’ thing. But even though it’s such a clear amplifier, it’s kind of like—as opposed to a Masonic handshake, there needs to be an AI handshake now.
Parth:
It’s like, I finally get to tell a story. I was never even in Hollywood.
Reid:
Yes.
Parth:
You know, I could just go, oh, like, I have an idea. Now we can put out the first version. For $300, you can make the first version. And, like, in the same way that we’re vibe coding prototypes, we’re vibe coding storytelling. Or, like, animating and creating these worlds as, like, concepts. And they may eventually become bigger things in a traditional format, but they don’t have to, either.
Reid:
Yeah, but the speed of using it, exploration, iterative development, it’s the same thing where you, you learn by doing. You learn by seeing what you did on your first iteration. Like the first—like you said, okay, let’s use agents to build this. Oh, wait, we need an emotional tagger as one way of doing this. So kind of our last question for the moment on AI in startups: when you look at startups, how do you see whether that AI is marketing or whether that AI is real? How should people think about that themselves? Like, am I being real enough? Am I being AI-native enough? Am I just using AI as buzzword bingo to try to get money or attention or anything else? And what does that real AI traction look like?
Parth:
Yeah, there’s a lot of that I see going around these days, which is kind of like this buzzword era of AI-this, AI-enabled, AI-powered. And you’re kind of—and maybe it’s because I interact with all these models, I’m kind of like, but which model? What AI? Like, do you even need the AI, or are you just shoving it in there so that you can say that it’s AI? And for me, it’s like, if you don’t mention—if you don’t go one level deeper, it makes me very skeptical. It makes me wonder if there is anything of value here at all, or if it’s just pure signaling in order to get attention, or to send a kind of message to people that wouldn’t be able to discern.
Parth:
And I think that the other way to put it is, can you describe what it is you’re doing without using the letters AI? If you can’t, then maybe the AI isn’t the important thing here. And then the other thing I see is, it’s not even about the AI. Like, the AI powers it, but does anyone care what database technology Uber is sitting on? No. For the actual person—it’s like if you were to sit in the car, the average person just wants to get from point A to point B safely. And so a lot of that is, like, not even relevant to the end user.
Parth:
And it’s actually—like, I imagine a world that I want to live in where AI is under the hood and taken for granted because it’s just so good at what it does, and it stays out of the way, and it’s not making itself the whole—like, that is not the purpose. It’s in service of the actual objective that we have, which is maybe to create a new artifact, learn something, or, you know, build a product. And I think that when AI blends into the background, that’s the best version of this.
Reid:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.
Aria:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles.

