This transcript is generated with the help of AI and is lightly edited for clarity.

//

REID:
In 2015, Ivan Zhao and his co-founder moved from San Francisco to Kyoto, Japan, where they spent 18 hours a day in a two-story house so small that only a traditional shoji screen separated their bedrooms. Rebuilding Notion from scratch after scrapping three years of code is one of tech’s great reset stories, but it’s also a window into a radically different way of thinking about software.

ARIA:
Today, Ivan runs a company with over 700 AI agents, working alongside roughly 1100 employees at Notion. But this isn’t a conversation about AI for AI’s sake. It’s about what happens when you treat computers not as industrial machines, but as materials to be mastered, like steel, like steam engines, like the fundamental elements that reshape entire civilizations.

REID:
Today we’re asking: how do we design organizations, not just tools? How can we think about human scale and human-centeredness in this age of AI? And what can Renaissance Florence, the design of cities, and Douglas Engelbart’s vision of augmented intellect teach us about building in an age of infinite minds? Without further ado, welcome to Possible, Ivan Zhao.

REID:
Ivan, I’ve been looking forward to this for a number of months, so welcome to Possible. And we’re here at this, unsurprisingly, very cool office. And again, not surprising given the design background, given the focus on kind of the human touch of this. How did you also deliberately curate the office? What’s the kind of design and the kind of artistic sensibilities that you bring to the office and to the culture of the company?

IVAN:
Artist accessibility. That’s it. It’s very serious. (laughs)

REID:
Yes. We can use less serious work.

IVAN:
Oh, no, this is fine. It’s kind of intuitive, I would say. Ever since Notion was small, we’ve always cared about what’s surrounding us. You don’t want to hear noise, but you also don’t want to see ugly things. So if we have a choice, we prefer — at least, personally, I prefer to see a beautiful Eames chair. And yes, they’re a little bit more expensive, but they’re extremely comfortable on your body and comfortable on your eyes. That’s one factor. The other factor is we’re quite inspired by timeless tools in history. So why not surround yourself with timeless office tools so we can design timeless software?

REID:
Yeah, no, makes sense. So like one of the things I heard about is apparently there was a rug from your childhood that was in Notion’s first office.

IVAN:
It was there until, like, our last office, until maybe six months ago.

REID:
Okay.

IVAN:
I grew up around rugs from my hometown, because my hometown is in this Muslim part of China, so we use a lot of rugs. There’s this red and blue rug we’ve been using ever since the first Notion office, and it has been traveling with us to all the offices. And six months ago, we moved to a new office in SF (San Francisco). So I finally took the rug home. Yeah, it’s still clean.

ARIA:
That is awesome. So I want to dig right into Notion. And so can you talk about the core problem that Notion AI has tried to solve? And has that changed over the last few years?

IVAN:
A lot of people think Notion is a productivity tool, an AI-powered workspace. We actually started from a more philosophical angle on tools and technology. I was really inspired by the computing generation of the ’60s and ’70s. The hippie generation took [root] on the West Coast, largely asking: what should you do with this mainframe computer in the basement that’s printing out paper with numbers on it? But if you connect this machine to a monitor display, you can make it interactive, and it can become a new type of medium. I was reading their papers in my last year [of] college and realized this was the most meaningful thing I could do as a programmer, as a designer: bring this medium of computing to more people, democratize it.

IVAN:
So it’s no longer just the “digital scribes” who can use that medium; more people can too. I’ve been working on Notion as an idea for more than a decade. It started as this kind of “you can do everything” product. In the last three years, AI has been this “you can do everything” new technology. So it’s become a new piece, a new Lego block in the toolset for us.

ARIA:
And so in some ways it feels like we are a million years from that time in the ’60s and ’70s. And other times it feels like we’re right back at Xerox PARC: we’re creating this new technology, we’re so excited. You’ve said that AI came about at sort of the right moment. Like, how did you know it was the right moment, and sort of not too early, not too late? How was it just right?

IVAN:
If you use it — at least, the first model that clicked with us, with me, was the GPT-4 class. The GPT-3.5/3 class is useful, but if you see GPT-4-class models, it’s not just a text regurgitator; there’s a piece of mind in there, a piece of little human thought in this thing. And it’s a brand new material, just like the relational database, just like the bitmap display. It’s a new thing that can unlock so many new possibilities. There’s a funny story: when we got early access to GPT-4, because our friends were working at OpenAI, we thought everybody had gotten access to it as well. So, like, holy shit, we’re going to race the world to build the first product. We actually launched our first Notion AI product a week before ChatGPT happened, because we were just rushing for it.

IVAN:
And the story was, my co-founder Simon and I got access during a company retreat in Cancun, Mexico, and we just locked ourselves in the hotel room for the entire company retreat, except for the keynote I happened to do. We were just building the first prototype, and if you build tools with this, you know it’s a completely different material.

REID:
Yeah, no, I remember the GPT-4 moment because I was on the board of OpenAI at the time. And when I saw the difference between 3.5 and 4, that’s part of what made me create the book before the last one, Impromptu, to try to show AI as a collaborator in writing a book, that piece of the human mind as ways of doing it. Because as you probably remember, the GPT-4 launch date was kind of moving. So we ultimately ended up publishing it through Amazon directly to be able to hit the date, to be on the exact right date. Because part of the thing was I wanted to show what was possible with GPT-4. Now, when you think about Notion, it has this kind of textual workspace, a tool for everything.

REID:
But part of what comes with that is, one thing, obviously, AI doesn’t just change what a document or its equivalent is; it also changes interface modalities. Right? So like, you know, last year, in trying to get people to use AI more, it was kind of like voice pilling. So how do you think about how the nature of the interface to computing changes? What’s the fabric that Notion’s becoming, and how are you adding that in?

IVAN:
Yeah. Rather than considering it as a fabric, we like to think in terms of building blocks.

REID:
Yeah.

IVAN:
And to me, it’s very difficult for one company to change the building blocks or language of anything that humans use. Software UI is kind of just like a language, right? We all grew up using a graphical user interface. We understand there’s a box, you’re supposed to click on it; double-click means a different thing. It’s almost like speaking French or English. It’s a language you learn, and there are all these building blocks within a language. In some sense, the premise of the first version of Notion was: okay, what is the core language of computing, which has historically always been in the hands of programmers? Can we open it up to allow non-programmers to stitch those building blocks together? The building blocks being text, as you mentioned: text editing; the relational database, one of the most powerful building blocks; language; tabular formats;

IVAN:
Different graphical user interface pieces. Right. Normal people should be able to work with those. I would say the constraint, even as AI opens up a lot of possibility, is still there: you still have to look at a computer screen to interact with whatever the thing is. There are more modalities introduced through voice and sound, and that’s new, but the higher-bandwidth thing is still looking at it. Until we invent something that popularizes the brain-computer interface, which is probably not far off, right? That’s another topic. But until then, there’s a constraint, which is the human biology of seeing things, touching things, clicking on things, and talking to things. Language models changed that a lot, but we’re still constrained by human biology, constrained by our cultures of understanding how to interact with those boxes on the screen. So we can sort of see work.

IVAN:
I’m coming back full circle a little bit. If you think about it, the first version of the popular AI product is the chatbot. What was the previous killer app before the chatbot? It’s Google. It’s a chatbot. It’s a text input.

REID:
Yes.

IVAN:
That’s how we understand this new tech, the most powerful new technology we have, the language model: we’re [mirror-imaging] Google. Right? And if you think about the past couple of years or so, coding agents have become really popular, IDEs became popular. So everybody was running coding agents. But in the past six, nine months, people realized you have… limited bandwidth managing one coding agent. When you’re managing a dozen coding agents, what do you do? It’s a freaking Kanban board. We’re going back to project management software. So the constraint changes quite slowly, because humans don’t learn as fast, because it’s really hard to change the physical display in front of you.

REID:
How is the flow of work at Notion structured in terms of humans and agents? Which things do agents do, in which ways are they amplified between them, and where do you think that’s evolving to?

IVAN:
Yeah, I don’t think anybody knows. I think that’s the honest answer. Nobody knows the right answer; everybody shares practices on Twitter, in blog posts, so we’re figuring this out together. I would say the overall trend is: if you think about a company of 10 people, or 100 people, or a thousand, or 10,000 people, a chunk of what such a group of people does is information passing, coordination, alignment — the things necessary to align a group of hundreds or thousands of people. [That’s becoming] unnecessary, right? Language models can almost do this kind of information-passing alignment better than humans can at this point, even a year ago, a year and a half ago. So why don’t you just let language models do that simple alignment work? The metaphor I like to think about for this is, it’s almost like we’re in New York City.

IVAN:
It’s almost like buildings. Think about it: until 130, 150 years ago, most buildings were no more than five or six floors tall, brick or iron, because if you built more than six floors, the weight would collapse the building itself. And that’s kind of similar to human organizations. You have a lot of people, and the organization naturally slows down because there’s more work to do to align such a group of people. And that coordination work scales super-linearly with the organization itself. At some point, when you have a really large organization, you either have to move to a GM structure with sub-corporations, or just slow to a halt. Language models can do this coordination work for you.

IVAN:
The language model, the metaphor we like to use, is the steel beam of organizations. It allows organizations to grow in throughput without adding more people just to do the information-passing part of the work, so humans can elevate to more strategic, outer-loop ways of thinking rather than information passing.
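As an aside, the super-linear coordination cost described here is often illustrated with a classic back-of-envelope formula: a group of n people has n(n−1)/2 possible pairwise communication channels, so alignment overhead grows roughly quadratically while headcount grows only linearly. A minimal sketch of that arithmetic (an illustration of the general point, not anything specific to Notion’s setup):

```python
# Illustrative only: the number of distinct person-to-person communication
# paths in a team of n people grows quadratically, n * (n - 1) / 2,
# while headcount grows linearly.
def channels(n: int) -> int:
    """Number of distinct pairwise communication channels among n people."""
    return n * (n - 1) // 2

for n in [10, 100, 1000, 10000]:
    print(f"{n:>6} people -> {channels(n):>12,} channels")
```

At 100 people there are already nearly 5,000 possible channels; at 10,000, nearly 50 million — which is why large organizations historically split into divisions, or, in Ivan’s framing, need steel beams.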

REID:
AI is also more dynamic and has more elements of human thinking. So what are the ways you chart this trajectory? Because it won’t simply be note-taking; it won’t simply be, oh well, we had this meeting and then someone else needed to hear the results, and that information gets summarized and communicated with humans in the loop. There’s a part of how it changes, like, our epistemology, what we understand to be true, the way we communicate. So what are some of the earliest ideas that you have on that, whether it’s for the product and the customers, or how you’re operating here at Notion?

IVAN:
We talk about multimodality. At least for me, I no longer write anymore in the traditional sense. What I do oftentimes, sometimes while making tea in the morning, is talk to my phone: I open the Notion app and start meeting notes with myself. A meeting of one. And when I finish, I have my rambling in a piece of meeting notes, and I have AI turn it into a doc. And because AI is good at summarizing, good at regurgitating, good at editing, the doc is actually better than what I can typically write, and more people are picking up this habit. Right? My relationship with my email inbox also looks different. I’m no longer going in to triage each single item. I trust this intermediate layer enough that, okay, I trust you to know what’s important in my inbox.

IVAN:
Surface those to me, then archive the rest. Those are the products we’re building and releasing to the public. But it changes how I use computing.

REID:
Yes.

ARIA:
So AI often changes things sort of one-on-one: how each employee is doing their job, using their inbox, or, instead of writing docs, speaking to their AI over tea and coffee. And I think some people are saying that the companies that are going to win are the ones that are starting now or started last year, because they can be AI-native. But Notion seems to be turning that on its head. You guys are one of the few companies that has really transformed from sort of a more traditional SaaS company into a truly AI-native, AI-first company. So do you have any thoughts about that journey you guys have been on and what has made you successful?

IVAN:
It’s still early to say, but it’s very difficult. It’s very difficult. I agree with you: almost no at-scale software company has done this transition well. We’re probably one of the best. Well, you have to be AI-pilled yourself as a leader. I live and breathe every single new tool, every single model that comes out. I vibe-code, building games on weekends, during breaks; you have to feel this new material. I think another thing we’ve been doing pretty well at Notion is just having no sacred cows about reinventing yourself. It took us a long time to get product-market fit, like four or five years. We rebuilt Notion three, four times to get to product-market fit. In fact, we’ve been building with AI for three years now.

IVAN:
We’ve pretty much re-architected the core AI agent layer of the product five, six times, because the industry is changing so fast, right? If you don’t do that, your architecture won’t be usable by language models, and then you’re not joining the game.

ARIA:
It’s not a one-time event. This is something you constantly have to iterate on, watch, and improve as you go.

IVAN:
I think organizations calcify; that’s the nature of them, right? Once you calcify, you can’t change. And what’s changing is the environment: new language models come out every month, every couple of weeks now. They do slightly different things; new tricks get discovered. If you’re calcified, you’re not going to be able to adapt. Other people adopt faster, and this tool gives you such a quick compounding return that soon enough you get outcompeted. This is what keeps me up at night, but it also makes the game really exciting, because the pre-AI era feels like a sleepwalk. (laughs)

REID:
(laughs) Sleep stumble.

ARIA:
Were there any particularly hard trade-offs that you had to make during that transformation? Or do you have any advice for other founders who are going through something similar? They’re going through these tough times also and are looking for advice.

IVAN:
Just don’t think about cost. People think, oh, I need to justify the ROI of this and that. By the time you finish the ROI calculation, it’s wrong. And second, it’s too late. So we actually encourage our engineers to burn as many tokens as they can. It’s a point of pride, not cost saving. I think, also, be okay with not knowing anything coming into it. Nobody knows the answer. Even the people who create the models don’t know the answer. How could you know the answer? Right? So change the mindset from planning the perfect plan into just trying it. Dial up the urgency and the agency. Those are the typical ones.

REID:
Yeah, totally agree. And part of it is, the not knowing the answer also means having the humility to recognize that your own particular skill set and what makes you unique may be changing, and it has to change in part with how you use the tool. So it’s like, I have all this knowledge, right? Well, the AI is also bringing a bunch of knowledge to the table. So be engaging. Don’t be trying to say, no, no, what I know is, I know this particular history of design better, or this particular set of techniques. Instead: okay, how do I use it effectively? Start engaging.

REID:
And more or less one of the things that I tell individuals now is if you haven’t discovered something by which AI can help you on any serious thing you’re looking at, you just haven’t tried hard enough.

IVAN:
Yeah, it’s kind of like as a human it’s not just your knowledge and your capabilities.

REID:
Yeah.

IVAN:
There are other dimensions. I like to think there’s a bucket of your capabilities, what you can do. There’s another bucket: your judgment, your taste, your values. What do you care about? What do you want to bring to the world? Your aesthetics. That’s the second bucket. The third bucket is your agency, your drive, your will. So in some sense, language models truly democratized access to knowledge and access to capabilities. Coding was the most scarce capability until three years ago; it’s become abundant. So now the bottleneck in a tech company is no longer can you build it, it’s what should you build. It’s the judgment, the taste. And can you will your way through the walls of difficulty to push that into the world? So the knobs are different now.

REID:
Yeah.

IVAN:
Right.

REID:
Well, this reminds me of your essay, I think it was “Steam, Steel and Infinite Minds,” where individual workers now have essentially infinite minds. And Satya Nadella, I think, referred to this as well. So how are you seeing that sudden change of imagination about what’s possible in terms of work, generation of content, IP, et cetera? What are you seeing so far about that change of imagination and infinite minds?

IVAN:
It’s changing. Yeah, I don’t know. I don’t think anybody knows what’s going to happen at the other end. I wrote that because I had a lot of thoughts in my head, and I think a way to communicate those thoughts, or to try to predict a little bit of the future, is to look at the metaphors of the past. We already talked about how programming is completely different, right? That’s almost the first profession that’s been completely disrupted by language models. And it turns out it was the most scarce profession in the pre-AI era.

REID:
Yes.

IVAN:
But the rest of knowledge work is likely going to see similar things too, right? So the way we can think about it is: okay, in the first and second industrial revolutions, power, energy sources became abundant, so you could have an industrial revolution and scale production of physical goods. What does it look like to have that for intellectual goods, for goods of the mind?

REID:
Yes.

IVAN:
The knowledge economy is half the U.S. economy today. It’s as if we discovered the infinite energy source to power it; what is the world going to look like? The slower-changing bit, once again, is our habits, the calcified mindset about what knowledge work is. An analogy I like: before we discovered fossil fuels, a lot of factories and mills were powered by water wheels, right? Then the steam engine and fossil fuels, all those combinations happened, and the first generation of steam engines actually did not bump the productivity of factories by much, because the factory owners were just replacing a centralized water wheel with a centralized steam engine shaft. Then they realized, wait a second, the energy source is abundant.

IVAN:
They don’t need to be sitting next to a river; they can sit next to a port, they can lay out a flat line where they can run multiple steam engines in parallel. So the mindset change then allowed the second industrial revolution to happen. Now we have the whole textile industry and all the goods for… the world, a lot of things in the world, right? I don’t think we have that mindset change for knowledge work yet. We are in the early water wheel, just-adopting-the-steam-engine era of figuring out what to do with this thing that shifts bits around.

REID:
Well, this is a little bit like the earlier question, which is: right now most people think, oh, there’s going to be a person and then you have your AI agent. But of course you’re going to have a team of AI agents. You’re going to have a team working with individuals, you’re going to have teams working within companies, and there’ll be various patterns of interface between humans and AI workers, both individually and in teams. What do you think that shifts about the notion of what a team is?

IVAN:
Yeah.

REID:
What is an organization? You know, I’ve been thinking about, like, okay, what’s the directory? You have an Active Directory, a human resources directory. But now you’re going to have a directory that includes all these agents.

IVAN:
I know. I mean, it’s a good question for Workday, right?

REID:
Yes.

IVAN:
There are multiple dimensions. One is what can drive economic outcomes. The other dimension is what humans like to do. So I would say we’re still largely in the single-player era of using language models, right? Developers working with their own coding agents in their own terminals. So actually, we’re brushing against this problem of how you manage a team of coding agents with a team of developers. We’re looking at this problem right in front of us as of this month, February 2026. So how do you solve this multiplayer AI problem, this collaborative AI problem? That’s something we’re passionately solving at Notion. But I think as an industry, we need to think about how to manage not individual agents, but a factory of agents.

REID:
Yes, right.

IVAN:
That’s a productivity, human-computer interface problem we need to unlock. And on the human-nature side, there’s a lot of talk, which I agree with, that there will likely be a billion-dollar startup run by one person. It’s probably already happened somewhere.

REID:
I don’t think so, yet.

IVAN:
I think it makes sense.

REID:
Yes, it makes sense.

IVAN:
Right.

REID:
But I think it’s still a bit of a TBD. There’s this interesting question around, for example, is there one run by one person, one run by zero people, one run by lots of people. Anyway, there’s the whole question.

IVAN:
Yeah. What is the entity of a corporation? It’s an invention also. I think that’s the interesting question. I think the capability is there; then the constraint might be the legal entity. Like, a company has to have a human to sign the paper. Your CEO, your CFO has to be on the hook when anything bad happens, right? So that’s one factor. And another interesting factor is: do you want to do that? Do you want to run a company by yourself?

ARIA:
Totally.

IVAN:
Would you want to or is it too lonely for you?

REID:
Mhm.

IVAN:
I think human teamwork is kind of like, when productivity becomes abundant, and assuming there’s a baseline of human welfare protection, what would you like to do? Would you like to play a game by yourself all day long for tens of years? Or would you like to play a game with other humans? I think that teamwork flavor is the question. I prefer the world where you work on problems with other humans; that is what makes a company fun. That makes running Notion more interesting for me, because there’s a group of people I love working with. We go to battle, we go puzzle-solving together. I hope that doesn’t get lost as the productivity of the individual grows with language models.

ARIA:
So that feeds directly into my next question. A lot of people are talking about the capabilities of the model and how that is going to, you know, springboard companies to the next level and keep them growing. But other people are saying that no matter how good the technology gets, this is actually an organizational problem. Humans are social animals; they want to hang out with each other. You have interpersonal questions; you have some people resistant to change. So what is your take on that dynamic: is AI implementation just a model question, or is it an organizational change question?

IVAN:
Oh, it’s definitely organizational change. I think at any given time you’re bottlenecked by the slowest piece in the factory. Right now the capability of the models is pretty good; now the constraint is human adaptation at the organizational level, our habits. Right? So we talked about the steel metaphor. All those things are organizational problems. The way I like to think about this kind of problem is: what are the invariants? What doesn’t change? Ever since we came out of Africa, we have hands and fingers, and we have language. We learned to use fire. These things don’t change, or they change much slower. Culture changes faster; internet memes change a lot faster. But those things don’t change as fast. So until we have brain-computer interface systems, we need to see the thing.

IVAN:
And therefore interfaces, buttons, and all that stay with us for the moment. And humans like to gossip; humans like to spend their time with other humans. So even with an infinite abundance of productivity through language models, we’ll likely want to do this. Maybe we just come to work and chitchat and watch our machines. Seriously, that could be a possible future, right? And from that lens, what doesn’t change is the human part. The world around us will probably — well, the changes seem to be accelerating.

REID:
Yeah, well, I think elements of the human thing don’t change. Like, I agree that we’re social animals, right? Aristotle, you know? What people understand when they say we’re political animals is actually that we’re citizens of the polis, and that’s back to the city metaphor, which I think we want to get back to as well. And I think those elements will stay. But I also think we change and evolve through our technology, right? It’s not just eyes, clothes, microphones, you know, compute, software; it’s, for example, telescopes and microscopes changing the way we think, changing the way we see the world.

REID:
I think that one of the questions — and so I think there will be that eternal thing of human beings wanting to be citizens of the polis, or citizens of the company, in a way. But I think there will also be changes in it. You know, like one of the jokes that I’m sure you were also hearing and participating in a couple of years back is like, well, my AI agent will write your AI agent an email. Yours will —

IVAN:
Shrink.

REID:
Yes, yes, exactly.

IVAN:
Summarize them.

REID:
Right. Like, the whole thing is round trips through emails, through agents. And so what in the information system do you think it’s going to be most useful to keep our metacognition tuned to? Because some of the changes we can guarantee are going to happen involve speed.

IVAN:
Yeah, right.

REID:
Because there are various ways in which compute and information just work at speed. It’s part of the reason why our developers now are using multiple AI agents and parallelizing: because you’re like, okay, I can get to something at speed. I think the same thing is going to happen with various forms of informational work. It becomes a necessity because of the speed, a necessity because of the interface point to lots and lots of information. What points of metacognition do you think will be, as I would call it, longest-lived or most important for people to be thinking about as they do work?

IVAN:
It’s a good question. If we bring it back to the framework we were talking about: capability is the first bucket; your taste and values, the second bucket; then your agency, the last bucket. Right? We can put speed in the capability bucket. Yeah, there tends to be a right answer there. To me, the middle bucket doesn’t actually have a right answer. What are your values? Right now the buzzword is taste, like, what is your taste? Do you like Chinese food? Do you like Italian? There’s no right or wrong answer. Are you drinking tea or drinking water? Whatever. To each their own, right?

IVAN:
So I think as long as we’re living in this human society of rule of law, where people obey these things and a CFO has to sign legal docs in a corporation, then humans need to have the final say over what this entity, this business or project, is trying to do in the world. The values, the taste. And to me that is uniquely human and likely doesn’t have a right answer, meaning the economy is a chaotic system. The market is a chaotic system. Nobody knows the outcome, because every participant can change the outcome; the participation itself changes the outcome. By pulling together our market ideas, our business ideas, we create a market; then there’s a picking and choosing in the market, and some get flushed out. And it’s almost like the art world, right?

IVAN:
Yes, you need to make a piece of art, but it’s our decision as a human group what counts as a good piece of art. I think business might well become like creating art, in the sense that when everybody can create a lot of things really quickly, then you just have to inject your value system into the market, into the world, and let the world decide: should that be the thing for people to buy, to follow, to like? And there’s no right answer. You have to participate, and your participation is your values. It’s your aesthetic.

REID:
Well, I mean, to some degree there are at least directionally right answers, because it’s kind of like, you know, the entrepreneurship journey that all three of us have participated in in various ways: you’re making both a prediction and an intervention in order to say, this is what I think the market will respond to. And you could be very right, partially right, not very right, totally wrong. And one of the things, I think, is, we know the whole world’s accelerating because of the speed of compute, but you could almost ask: how does information now flow at a much faster rate, and what does that mean for how we bring these human systems together?

IVAN:
Yeah, I think it used to take — you guys invest in the typical 18-month cycle of a Series A company finding an idea, product-market fit, and trying it, right? Should it be 18 days now? Possibly. It used to take 50 people to find a scaled product-market-fit company. Can it take four or five people? Can it take a weekend? So in a sense, the economy can almost become: everybody has their own Etsy shop, putting out something hopefully valuable, but at least a piece of art. Then humans can vote on it through the thing we call dollars, or whatever currency it is, and you buy other people’s pieces of art. That’s what we do in the post-AGI world, right? And the speed can kind of enable that. And if [language models] can do that, it’s on a daily, weekly basis or something.

IVAN:
I don’t know. Those are some thought experiments.

REID:
Yeah, exactly.

ARIA:
Well, so you were talking about values and how we can all participate in shaping them. Before this conversation we were talking about walkable cities, and here we are in New York right now, one of the great walkable cities in the United States. And you were saying how, well, 100 years ago there were lots of walkable —

REID:
As the New Yorker, you have to say great walkable cities in the world (laughs)

IVAN:
And there is plenty in Europe.

ARIA:
I have some heavy lifting! “One of the.” Totally, totally. I’ll say the best walkable city in the US, right up there with the great walkable cities in Europe. But you made the great point that over 100 years ago, a lot of cities in the US were walkable. Even places like Akron and Kansas City were walkable. And then we totally changed the tenor of cities when the automobile came in and highways shot through the centers of towns. You were saying that we’re at that same moment with AI. We made one choice 100 years ago that changed the trajectory of some of our cities, and I would candidly say it was the wrong choice.

ARIA:
And so we shaped it in one direction, but now we have this other choice. So what are the choices we’re facing as it relates to AI, and how will that shape knowledge work and work in general?

IVAN:
There’s a concept I really like, and I think a lot more people should have an intuition for it: it’s called human scale. It goes back to what doesn’t change. A human-scale city is a typical European city; Florence is a perfect human-scale city. You can walk from one end to the other in an hour. The streets are not like this one; this is three times [wider]. Then 100 years ago, we discovered this thing called the automobile. With gasoline, you can travel at a much faster speed than a horse or a human walking. So the scale of cities changed, to car scale. There are pros and cons. Car scale allowed America to go west, connecting towns that were really difficult to connect. The con is that human-scale cities were forced to become car scale.

IVAN:
So on one end you have Florence, on the other end you have Dallas. I’m not saying Dallas is bad. I’ve never actually been to Dallas, so maybe it’s amazing. But they’re different.

ARIA:
I have. They’re very different.

IVAN:
Very different.

REID:
I think even people from both cities would say that.

IVAN:
Yeah. Language models give us the power of the steel beam for transferring information, in speed and scale. We’re doing the same thing we did with cars, but in the information space, the knowledge-work space. That’s the choice we need to make. I think it’s really difficult to know where the leverage points of the choices are, because it’s such a complex system. But I think we need to be aware of what’s happening, because oftentimes, by the time we realize what’s happening, your city has already been run through by highways. Marshall McLuhan is often cited as saying, “we drive into the future through the rearview mirror.” Our understanding of the present usually comes through what we knew in the past. So by the time you notice, it’s gone. It’s become part of the culture. You cannot dial [it] back.

IVAN:
I think this is happening right in front of us. In some sense, the value system of the language model and the economy might not match the value system of humans. It’s like the car has a will of its own, in some sense. The car, as a means, as a tool, wants to multiply too. Right. The economy wants to grow, and it happens that the CFO is the person sending the legal papers, while humans are the people signing them. We should be conscious of which branches of the world we [enter]. Unfortunately, I don’t have the right answer. If I did, I’d be like —

ARIA:
But we should go in with our eyes open and ask the questions. Otherwise we end up in a place that we realize doesn’t match our values. If we choose that place, that’s fine, but we don’t want to end up there without at least saying: let’s try to shape this so we end up in a place that matches our human values.

IVAN:
I think more people should talk about it. It’s surprisingly absent from the dialogue of the people building language models, the people using them, the companies building on top of them. We’re so in the weeds, in the arena, that we’re not asking: what is the arena for? Where is the arena itself going?

REID:
Well, I completely agree with some of these threads. One: ask the questions, be intentional. Two: understand that as you drive forward in creating the new technology, what you can see more clearly is the rearview mirror, not the windshield. And, as Kevin Kelly put it, there’s what technology wants: the creation of different kinds of technologies creates different kinds of gravitational fields and artifacts.

IVAN:
Totally.

REID:
So, for example, cities are natural network densifiers, whether it’s in the economy, in knowledge, in getting people to work together, and all the rest. That’s part of the reason so much economic and cultural prosperity has been driven by cities. And I think your parallel holds: AI has that similar densification and network enablement of more people. It’s a parallel worth thinking through all the way back to what the organization of work is. Right? What counts as a market? How do people work together? And I think part of the thing about this is we won’t be able to —

REID:
We’ll only be able to partially see. It’s very much like driving forward in a fog, on somewhat uneven ground, but with jet packs. (laughs)

IVAN:
Yeah.

REID:
So it’s like, okay, and that —

IVAN:
I like that analogy.

REID:
Yeah.

IVAN:
Wow. Okay.

REID:
It freaks —

ARIA:
It’s a little scary.

REID:
Well, it freaks people out. But that’s where we are. That’s where we are. So, what do you think are the most — like, if you said, hey, we as humanists, not particularly wedded to the values of Florence or Dallas or anywhere else: what are some of the questions you think everyone should be asking themselves? What are some of the questions you’re asking yourself? I partially ask this because I think it’s what we’re trying to do on Possible: ask the right questions so that technology’s possibilities make a more human future.

IVAN:
Are you operating at human scale? Is the human at the center of this, or is the market at the center of this? Okay, this is a technology/business/culture podcast, and a lot of listeners are probably tech and business people. I would say in probably the majority of decisions, it’s not the human at the center. It’s the market. We are participating in the market. We help the market move. But the beneficiary is the market, is the corporation. Right. Oftentimes there’s a huge overlap with the values of the individuals, or of a group of humans. But not always. And as the scale becomes greater, as what the market can do with language models grows, just like with cars: who is at the center?

IVAN:
And I don’t want to sound like a Luddite trying to smash the textile machines or something.

REID:
Yes, well. And you’re building a technology company.

IVAN:
I am building a technology company. I think the point is to be conscious of it, if everybody can be 10% more conscious of it.

ARIA:
One of the themes throughout, which you just hit on, was human scale, and we also talked a lot about single-player versus multiplayer. Single-player is sometimes not that fun. We all have more fun when we’re playing video games next to a friend. We all have more fun when we go to an office and genuinely like the people we’re working with. And so many AI tools are chatbots where you’re just talking with the AI, but Notion is really building something for a team, for a company. Can you talk more about the difference between fundamentally single-player and multiplayer AI for everyone?

IVAN:
Yeah, I think we’re still in the PC-files era of AI tools: the files live on your local PC, by yourself. It hasn’t moved on to the cloud yet. We’re probably going to speed-run toward the cloud, but we’re not there yet. For example, you can use coding agents to work with your Git repo locally. Git is the framework that allows engineers to collaborate, but fundamentally it’s a single human working with one or a few agents. The question becomes: how do you allow a group of humans to work with a group of agents? Not just a group of agents, a factory of agents. That’s in fact what we’re building in our upcoming product: how do you build a factory of agents and make it so easy for humans to tinker and create?

IVAN:
And how do humans collaborate with a factory of agents? That’s our product. I think it’s important for multiple reasons. One, it solves a real productivity bottleneck: how to work with these things. Second, it makes it a lot more fun. And this will be especially useful for businesses and enterprises, which are usually larger groups of people. It gives them a solution where they don’t have to go to their coding agent and work with local files. You have an out-of-the-box solution for a team of humans and a team of agents.

ARIA:
And I think so many times, when we think of work software, we just think productivity. But you’ve said the word fun so often. You’ve said the word beauty. People need to want to use these tools. And if they are fun, if they are beautiful, if they are aesthetic, that will also lead to more adoption and to us having better teams, which I love.

IVAN:
It’s more adoption for our product, or for other people’s products. But also, it’s just more fun for the people using it. Right. So why not? If you create something, you want the user of the thing you created to enjoy it.

REID:
So another theme, and there are at least eight or nine themes here that we could go on about for hours, all super interesting. But one of the other ones, a personal one, is the focus on agency. And there are multiple reasons. One, I think it’s one of the things people most worry about in technological change: how their agency shifts. That’s part of the reason my last book was Superagency. Part of it is obviously agentic technology. Right. And there’s this ambiguity in agency, because on one hand there’s my agency: does it take agency from me? But you also have a travel agent; you have agents that act on your behalf. So it has this kind of dual lens.

REID:
What do you think, in the AI age, in addition to asking that 10% question about human scale, are the ways we should try to make human agency along with AI the right result?

IVAN:
Agency, for me, feels like a muscle. It can grow for a person, and it can also atrophy. I got married last year; I’ve been together with my wife for many years now. I’ve noticed that something I used to do independently before the marriage, I don’t do anymore, because she does it more frequently. My agency, my habit of doing that activity: I can rely on her for it. Right? And I noticed: oh, I used to be able to do this. I can still do it, but naturally my agency got weaker. I think if we’re not careful, a lot of this will happen to the things we do with our minds. Right? Whether that’s good or bad, I don’t want to assign values. But it literally happens. Before the printing press, humans were amazing at remembering things, right?

IVAN:
There were a lot of techniques for memorization. The scribes, the whole scholar-scribe class, could remember tons of stuff. The printing press happened, and it killed memorization. It killed the traveling musicians who carried news and poetry. All those poems and rhymes were memorization devices, containers for information traveling from one place to another. Those are lost. And the printing press also brought the Enlightenment, a lot of good things. Then we had Google. I remember in my first year of college, my English teacher was the only teacher who asked us to use the physical library. He just forced us; everybody else used Google Scholar. Why would you use a physical library? But I still remember the color of the books, where I found those things, the citations, the physical, tactile nature of it. We don’t do that anymore.

IVAN:
Maybe right now, the kids growing up with the ChatGPTs of the world don’t use Google anymore. They don’t need to go read the links and digest them; they just get the answer. So there’s a sense of your agency, your habit of doing things, atrophying. We gain a lot: efficiency, productivity. But there are things being lost. I think it’s really hard to predict which part is important and which part is just okay. Going back to the theme of our conversation today: let’s be conscious of what’s gained, let’s be conscious of what’s lost. Let’s constantly flip the thing over and see if the humans are at the center, if that’s what you care about. Just be more conscious of the situation.

ARIA:
Well, I think one of the ways we can do that is by thinking about the past, learning from history, et cetera. So I’m going to give each of you a sheet of paper.

IVAN:
What is this? Physical paper?

ARIA:
Physical paper. (laughs)

REID:
(laughs) The printing press hasn’t completely gone out the window.

IVAN:
Yeah.

ARIA:
So these are iconic thinkers from the past 60 to 80 years, and there are a few quotes from each of them. I’m going to ask each of you in turn to pick one of those quotes and agree, vehemently disagree, or give us an observation. You can choose whichever quote you want and then tell us a little about it. And, Reid, we’ll start with you so you can model it. We have four quotes from Alan Kay, so please choose one.

REID:
So the one I would choose is the first, which is “The best way to predict the future is to invent it.” The creation of that future is actually driving through a fog with jet packs. And so it’s how you steer, much more than anything else. It’s how you create and invent, more than just how you participate.

ARIA:
And it just gets us away from the victim mentality because it just says you can create. You have agency. You are empowered. So let’s next do Douglas Engelbart. Ivan.

IVAN:
I like the second one: “The better we get at getting better, the faster we will get better.” It’s the concept of… he talks a lot about bootstrapping. Can you use the system to create itself? Then the thing improves [itself], and that’s a faster compounding loop than you working on it alone, you humanly creating that system. We see this with language models right now. Truly, it’s happening, the compounding loop. If you’re building a product, the product builds itself. It’s happening right in front of us.

ARIA:
Absolutely. Reid, pick a quote from Richard Feynman.

REID:
I will pick the third one, which is “I’d rather have questions that can’t be answered than answers that can’t be questioned”.

IVAN:
Yeah, it’s a good one.

REID:
Yes. And I think it’s in part because there are actually two parts to it. The questions that can’t be answered: I actually think that’s part of the mystery of life, in terms of what we’re doing. Some of the really important questions are ultimately unanswerable, but still very important to participate in; in a sense, the meaning of life is participating in the journey toward that question. And then, of course, answers that can’t be questioned tend to come with a rigidity of thinking that is maladaptive to creating the future we want to create, that we should create.

IVAN:
It’s actually related to the first bucket. My theory on the first bucket is that a change in perspective is worth 80 IQ points.

REID:
Yeah.

IVAN:
To ask a question is to find a perspective. And that usually changes the whole thing. It’s not by knowing the answer; the scientific revolution happened by asking the right questions, not by having the right answers. And those come from better perspectives.

ARIA:
I think that also directly relates to language models today, because people ask, well, what are we if we’re just getting answers? And again: are you asking the right questions? You have to have the taste to be able to do that. And Ivan, you mentioned Marshall McLuhan earlier. Do you have a quote there?

IVAN:
In part of a mission and strategy doc at Notion, we always open with “We shape our tools, and thereafter our tools shape us.” It might not be directly attributable to Marshall McLuhan, but most people think it’s McLuhan. Just be aware of what’s happening. Be aware of the perspective you’re in, understanding that it’s a system: culture is the thing that, once you shape it, shapes you back.

ARIA:
Absolutely. And we talked a lot about cities. Reid, do you have a Jane Jacobs quote?

REID:
Well, unsurprisingly of these three, because this is very much in the theme of our discussion, I’d pick “There is no logic that can be superimposed on the city; people make it, and it is to them, not buildings, that we must fit our plans.” And that’s a different way of saying human scale.

ARIA:
Well, I was saying to everyone this morning that I was reading about Buckminster Fuller on the subway this morning, and I missed my stop because I was reading so many interesting things. So, Ivan, do you have a quote for him to bring us home?

IVAN:
I actually like the first one. I didn’t know this quote, but I like the idea of it: “You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” I think Einstein said something similar: to solve a problem, you have to get out of the box and figure out a new thing, which is similar to the perspective idea. That’s the promising part about the new technology we have, because solving the problems the old technology brought us, and the problems innate to us (cancer, global warming, whatever) requires new tools and new perspectives. I think language models give us this opportunity. As long as we’re careful with this new fire, maybe we’ll find new ways to solve our old problems.

REID:
So we have four questions, which we term rapid-fire, though you can answer at any length you want, that we ask all our guests. It’s a bit of an amuse-bouche, a fun angle to close on. So I’ll start: is there a movie, song, or book that fills you with optimism for the future?

IVAN:
I recently discovered this production company called Merchant Ivory [Productions]. It’s a trio. They made mostly period adaptations of novels, usually not on a high budget, but they’re particular about the details. Beautifully shot adaptations, like A Room with a View… Death in Venice. I wouldn’t use the word optimism, but it’s very human. It’s about things that happen between a group of humans on a beautiful set. Historical pieces. My wife and I have been watching them one by one. There are like two dozen of those movies by them, and each one encapsulates something very human. I highly recommend everybody discover them.

ARIA:
What is a question that you wish people would ask you more often?

IVAN:
Where did you get your jacket?

ARIA:
Love it. Great answer.

REID:
Where do you see progress or momentum outside of your industry, outside of the tech industry that inspires you?

IVAN:
I would say history overall. If you zoom out from the present, it just makes you marvel, in the science domain, in the business domain, in history itself. It makes you marvel at human ingenuity: this meatball machine coming out of Africa figured out how to cross the ocean, figured out what’s up there and what’s down there, just by thinking hard, with agency and cleverness, never giving up on things. This is the thing I think is most worthwhile to preserve.

ARIA:
Absolutely. Can you leave us with a final thought on what you think is possible to achieve in the next 15 years if everything breaks humanity’s way? And what’s the first step to get there?

IVAN:
I think we have the technology to, using Engelbart’s term, bootstrap the intellectual industrial revolution. It’s happening already, and it can solve all of yesterday’s problems if we fly with the jetpack somewhat carefully, I think, and I hope we’ll do that. And I think and hope, really hope, that we preserve what makes us human in the meanwhile.

ARIA:
Well said.

REID:
Well said. That’s awesome. And Ivan, many things to talk about, but this has been great.

IVAN:
Thank you. Yeah.

REID:
Possible is produced by Palette Media. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Yimu Xiu, Trent Barboza, and Tafadzwa Nemarundwe.

ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil and Ben Relles. And a big thanks to Amelia Salyers, Camille Ricketts, Josef Duncan, Amy Wu, Grace Donovan, Myles McDonnell, Emily Fernandez, Michael McGinley, and the rest of the team at Notion.