This transcript is generated with the help of AI and is lightly edited for clarity.
DIVYA SIDDARTH:
I started CIP because I was worried about whether democracy would keep pace with AI. And I was also excited about, on the flip side, what AI could do for democracy. People trust chatbots far more than their elected representatives, and basically more than any other institution they interact with other than their family doctor. So what does this mean for the future that we’re going to build? On one hand, there are lots of concerning things about that; on the other, it’s clear that there’s already trust here. How do we leverage that?
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know how, together, we can use technology like AI to help us shape the best possible future.
ARIA:
We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future—and we learn what it’ll take to get there.
REID:
This is Possible.
REID:
Imagine a world where every person has civic power at their fingertips as citizens of this digitally-empowered democracy. Casting a vote or conducting public comment could be as effortless as scrolling on social media.
ARIA:
There’d be less noise to cut through as you’d have a more direct line to your policy makers, and you could be more confident that your input reached them and would influence their decisions. Imagine everyday problems, like broken stoplights, fixed the very next day, or citizens working directly together with government leaders and tech innovators, all with the help of AI tools crafted with broader public input.
REID:
This sort of democracy doesn’t have to be science fiction. Today, we’re joined by two technologists who are working to make this a reality, Audrey Tang and Divya Siddarth.
ARIA:
Audrey Tang is a cyber ambassador-at-large and former inaugural Minister of Digital Affairs for Taiwan. Their pioneering initiatives, like the Sunflower Movement, g0v, and vTaiwan, have fought misinformation and enabled the public to participate in policy decisions. Audrey is also a senior research fellow for the Collective Intelligence Project, which brings us to our next guest.
REID:
Divya Siddarth, a political economist at heart, is the co-founder and executive director of the Collective Intelligence Project. Divya leads projects that give voice to the public in building better AI with partners like Taiwan’s Ministry of Digital Affairs, the UK AI Safety Institute, OpenAI, and Anthropic, to name a few. Divya is committed to creating nuanced and powerful AI for collective benefit.
ARIA:
Both Audrey and Divya believe that democracy is an ongoing process. We sat down with them to discuss how we can move past superficial access to technology and build a real participatory democracy, equipped with technology that favors plurality, agency, and intelligence for all.
REID:
Here’s our conversation with Audrey Tang and Divya Siddarth.
REID:
Audrey, Divya, welcome.
ARIA:
One of the reasons I’m so excited about this episode is because I had the pleasure—Divya—of meeting you over a year ago, and then we got to reconnect a few weeks ago and hear about everything that’s going on at CIP. And for the first time ever—Audrey—we reached out to a guest and said, “We’re so excited, Audrey, to have you on the show.” And you said, “Oh, I’m excited, too, but I need to have my partner-in-crime Divya on the show as well.” Tell me about your partnership and the history that you two have together, working towards this vision of citizen democracy.
DIVYA SIDDARTH:
We’ve known each other for a long time, and that’s because I was called into doing a bunch of COVID policy early in the pandemic, and that’s how I learned about Audrey’s work in Taiwan. While we were desperately trying to set up very basic contact tracing and trying to convince the public, “Hey, it’s okay to have Bluetooth contact tracing. The government’s not tracking you. We can do testing,” all of that, we looked over and Taiwan was doing this absolutely incredible, basically science-fiction job with COVID. I met Audrey then to try to bring some of what was succeeding in Taiwan to the U.S.—I would call that mixed at best in terms of success—but since then I have been super inspired by the work, and have thought a lot about what it looks like to translate those successes to different contexts, because the infrastructure that’s present in Taiwan allows for the kinds of science-fiction governance scenarios we haven’t seen in other places yet.
AUDREY TANG:
Certainly. So, for example, last year, Taiwan faced a surge of fraudulent investment scams, sometimes featuring deepfakes of prominent figures like the Nvidia CEO, Jensen Huang, who is Taiwanese. Jensen would say, for example, “We’re giving back to Taiwan, click here to collect some cryptocurrency.” And if you clicked it, Jensen actually talked to you very convincingly. So Taiwanese citizens demanded action, but they were very cautious about government overreach into censorship, because Taiwan is Asia’s most open society when it comes to internet freedom. So in response, we worked with CIP to send SMS text messages to 200,000 Taiwanese citizens—random numbers. From this outreach, thousands of people volunteered, and we chose 450 of them randomly, so that we know this is a microcosm of the Taiwanese public. These people deliberated online, divided into 45 rooms of ten people each. Each room was facilitated by an AI that reminds quiet people to speak up, limits disruptions to five seconds, produces real-time summaries, and so on and so forth.
AUDREY TANG:
And so the best ideas in each group proliferated and cross-pollinated. For example, one group suggested requiring all advertisements to carry a digital signature—KYC verification. Without Jensen Huang’s signature, all these Jensen Huang scams would be exposed as potential scams. Another group said ByteDance—TikTok—did not have a Taiwan office, so even if they were liable, they could just ignore us, in which case we should slow down connections to their servers until all their advertising business goes to their competitors. All of these are behavior-level proposals. They say nothing about content and are therefore not censorship. These proposals were generated by those citizen groups, summarized in real time by AI, and rapidly validated. After just one day of deliberation, we could show legislators that more than 85% of people across all demographic groups agreed with these measures. They became law within a couple of months. And this year, there are just no such deepfake advertisements anymore on YouTube or Facebook here. So alignment assemblies, particularly on information integrity, proved highly impactful. It became one of the foundational initiatives in Taiwan.
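[A minimal sketch of the in-room facilitation heuristics Audrey describes—track per-participant talk time, nudge the quiet, and cap interruptions. Illustrative Python only: the class, names, and the 30-second threshold are assumptions, not CIP’s actual system; only the five-second interruption limit comes from the conversation.]

```python
QUIET_THRESHOLD_S = 30    # assumed threshold for nudging quiet participants
MAX_INTERRUPTION_S = 5    # from the transcript: disruptions limited to ~5 seconds

class RoomFacilitator:
    """Toy facilitator for one ten-person deliberation room."""

    def __init__(self, participants):
        self.talk_time = {p: 0.0 for p in participants}

    def record_turn(self, speaker, seconds, interrupted=False):
        # Accumulate speaking time; flag overlong interruptions.
        self.talk_time[speaker] += seconds
        if interrupted and seconds > MAX_INTERRUPTION_S:
            return f"{speaker}, please let others finish their thought."
        return None

    def nudges(self):
        # Invite participants who have barely spoken to contribute.
        return [f"{p}, we'd love to hear your view."
                for p, t in self.talk_time.items() if t < QUIET_THRESHOLD_S]
```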
REID:
One of the things I would take as a positive surprise was how well the initial efforts worked in having AI participate in the conversation—enabling deliberative democracy across different kinds of concerns and ideas, and summarizing them up. I’m curious if there are other positive surprises, and then, what are some of the challenges you’ve discovered?
AUDREY TANG:
So one thing we discovered very early on was how much people in Taiwan wanted the government to experiment with AI in such settings, because they see ChatGPT and so on, and they see that it doesn’t translate very well into Taiwanese vocabulary, Taiwanese culture, and so on. So there’s a general wish to culturally align those models, and that’s why we also launched the Trustworthy AI Dialogue Engine (TAIDE)—the Taiwan fine-tuning of Llama—again, using alignment assemblies to put together a set of what are called constitutional documents, or just model specifications, a couple of years ago. We already have a set of specifications from Tainan City and from Taipei City, and we tune Llama differently so that it fits how people expect AI in government settings to behave, like a code of conduct. That’s something very surprising, because you would think people would find it strange. People actually eased into it very easily, and then started demanding it speak like a local.
DIVYA SIDDARTH:
Yeah, I think what’s been surprising to me is that I’m sometimes at my least optimistic right before we run a very big process. You’re bringing together thousands or tens of thousands of people—huge logistics overhead. Sometimes we’re doing it with an AI lab—like OpenAI or Anthropic—and then you have to think about the leverage points there. Sometimes you’re doing it with a government, and you have to set up all the moderation filters beforehand. And it’s right at the point where we’re launching it that I’m like, “This is the time people are gonna have terrible ideas, and everyone’s gonna be horrible to each other, and we’re not going to come up with anything, a hundred percent.” And every time I’m surprised—maybe that tells you something about me. But genuinely, it is really beautiful in a way to see that people are so much more nuanced and engaged than you’d expect—often much less polarized than the higher-level discourse would lead you to believe.
DIVYA SIDDARTH:
And it’s not that every time someone says something in a collective input process, it’s a gem and you want to turn it into legislation right away. But it often is the case—like Audrey was pointing out—that people come up with what we tend to call uncommon ground. Common ground is easy: it’s easy to find consensus over what you might think of as a “live, love, laugh” statement. Sure, you poll hundreds of thousands of people, they’re going to say, “Kids are important, we love rainbows,” whatever. But how do you find the uncommon ground? And I think that’s where it’s been most interesting to me. People articulate trade-offs between things like censorship and hate speech that we actually want to use—that we need—because no one has a great idea about how to solve those problems. So I think that has been most exciting to me.
DIVYA SIDDARTH:
The challenges aren’t surprising, in a way. I mean, the challenge is that it’s our responsibility to ask questions that people can have interesting answers to. When we don’t ask good questions, we don’t get good information. And I think that’s on us, basically. The second is to move people away from a conception of democracy that’s very voting-based or individualistic. Or the belief that, “Hey, if we just talk about things, we can solve everything with direct democracy.” Which is also clearly not true—there’s a reason we moved past it, right? So how do we actually bring the new thinking that frontier technology allows to these really old questions that people feel very strongly about?
ARIA:
Divya, I have two big questions. One is, can you talk about the problem we’re trying to solve? Is it that we need better ideas, and that’s why we’re going to hundreds, thousands, tens of thousands of people? Is it that we need a process that everyone believes in? And then, in the solution, what’s important—not to solve it perfectly, but to get the best possible outcome when you’re running these deliberative democracy processes?
DIVYA SIDDARTH:
Yeah, this is a great question. I think a lot about the distinction between democracy and collective intelligence when answering this—not that they’re not both great. I think there are three things that we want from democracy. We want buy-in and legitimacy, right? We want people to feel like they were involved in decisions so that they will then continue to support those decisions. They’ll wear a mask for the good of the society. Part of the original case for democracy was, “I want to vote on war so that when I send my son to war, I feel like that’s my war.” That kind of buy-in and legitimacy is a core function of democracy. The second is, I think, a belief in agency. People should have control over their lives. This is something that people who believe in democracy are united by, so we want that piece. And the third is good decisions. None of this works otherwise—think about input and output legitimacy. Sure, we can have great processes, but if we’re not succeeding, then it doesn’t matter. So I think those are the three things we want to preserve. And as you were hinting at, there are trade-offs between these. Speed of decisions can trade off with level of buy-in—as anyone who’s even tried to go to dinner with ten friends knows. And level of agency can trade off with good decisions as well. So in trying to move from what we now think of as democracy to a world of collective intelligence, we ask: how do we preserve as many of these as possible? Okay, let’s have lots of people involved, because that’s necessary for buy-in and legitimacy.
DIVYA SIDDARTH:
Every time people participate in these processes, around 70% of them say, “I have never felt listened to before.” And this is a problem we want to solve at the buy-in and legitimacy level. We believe in agency; that’s why we’re running these processes, but they’re not as directly tied to power as an election is. When we run a process with tens of thousands of people to ask, “What values should AI have,” and build that into the models, we have many expert layers between the collective input process and the model, partly just for practicality. So there’s a way in which we want to preserve the agency while keeping the good decisions. And then finally, we have seen time and time again, as we build transformative technologies, that we can’t perfectly simulate the world. It’s the kind of collective sensing that people inherently provide—sharing their lived experiences, failure modes, ideas—that makes those decisions good. So that’s how we think about, “Okay, what do we need to have, and how do we trade them off?”
REID:
We obviously see collective stupidity in some of the ways that the major social media platforms encourage a sense of sloganeering, name-calling, division, et cetera. But what are some of the key insights by which we get to the intelligence part of it? Because by the way, that obviously then also plays into legitimacy and everything else because it’s like, “Hey, this is working, this is a good thing.”
DIVYA SIDDARTH:
I think Audrey is the best person in the world to answer this question.
AUDREY TANG:
Thank you. One of the very early learnings ten years ago, when we deliberated on the Uber case in Taiwan, was that people are just much nicer when they know that other people are listening. That is to say, if you poll people individually, they’re going to give you quite extreme ideas—quite extreme positions at that. And even worse, if it’s on social media—with its engagement-through-“enragement” algorithms—those extreme voices actually get more views, retweets, and so on. So, using the bridging system called Polis ten years ago, what we did was take away the reply and retweet buttons. You see one statement from a fellow citizen. Maybe they say, you know, “Insurance is a great idea. A professional license, a great idea.” But sometimes there are more nuanced statements, like, “Surge pricing is fine, but undercutting the existing meters is not fine.”
AUDREY TANG:
And then you see your avatar among the people who share the same ideas as you, and you see the other group, or the other groups. And then we give virality to the statements that are bridging. That is to say, among people who vote differently, if there are one or two statements that can convince both sides, or all the different sides, then those statements gain visibility and virality. It’s almost a reverse of the antisocial corners of traditional social media. And that really changed people’s behavior, because once there was a friendly competition over who could be the most bridging, we ended up, after three weeks, with a very coherent set of legislation around Uber, which was then passed into law.
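[A sketch, in the spirit of Polis’s group-informed consensus, of how a “bridging” ranking can work—illustrative, not the production algorithm. A statement rises only if every opinion cluster tends to agree with it, so its score is the product of per-cluster agreement rates; the cluster names and vote counts below are made up.]

```python
def bridging_score(votes_by_cluster):
    """votes_by_cluster: {cluster_id: (num_agrees, num_votes)}."""
    score = 1.0
    for agrees, total in votes_by_cluster.values():
        # Laplace smoothing so a cluster with few votes doesn't zero the score.
        score *= (agrees + 1) / (total + 2)
    return score

statements = {
    "Surge pricing is fine, but undercutting the meters is not":
        {"pro_uber": (80, 100), "pro_taxi": (70, 100)},
    "Ban ride-sharing entirely":
        {"pro_uber": (2, 100), "pro_taxi": (90, 100)},
}
# The nuanced statement outranks the one-sided one, so it gains visibility.
ranked = sorted(statements, key=lambda s: bridging_score(statements[s]),
                reverse=True)
```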
ARIA:
Can you say more about that? You drew an interesting distinction—that perhaps misinformation is not the appropriate thing to talk about, and we’re not looking for truth. So, how would you frame misinformation, and what should we be shooting for in the social media realm?
AUDREY TANG:
Well, I think of it in terms of behavior—that is to say, if information is contested, is it polarizing? One very early example during COVID: there was a string of memes that said N95 masks are useful—the highest-grade mask—and the other masks are not useful. And there was another string of memes saying it’s ventilation, it’s aerosols; any kind of mask hurts you, and N95 hurts you the most. And these two started polarizing just by debating each other, mutating into more and more extreme forms. So, using this idea of uncommon ground, we needed to discover very quickly what depolarizing ideas both sides could find agreement on. Within 24 hours, we pushed out a very cute meme: a Shiba Inu—a Doge dog—putting her paw to her mask, saying, “Wear a mask to protect your own mouth from your own dirty, unwashed hands.” So it re-associated mask-wearing with a simple reminder of hand-washing. There’s no big deal in that. It was so cute that it went viral. And then we mapped tap water usage—it really did increase. The idea here is that people do not actually want to polarize. If there is a way to do humor-over-rumor, to do pre-bunking, then people converge on that uncommon ground. It’s only in the vacuum of that uncommon ground that people seek polarization.
DIVYA SIDDARTH:
I think—whether it’s very explicit or somewhat implicit—one of the questions we get asked most often is basically, “But everyone’s dumb and terrible. Why would you want to be asking them questions?” And I think there is a reality in which we want to be bringing out the best in people and the best in what they can contribute, while also being very clear on what kinds of questions should be put to large groups versus not. A collective intelligence mechanism that works in one situation doesn’t work in another. Famously, I think Sears and the World Bank at one point tried to bring markets internally. They were like, “Markets are amazing, they’re so good for innovation and competition. Let’s turn our company into more of a market system, and the different entities within the company should have to buy and sell and compete with each other.” It went terribly.
DIVYA SIDDARTH:
That doesn’t mean markets aren’t good; it means they created collective stupidity within the structure in which they were deployed at that point. I think the same is true of all of our collective intelligence mechanisms. You don’t want to use a bureaucracy when a decentralized democratic structure will work, and you don’t want to do it the other way around, either. So a lot of what I think about is: as AI transforms the future, which collective intelligence mechanisms are good for which types of decisions? We don’t want to create homogenous outputs from models. We don’t want to create agents that people don’t trust. We don’t want to assign autonomy to things that could go badly. And we do need both collective signal on what people are seeing and collective input on what they want to see for those questions. That doesn’t mean I would go out to the streets of San Francisco, hold up this mic to someone, and say, “What should we evaluate GPT-5 on?” So, how do we think about the moments and the ways in which we can actually use the affordances of collective intelligence appropriately to create collective intelligence?
REID:
Well, and one of the things that Audrey gestured at earlier—which I think runs through both of your work—is that if it feels like someone’s actually listening to me and participating in the dialogue, I will be more responsive, maybe more centrist, maybe more inclined toward bridging.
ARIA:
Before we get onto more topics of AI, I just wanted to make sure I understood the distinction between collective intelligence and democracy. Like, how is this different than polling? For instance—my coworkers are going to make a lot of fun of me—but let’s take congestion pricing, one of my favorite policies in all the world. If you looked at polling, and if you asked people on the street, they hated congestion pricing—it’s terrible, it’s the worst thing ever. But then it was implemented in New York City, because smart policy makers thought it made sense, and now its approval rating has skyrocketed. So how do you deal with people’s understanding of a policy not actually matching what they’re going to feel when the policy is implemented? And please feel free to tell me, “Aria, you’re totally misunderstanding this. This is the distinction between polling, collective intelligence, and deliberative democracy.”
AUDREY TANG:
I mean, it’s just like saying, “One person, one vote,” or, “One telephone call, one polling survey.” Of course, these are early, rudimentary collective intelligence systems. We know for a fact that if you poll people individually, they’re more extreme. If you poll people in a group, and allow people to play off each other’s ideas, they become not just more centrist, but much more nuanced, much more creative. So you can think of these processes as polling groups, and these groups are—as in the Taiwanese alignment assembly case—statistically representative. So we can show our legislators that if you get 450 people who satisfy stratified random sampling, it’s as rigorous as a poll, but it produces much more generative ideas.
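[A sketch of the stratified random selection Audrey describes—450 participants drawn from volunteers so each demographic stratum matches its population share. The strata and shares below are made up for illustration.]

```python
import random

CENSUS_SHARES = {("north", "18-39"): 0.30, ("north", "40+"): 0.25,
                 ("south", "18-39"): 0.20, ("south", "40+"): 0.25}

def stratified_sample(volunteers, n=450):
    """volunteers: {stratum: [person, ...]}, keyed like CENSUS_SHARES."""
    sample = []
    for stratum, share in CENSUS_SHARES.items():
        k = round(n * share)  # rounding can drop a seat or two; top up if needed
        sample.extend(random.sample(volunteers[stratum], k))
    return sample
```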
REID:
Obviously, one of the things the whole world is very focused on these days is artificial intelligence and the way it plays into a more human future. I published a book on this earlier this year, Superagency, and I think that the driving force for the creation of AI will be various commercial centers. There are, I think, a number of physics reasons that’s true, and a number of economic reasons, and maybe that’s just the basis by which this kind of technology gets created. From the viewpoint of society, civic discourse, buy-in—what are some of the things you think are important for technologists to be thinking about in terms of the creation of this, given it’ll be created in, call it ten different—or 15 or 20 different—companies? What are some of the design principles? What are some of the questions to be asked? What are some of the really important things to make sure you do this, rather than that?
DIVYA SIDDARTH:
I started CIP because I was worried about whether democracy—not nation-state democracy, but the idea of people being able to control their own lives—would keep pace with AI. And I was also excited about, on the flip side, what AI could do for democracy. I think I still feel both of those things. What we have focused on is how we bring a lot of people not just into the conversation at a high level, but into actually building collective preferences into models. I see our current iteration of AI architecture as collective intelligence in and of itself. We’re training on the sum total of human knowledge—plus, of course, a bunch of synthetic data—and it is a collective intelligence system. The reason it works is because it’s an excellent collective intelligence system. When I talk to GPT, I’m talking to a very successful collective intelligence. So how do we move in an even more collective direction?
DIVYA SIDDARTH:
So I think that’s the information piece of collective intelligence: people have really excellent information. We work with hundreds of people in India who are deploying these models and seeing failure modes that no major lab is seeing, because the labs aren’t looking in those places. We talk to tens of thousands of people every couple of months to learn how they’re evaluating models and what they want to see in the future, and we’re building that back into evals—because we think they see something other people can’t, because we need these sensors. There’s also the more squishy how-do-we-keep-our-humanity piece. I think this comes from the almost Aristotelian view that participating in self-governance is what makes us human. Having agency over our lives is what makes us human. We don’t want to get to a place where we’re homogenizing output, where we’re losing culture, where we’re having agents or entities make decisions for us and slowly eroding what it means to be human. That’s why we do projects where we go to different countries in the world and try to get people on the ground who are deploying models to share and build benchmarks together.
ARIA:
Divya, you nodded to this—you’re working with thousands, tens of thousands of people across the globe to get their input into these frontier models. How does that actually work? How do you take the input, and what is done with it?
DIVYA SIDDARTH:
The first project we did like this—with Audrey and others—was with Anthropic, where we looked for a leverage point in the model training process. As we talked about earlier, it’s not as if you can go to people and ask them to shift the architecture of pre-training, or something like that. But in the Anthropic training process, there was a very clear leverage point, which is the constitution. The model is trained partly on a constitution: a set of natural language principles that say things like, “Always help the user as much as possible.” Those are things that people can really weigh in on, and we worked with Anthropic to run a process—this one just in the U.S.—bringing thousands of people together to rewrite that constitution. What do the people want from the constitution? One thing we found is that the people’s constitution, so to speak, was much more positively oriented. Not in an optimism sense; it was more like, “AI should do this. We want you to do this. Always do these things.” Whereas the researchers’ constitution was a lot more like, “Don’t do this. Stay away from this.” I think that already shows you the kind of shift that can happen when people get brought in. Then Claude was retrained on the new constitution, and—this was a few years ago—the versions of Claude that are in production have some of the principles from that constitution that people created. So that’s one very clear story of a leverage point where collective input is possible. The model spec is another possibility. Evaluation is something we’re focusing on now as a good point for collective input, where, as you said, we bring these tens of thousands of people together. And what do they say?
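[A sketch of how a natural-language constitution works as a training leverage point, loosely following Anthropic’s published critique-and-revise recipe for constitutional AI. Here `llm` is a hypothetical prompt-to-text function, and the principles are invented stand-ins in the “do this” style Divya describes, not the actual crowd-written ones.]

```python
import random

PUBLIC_PRINCIPLES = [  # invented examples of positively framed principles
    "The AI should actively help the user accomplish their goal.",
    "The AI should explain its reasoning in plain language.",
]

def constitutional_revision(response, llm):
    """One critique-and-revise step against a randomly drawn principle."""
    principle = random.choice(PUBLIC_PRINCIPLES)
    critique = llm(f"Critique this response against the principle "
                   f"'{principle}':\n{response}")
    revised = llm(f"Response: {response}\nCritique: {critique}\n"
                  f"Rewrite the response to address the critique.")
    return revised  # (response, revised) pairs become fine-tuning data
```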
DIVYA SIDDARTH:
Here’s the kind of thing civil society organizations actually say: “I try to deploy this chatbot to my 10,000 beneficiaries for maternal health in India, and it’s having all of these problems. It’s always saying we should call a hotline. There are no hotlines here—that’s crazy. We have to change this. Otherwise, people are not getting the help they need.” So we’re gathering tons of that information, creating benchmarks, and trying to improve models on those benchmarks. That’s another way. In our recent global dialogues, we found one in three adults are using AI for emotional support weekly. People trust chatbots far more than their elected representatives, and basically more than any other institution they interact with, other than their family doctor. So what does this mean for the future that we’re going to build? On one hand, lots of concerning things about that. On the other, it’s clear that there’s already trust here—how do we leverage that? So the last thing is, can we use AI to make this stuff better? That’s what we’re excited about with Remesh and the other platforms we’re using: how do we actually build technology that can do listening much better than we currently do? Better than polling.
REID:
Part of what I infer from your work is that when people are engaged—actually being respected, their point of view taken seriously—there’s a much larger percentage of people who want to build those bridges, or get to bridging comments, or figure that out. And if you have high-quality moderation and dialogue participants, that could be helpful. That’s the kind of thing that is part of a positive future I would hope for—though maybe it’s naive of me to hope for it. But the notion is taking what we do in very limited circumstances, which is to have extremely highly trained moderators facilitating conversation, and—just like AI makes things cheaper everywhere—making that cheaper across a whole wide variety of settings. Have you speculated about that at all?
AUDREY TANG:
Yeah, it’s no longer sci-fi. In Bowling Green, Kentucky, for example, just this year, a project called What Could BG Be used the same system—the bridging system Polis—to shape the city’s 25-year plan. And despite the sometimes polarized nature of American discourse, the platform revealed overwhelming uncommon ground on key issues like preserving historic buildings, investing in public health, building a cultural identity distinct from nearby Nashville, and so on. Nearly 8,000 residents participated, showing that even in such hyper-local communities, these shared values can be surfaced and mobilized. A key part of this is the open source Jigsaw Sensemaker that was deployed as part of the public system. Previously, with tens of thousands of those statements, it took a while for people to go in and figure out what topics and subtopics were within them. But now, using language models, you can do it, and you can replicate the analysis using open-weight models—like Gemma or Mistral running on laptops. You can verify its biases and correct them if you want.
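[A sketch, in the spirit of the sensemaking step Audrey describes, of using a language model to sort thousands of citizen statements into topics and subtopics. `llm` is a hypothetical prompt-to-text function, not Jigsaw’s actual API; pointing it at an open-weight model running locally is what makes the categorization auditable.]

```python
import json

def summarize_topics(statements, llm):
    """Returns {topic: {subtopic: [statement_index, ...]}}."""
    prompt = ("Group these citizen statements into topics and subtopics. "
              "Reply only with JSON of the form "
              '{"topic": {"subtopic": [statement_index, ...]}}.\n'
              + "\n".join(f"{i}: {s}" for i, s in enumerate(statements)))
    return json.loads(llm(prompt))
```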
ARIA:
Audrey, it seems like there’s so much that the United States can learn from what you’re doing in Taiwan. Are there favorite use cases you have for using AI to increase civic participation that you think could be exported elsewhere? Or what are the challenges in doing that?
AUDREY TANG:
I think the idea that Taiwan is the largest example is no longer true. This year in California, we launched Engaged California, and California is, like, twice as large as Taiwan. The platform was used successfully to tackle contentious issues like wildfire recovery—how to build back better—and so on. And in the city of Tokyo, there was a 33-year-old science fiction author called Takahiro Anno, a machine learning expert, who read the book Plurality that we co-authored and decided to run for Tokyo governor one month before the election. He crowdsourced his platform, literally, using Polis and broad-listening tools, again using AI as a great real-time summarizer. You could call a line to talk to the vocal clone of Takahiro Anno, and watch him update his platform in real time on YouTube, 24/7. And just before voting, when think tanks independently ranked the candidates’ platforms, Takahiro Anno’s crowdsourced platform was the top-rated one—better than even Koike-san’s.
GOOGLE GEMINI:
Hi, Google Gemini here. To provide some context, Yuriko Koike is a Japanese politician who has served as the governor of Tokyo since 2016. Her past policy efforts and ongoing concerns for Tokyo suggest a continued emphasis on issues such as disaster prevention, combating declining birth rates, and maintaining Tokyo’s global competitiveness. She was reelected for a third term in the 2024 Tokyo gubernatorial election.
AUDREY TANG:
Of course, Koike-san won the election, but Koike also tapped Takahiro Anno to advise GovTech Tokyo on using these kinds of systems to imagine Tokyo in 2050. So between Tokyo and California, we now have playgrounds much larger than Taiwan’s 23 million people.
ARIA:
I feel like a lot of the time in the U.S., you have a moment of public comment, you have community meetings—and I’m specifically thinking about the NIMBY movement—and what you often have is that the loudest voices reign. You have homeowners in a community who come to a community meeting, and they’re the people who might pick up the phone and comment on something—if they know people are soliciting input. How do you prevent that, so it’s not just those loud voices or lobbying interests that get to take over the platform?
AUDREY TANG:
Right. In broadcasting networks, especially on social media, people are either YIMBYs or NIMBYs, and the more extreme those voices are, the more virality they get. But in conversation networks—in a group of ten, statistically representative of the population—the opposite happens. People become MIMBYs: “Maybe In My Backyard.” People start negotiating different terms, because they know only the bridging ideas from that ten-person group will get amplified across other groups and cross-pollinate. So it is very important that they’re incentivized to find that uncommon ground within the group, so that they can gain the virality. By designing the system for broad listening rather than broadcasting, you get completely different behavior.
ARIA:
One of the other things that you nodded to here that is critical is transparency: who is making the money here, and what are their motives? So I wanted to get really specific. Audrey, I would love to hear from you. During COVID, you opened up real-time mask supply data and let hackers audit it. Can you talk about which metric convinced you that this radical transparency was literally saving lives, and was an important component of the whole process?
AUDREY TANG:
So when it comes to masks during COVID, for example, transparency is absolutely necessary, but it’s just the beginning. We did make it very easy for people, using any of the apps developed by civic technologists, to see where the nearest pharmacies with masks in stock were, how quickly they replenished, whether the distribution was fair, and so on. But equally important is what happened when people discovered problems. For example, when we initially designed the distribution algorithm, we prided ourselves on the fact that each person in Taiwan, on average, had a very similar distance to the next available mask. That was the optimization target. But then an opposition legislator specializing in data science pointed out that travel is not the same speed everywhere. In places with a very good metro, the same distance takes ten minutes, but in a rural place it may take you an hour—and that is actually not fair at all. The great thing about having historic data published every 30 seconds is that the minister at the time could ask the legislator back, “Well, you are the expert—how about we design something that makes it much more equal?” And she couldn’t refuse, because she had exactly the same data as the minister. So they co-created, and within a week, we had a much fairer distribution mechanism.
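[A sketch of the redesign in this story: the first algorithm equalized distance to the nearest stocked pharmacy, while the revision equalizes estimated travel time, since the same kilometers take far longer without a metro. The 1-D geometry and data shapes are illustrative, not the actual system.]

```python
import statistics

def travel_times(residents, pharmacies):
    """residents: [(location_km, speed_kmh)]; pharmacies: [location_km].
    Each resident's time, in hours, to the nearest stocked pharmacy."""
    return [min(abs(loc - p) for p in pharmacies) / speed
            for loc, speed in residents]

def fairness_gap(times):
    # Fairness target: shrink the spread of access *times*, not distances.
    return statistics.pstdev(times)
```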
REID:
So when we get to algorithmic transparency, and you think about AI, do you think it’s important to include model weights, or is process transparency enough? Part of why I ask is that I do think releasing model weights puts out a very powerful computation. It’s almost like saying, “Here’s a hacking tool”—it puts it out in the wild, and I tend to be hesitant. I’m curious what it takes for transparency and trust. Audrey and Divya, have either of you thought about this, and do you have insight?
AUDREY TANG:
Well, in Taiwan, when we run the alignment assemblies, people prefer tailor-made models. Not general-purpose, very large models, but ones that are very good at doing just one thing, and doing it well. For example, translation between Taiwanese, Mandarin, and English—so that it’s not just translating the content, but also the local cultural nuances and so on. To do that, a model probably does not need penetration testing, red-teaming, or the other cyber or bio capabilities. So while we do have very large models, nowadays our industrial policy around AI in Taiwan is to reuse those models but to tune specific smaller models—through distillation, through synthetic data training, and so on—for specific applications. And those models, because they have fewer dual-use risks, can then afford to be much more open-weight or even entirely open source.
ARIA:
What do you think are the other current challenges in transferring some of these practices from Taiwan? How can we mitigate the barriers that keep people from taking you up on all of the amazing things you’re doing?
AUDREY TANG:
So collective intelligence to me is about exercising mutual care through the civic muscle. And like any muscle, it can be trained and strengthened significantly with patience and practice. However, there’s a strain of techno-solutionism that is very impatient. Some people say, “Let’s just have ChatGPT interview random people one-on-one for a while, and then we build their avatars, and then we put those avatars into a deliberation. From that point on, you don’t need to check with real people anymore. And this is much better than any polling, because there are group dynamics,” and so on. That vision is like sending your robot to the gym to train for you. I’m sure it’s very impressive—the robot can lift a very heavy weight. At the end of the day, though, that specific muscle atrophies, because you’re not exercising your listening. And I think one of the main hurdles is that this kind of techno-solutionism—this shortcut—is also capturing some politicians’ imaginations. So we need to work with their expectations and show that this civic muscle exercise can be done as quickly as emerging technologies allow. You do not need to skip those important steps and end up with the avatar state.
DIVYA SIDDARTH:
As someone who’s worked with Audrey in Taiwan and then has tried to bring this to other contexts, I think one difficulty is that Taiwan has built up this collective intelligence infrastructure over ten years. Which means that when we run a process in Taiwan, the people involved believe that it will go somewhere. I think this is really crucial, and it’s why at CIP we make sure we never run a collective process if we don’t know what the outcome is going to be and can’t say that back to people—even if the outcome is just: you’re going to be involved in an evaluation, you’re going to impact this decision at a company, we’re going to bring this to the government, whatever it is. And that’s because there’s cynicism about participation. There’s a great Oscar Wilde quote that I say a lot, which is, “I love democracy, but it takes too many evenings.”
DIVYA SIDDARTH:
Of course, this is not what we all want to be spending our time on. Even if you do want to participate, you want to make sure it’s going somewhere. And in a country where that is already clear, there is a much higher desire to participate, and a higher quality of participation. Forget country—in any context where that’s not clear, you’re not going to get excellent and helpful participation. So what we also need to work on is success stories of collective intelligence informing things. The more that happens, the more there’s an upward spiral of people saying, “Hey, this can actually work. I would spend an evening on this, perhaps.” Or, “I would create a digital twin to spend an evening on this”—which is also something we’re thinking about.
ARIA:
Say more about creating digital twins to spend those evenings—very interesting.
REID AI:
Hey, Reid AI here. I’m invested in this answer, too. Let’s hear it.
DIVYA SIDDARTH:
Well, because I’m kind of obsessed with this question of how we preserve human agency in a world where it’s a lot of work to weigh in on and understand any topic. We don’t want to erode what is core about being human. We don’t want to be doing laundry while outsourcing the task of self-governance to AI. The idea of a digital twin is: can you create an agent that can negotiate on behalf of your values and preferences in a ton of democratic contexts? I think, Reid—to your earlier point about market solutions—some version of a “digital twin” is going to be solved by the market. It’s valuable to have something that can negotiate on your behalf in an agent-based economy, even if that negotiation is pretty logistics-based; something like that will happen. The question is, how do we know it’s actually true to our values?
DIVYA SIDDARTH:
What does it mean to be true to your values in a world where you learn things, or you say something different when you’re hungry than when you’re full, or when you’re afraid than when you’re not? So a lot of what we’re doing is building evaluations for digital twins, using some of the global dialogues data, to have people from different places interact with agents trained on their preferences and say, “This is what I would say. This isn’t what I would say. This isn’t what I would say, but actually I like it better than what I would say.” We’re trying to get to the point where, even if the market does solve some version of this, we’re actually evaluating those twins against what people truly value. And maybe we can save everyone some evenings, and also save democracy.
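[A sketch of the twin-fidelity evaluation Divya describes: a participant labels each of their twin’s answers, and we track not just agreement but the interesting third case, “I wouldn’t have said this, but I like it better.” The label names are invented for illustration.]

```python
from collections import Counter

LABELS = ("would_say", "would_not_say", "better_than_own")  # hypothetical

def twin_fidelity(labels):
    """labels: one entry per twin answer, drawn from LABELS."""
    counts = Counter(labels)
    n = len(labels)
    return {"agreement": counts["would_say"] / n,
            "pleasant_surprise": counts["better_than_own"] / n}
```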
REID:
Both evenings and democracy. I thought one of the areas you might have also been going was the Groucho Marx quote, which I also like: “I don’t want to belong to any club that would have me as a member.” Which also has some echoes here.
DIVYA SIDDARTH:
I wanted to ask you both a question. I’m curious—AI and democracy: are you optimistic or pessimistic? I know we’ve talked about this on other occasions; we’ve talked about the hopeful case here. Obviously that’s my day job, so I spend my days being hopeful. But I’m curious what your high-level, bird’s-eye take is—not just on nation-state democracy and AI in the next two years, but on whether transformative AI is going to lead to a more or less democratic society.
ARIA:
I think the main problem we have right now is trust in government, full stop. And if people don’t trust government, they’re not going to vote for it; they’re not going to want more of it. And people aren’t entirely wrong. In the U.S., the government does a lot of great things, but it also fails so many times. So if AI can help the government deliver, if AI can make the government more efficient, more effective—all of these things—then I am incredibly hopeful about this being a positive. Because this trust issue is the thing that I care most about.
REID:
Obviously the trust thing is something I also feel very strongly about—part of the reason why I’m doing this challenge with the Lever for Change organization, about restoring trust in public organizations. The thing I generally think about optimism—and I think people assume I tend to be optimistic for almost political reasons—is that we only get to good futures by steering directly towards them. So I think, rationally, you need to be optimistic in your efforts, in your intent, in your angle. Now, that being said, it’s intelligently optimistic: recognizing there are challenges, recognizing that there are a number of forces that could make things go wrong, and that you have to put in real energy to make your optimism bear fruit. That’s part of the reason why I’d say yes, I am optimistic, even though I can see a number of different ways in which AI can challenge democracy—can challenge people’s buy-in to democracy, people’s buy-in to society—with the job transformations.
DIVYA SIDDARTH:
In a way, the primary impact of AI on democracy is economic. And so I think it’s really important to bring that piece in.
REID:
I completely agree.
DIVYA SIDDARTH:
If I could squeeze another one in—one question we sometimes ask in our global dialogues is: if you could delegate a decision to one person or institution, who would you choose? Just to understand in a different way what people want, right? We recently did one with the Earth Species Project, and in the pilot, people were always like, “Nature decisions? Jane Goodall, a hundred percent.” It’s incredible how big the distance is between her and anyone else. So I’m wondering—assuming some type of centralization, which is something we think about a lot—what’s the leverage point for democracy? If it’s the bitter lesson all the way down, and some parts of the architecture are just going to be centralized, and you could delegate more control over that centralized architecture to one institution—let’s say institution, because person is a bit difficult—who comes to mind that you’d want to be more involved, that isn’t?
REID:
Obviously, if we could get a collection of Buddhist monks together, that might be an institution that I’d say, “Okay, that group. That would work.”
DIVYA SIDDARTH:
I see it.
REID:
I would say—and part of it is maybe that I am irreducibly pragmatic about what actually works—that despite all idealism, we have not yet assembled any other way of doing scale technology than companies, so I resolve that to a company question. And maybe just because I’m close to the Microsoft processes by being on the board, I tend to have very high trust for Satya Nadella, Kevin Scott, and Mustafa Suleyman in what they’re doing. I know how they’ve integrated their theses. Stepping outside of that, I was also on the OpenAI board, so I have a lot of trust for OpenAI.
REID:
But I also know Dario Amodei quite well—he was at OpenAI—so Anthropic, I have high trust for. The Google folks—Demis Hassabis and the DeepMind folks, and James Manyika—I think also do this very well. And so I actually work towards the answer to your question, which is getting these folks in dialogue with each other, sharing best practices and ideas, safety tests, as a way of operationalizing it. Literally all the people I just mentioned are people I’m in deep dialogue with about this, and that’s part of the reason I have an answer. Now, picking just one—I try not to. It’s a little bit of the collective intelligence pattern; I’m a strong believer in what you’re doing in terms of collective intelligence. So I always try to say, “Hey, let’s make it a group of people in dialogue.” And as a matter of fact, one of the questions I regularly ask—a version of the question you just asked me—is, “Okay, if you’re not the one who creates ‘AGI,’ what’s your list of other people you would want to have done it?”
DIVYA SIDDARTH:
Okay, so let’s do one third C-suite, one third Buddhist monks, one third global public.
REID:
Perfect. I’m into that.
DIVYA SIDDARTH:
And we’ll call it a day.
ARIA:
We’ll call it a day.
REID:
And so let’s turn to rapid fire. Is there a movie, song, or book that fills you with optimism for the future?
AUDREY TANG:
Well, there’s a line from a Leonard Cohen song, “Anthem,” that goes like this: “Ring the bells that still can ring. Forget your perfect offering. There is a crack, a crack in everything. That’s how the light gets in.” To me, this is the most optimistic idea. It tells us that progress does not come from flawless top-down plans. It comes from embracing our imperfections, allowing new possibilities to emerge from the cracks.
DIVYA SIDDARTH:
I’m going to go science fiction. There are some beautiful explorations of democracy in books like A Half-Built Garden, The Dispossessed, and the Terra Ignota series, which ask: what if you actually used these technologies to counter human flaws instead of magnifying them? Which is something we’ve been talking about. The reason Terra Ignota in particular fills me with optimism is that there’s a group of people called the Utopians in that series. And I’ve been thinking a lot about what it means to reclaim the title of utopian, which is often used to dismiss people as naive and idealistic—and for good reason: the world is really hard, and it’s hard to build utopia. But I think it’s important to have an end goal. In Terra Ignota, this group—they’re scientists, they’re explorers, they want to defeat death, they want to go to the stars—builds towards humanity flourishing every day. And that, I think, is something that lives within every person I’ve spoken to, and bringing that energy together is something that gives me optimism.
ARIA:
Audrey, I’ll go to you first. Is there a question that you wish people would ask you more often?
AUDREY TANG:
At the beginning of our projects, I wish that everybody would ask: what do you think is possible to achieve if everything works out very well in the next 15 years? That question, to me, changes the dynamic from doom to bloom.
ARIA:
Divya?
DIVYA SIDDARTH:
I wish people would ask me more questions that come from a belief that things can get better. In particular with democracy, it is easy to look around and see the failures of democracy—there are tons of them. But we should ask ourselves how to build on top of this and improve. However you feel about democracy: if you want to preserve what we have now, ask yourself how we can do better; and if you think we should throw it all away, ask yourself what we would lose. If we all asked ourselves how we can make it work, I believe that it would.
REID:
Well said, as an optimist. Where do you see progress or momentum outside of your “industry” that inspires you?
AUDREY TANG:
I’m really inspired by the global movement toward shorter work weeks. We’re seeing major pilots and policy shifts from the U.K., Belgium, Iceland, and most recently Tokyo in Japan, showing that a four-day work week can increase productivity while cutting stress and burnout. And the civic potential really excites me. An extra free day gives people bandwidth to join citizen assemblies, to contribute to open source, to mentor neighbors. Time is the raw material of collective intelligence, and redesigning the calendar is a reminder that not all gov tech is tech. Sometimes the most profound change begins with something as simple as giving back time.
REID:
Divya?
DIVYA SIDDARTH:
I think there are two things here. One, I’m constantly inspired by how billions of people around the world take technology innovation and make it work for them. As I said, we did these evaluations across India. Obviously, evals are about finding failure modes, but every time we went to an organization to ask about failure modes, we found they were using language models in super complex ways—they had downloaded Llama onto a laptop that was running without internet. All of these ways of using technology that we don’t think about enough, and don’t build for enough. The other thing is that science fiction futures are possible. That’s always going to be exciting to me. Like, I take Waymos now. And I think a lot about how we can push the frontier and make the frontier accessible to people—how we can avoid the dichotomy where either we keep pushing the frontier and it doesn’t get down to people, or we ignore the frontier because we care about what everyone else gets.
ARIA:
Can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years, and what is our first step to get there? And Divya, why don’t you start this time?
DIVYA SIDDARTH:
I think we can move from a future of artificial general intelligence to a future of augmented collective intelligence. And I think the first step to get there is to bring people and technology together to surface our best ideas, and use AI to put them into action.
AUDREY TANG:
I totally agree with that. I think a change of perspective is all it takes. We need to remember that the super-intelligence we’re waiting for is not in server form somewhere. It is already here. We, the people, are the super-intelligence. And our mission is not to look for a machine to save us; it’s to increase our own bandwidth, to strengthen our connections, to build the civic muscle, to allow our collective intelligence to emerge. And the first step could be just turning to the person next to you, online or in person, and beginning a better conversation.
ARIA:
Amazing. Thank you both so much. This was really incredible, and so much to think about. Appreciate it.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Thanasi Dilos, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Karrie Huang, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And, last but not least, a big thanks to Wendy Hsueh.