This transcript is generated with the help of AI and is lightly edited for clarity.
///
REID:
Elon had decided that OpenAI should only be a company that he owned the majority of. At a town hall, he said this is what he wanted, and the company said no. He said, “Well, you’re all a bunch of jackasses,” and left.
REID:
Sam asked me a question out of the blue, no prep, just like “So?”
ARIA:
And this is in front of the whole OpenAI staff?
REID:
– an entire–it’s an all-hands interview.
REID:
It started very early with Brian because when he and his co-founders were sitting down with me and pitching me on Airbnb, by about minute three, I was like, okay, I know I want to offer to invest.
///
ARIA:
Reid, it is so thrilling to be here at the Masters of Scale Summit. We get to do Reid Riffs in person, which we don’t always get to do. And today we are going to talk about trust. You have invested in some of the most important technology companies — Airbnb, Figma, Facebook. They are all about building trust with the consumer; trust is the foundation of a social platform.
And actually just a few weeks ago we were talking to an entrepreneur about the entrepreneur–investor relationship. Is it contentious? Are investors and board members pushing the entrepreneur to do things? Should the entrepreneur share everything with their investors, or keep some things close to the vest? Talk to me: how do you build trust with entrepreneurs so they are open and honest with you about the state of the company and how you can help?
REID:
The first thing, as general advice for both entrepreneurs and investors (person X, person Y), is that you have to start with a working assessment of how much trust the other person can hold. Because extending trust that isn’t held in trust can, in fact, be bad — even disastrous. And that’s part of the reason why, in places where there’s conflict in a relationship, people frequently start with, “Oh, let’s not go into the trust thing.” Since we have interests that are potentially in conflict with each other, okay, we’ll start from a non-trust perspective.
But of course, that collaboration is super important. Without trust, you can’t make decisions effectively, you can’t communicate effectively; you miss out on problem-solving, and you might not survive, as well.
Most companies go through difficult times. If you have no trust — or mistrust or distrust — as opposed to trust, that’s all very challenging. So it’s important for founders, CEOs, and investors to try to choose people with whom you can build a deep trust relationship. Otherwise it’s basically non-functional.
But trust building is a journey. You may go, “Okay, I think I can trust this person,” but you still need to go through the steps of building it. All trust building is a journey, right? It’s never, “Oh, I met you five seconds ago; I would trust you with my children.” It doesn’t work that way.
REID:
And part of that journey starts with, for example — the reason the entrepreneur you referenced mentioned this to us is that one of the things he was noticing was that I tend to be pretty explicit. I go, “Look, here’s where my interests are in this. Here’s the advice I’m giving you that’s in my interest. Here’s the advice I’d give you that’s outside my interest. Here’s the advice I’d give you as part of our partnering interest.” And by being explicit and putting it all on the table, the other person goes, “Oh, you’re not trying to masquerade your interests as my interests. You’re not trying to hide the fact that you have some interests that are not mine; you’re putting it on the table explicitly because you want to share the navigation of it.”
It’s kind of a classic issue: some parts of life and work and investing are a zero-sum game. We try to play as many non-zero-sum games — as many positive-sum games — as we can, but when you’re doing an investment — “OK, I’m putting in $100; what percentage of the company do I get?” — that’s a zero-sum negotiation. How do you navigate that? Part of what I actually use that kind of negotiation for is to ask: can we build trust in each other? Can we identify what our interests are? Can we navigate them? Can we deal with the fact that we have an area of genuine conflict between us and still be positive in how we navigate the relationship?
Now there are, as you know, parallels between this and the other thing I’ve thought deeply about for decades, which is friendship. Too often in friendship, for example, people say, “Oh, I try to avoid conflict because I don’t want to have conflict in the friendship.” And actually, what you want is to steer into the conflicts that are potentially real.
REID:
Because when you do that — and you, of course, try to do it gracefully, do it constructively, and pay attention to what you’re learning about yourself and about the other person — you either become closer friends or not as close friends. And if you end up not as close friends, that’s actually a better outcome, right, because now you know.
So when you find an area of conflict – and I do the same thing with people I’m working with, with entrepreneurs, with investors – I go, okay, let me steer into it. Let’s see how we navigate it. Now, one of the challenges is that the stakes are higher in the investor relationship than in friendship (which is obviously still deeply important to the meaning of life) –
REID:
–but with friends, you can kind of go a little closer. You could say, hey, we’re not going to cover this area, and we’ll still be friends over here, and that’s fine. And as long as it’s not too deep a moral topic or something like that, that works just fine. One of the things I most often tell founders, especially young founders, is that one of the weird things about taking an investment and a board seat from somebody is that it’s kind of like getting married on two PowerPoint presentations and one or two dinners. Right? Because once you’re in it, you’re in it for a while.
ARIA:
10 years. Here I am.
REID:
Yes. And so it’s actually one of the reasons why I highly recommend both investors and founders reference check, because it’s like, how do I make that proxy work? These questions around trust building are extremely important. It’s one of the things that most often surprises founders when they call me asking about different possible investors: look, reference check them back. There’s a huge virtue in that. That’s someone you can go on a trust journey with. Because all startups go through moments where you’re like, “Why the hell did we think this was a good idea?” And you want someone who goes through that with you, by the way. Now, some founders say, “Oh, I want someone who just blindly follows what I do.” It’s like, well, you’re losing out on tons. I don’t want friends who say, “Oh, yeah, Reid, whatever you do, that’s fine.”
No, I want friends who go, “Hey, you’ll be a much better person. You’ll be more capable in the world. You’ll be a better friend. You’ll be happier…if you start doing more of X and less of Y.” Those are the kinds of partners you want in business, and you have to show it. This gets back to that kind of trust building.
ARIA:
Can you give specific examples of ways you have built trust with founders you’ve invested in? Brian Chesky from Airbnb, or Sam Altman and Greg Brockman from OpenAI. How do you set that relationship on the right foot?
REID:
So the first thing, just as I was saying before, is that it’s important to be genuine, right? This is one of the things that’s a little icky when someone comes up and says, “I’m a networker, I want to have you in my network.” It’s like, no, no. It’s a relationship, it’s a dance. It’s bi-directional. It’s actually important to be intentional. But it’s intentional in the same way that, for example, if you’re learning to dance swing or tango with someone, you’re collaborating in it. You’re not manipulating them; you’re going into the dance together. And that’s an important first part of the mindset on the part of both parties. But then, of course, people say, oh, no, it should be wholly natural, never explicit. No, no: you learn how to dance better. You learn to do this stuff better.
Part of it started very early with Brian because when he and his co-founders were sitting down with me and pitching me on Airbnb, by about minute three, I was like, okay, I know I want to offer to invest.
I said, “Let’s change this into a working session. Let’s start working on your hardest problems.” Because both of us – I will get a sense of you, and you’ll get a sense of me. So we’ll start working on this problem, and that’s how we’ll go through this. And they were really surprised because no other investor had done that before, “How do you solve that problem? Isn’t that an issue? That’s not an issue – prove it to me.”
Like, “How do we solve this? What are the things that you’re thinking about doing? Here’s something I could think of.” You know, da, da.
“Boy, this will be hard. We have to stay focused on this as we’re going on, et cetera.”
So that’s one where you kind of roll up your sleeves and you’re working with them, and you’re showing that you’re doing that, and you’re showing that you’re capable of throwing ideas against the wall. And I remember I threw an idea against the wall, and five seconds later I was like, oh, that was a dumb idea. Let’s move to the next one.
ARIA:
Well, you’re also not, like, afraid of the problems. You’re saying, tell me the problems, because I’m actually here to help.
REID:
Yes, exactly. And that’s one of the pieces of advice I give entrepreneurs in dealing with investors is don’t try to say we have no risks. Right? It’s like, duh, you have risks. So if you’re not telling me about the risk, you’re either dumb, bad, or lying to me. Also bad. Right? So it’s like, “No, tell me what the risk is and tell me what your theory of the game is with these risks.”
So that was with Brian. Now, the other one was with Sam and Greg. I mean, there was lots with Brian, too. But maybe the interesting thing came from when Elon had basically decided that OpenAI should only be a company that he owned the majority of. At a town hall, he said this is what he wanted, and the company said no.
He said, “Well, you’re all a bunch of jackasses,” and left. And, you know, Sam called me and said, “What do we do?” And I said, “Look, I’ll step in. I’ll help.” And, you know, part of that was to say, “Hey, I’m not just there for you in the good times, I’m there for you in the bad times.” It was literally like the very first reaction. Greg didn’t actually know me that well.
REID:
Sam said, “I’d like you on the board — Would it be okay if Greg and Ilya interview you?” Of course it is. It’s not, “I’m Reid Hoffman; how could you interview me?” It’s like, no, that would be great. I would love that. I would get to know them better, that kind of thing. And I understand that. Like, they’re just, like, they’re just coming out of having had an investor who was like, “You’re all a bunch of jackasses. And I think you’re going to die. I’m leaving the building.” And it was like okay, I’ll come and do that.
And so in sitting down, I was like, as part of it, “Okay, one of the natural questions you guys should ask — that you might feel hesitant to ask — is how do I think I’m different from Elon Musk as a partner, as someone who’s collaborating and working with you?” And they said, “Yeah.” I was like, “Yeah, let’s start with that question, right?” And here’s one of the things –
REID:
By the way – part of the thing I always do when I’m talking to investees: “Look, I’m going to give you a bunch of references, but you’re not limited to them — you’re allowed to talk to anyone. And you can tell them — they know this — that they should tell you my strengths and weaknesses. And I will identify people to you that I think are my negative references, people I’ve worked with before.”
And one of those people got so sick of it that when someone called and said, “Well, Reid told me to call you for a negative reference,” they were like, “Can you tell him to stop sending people to me for negative references?” Right? As an instance. Because look – I want to be in truth with you.
And even before they’d done the calls — and I know they went out and did some phone calls, because I usually get a call saying, “Oh, hey, Greg called me and blah, blah” — it was like, “Oh, that’s good.” Great. Right? Let’s have a real conversation. You want to have that real conversation.
And again, like the earlier things, it was by being there and present with them. Now, obviously, part of the most amazing companies — like OpenAI, like Airbnb, and I’ve had the honor of being a part of many of these journeys — is they all go through valley-of-the-shadow moments.
ARIA:
Right.
REID:
And trust is intensely built when you show that you are there to share the pain and help navigate it. Because until you get pressured that way, you don’t know whether people will go, “Well, no, actually, no. You take all the pain now; this is all your fault, you know, you fucked it up.”
And I’ve seen a lot of investors lose their cool and just heap garbage on the entrepreneur, when you’re like – look, this is a risky game. You encounter problems. The point is not to panic. The point is not to blame the entrepreneur – unless the entrepreneur really did something, like, oh, you were embezzling? Then blame the entrepreneur. Otherwise, it’s the nature of building these things. And so I would get these calls, like, “Okay, is Sam Altman a good guy?”
REID:
And I’m like, “Yes, he is.” Right? And yes, he is. That doesn’t mean he’s perfect. Saying Reid is perfect — that’s dumb, right? Aria is perfect — that’s dumb. But it’s the, “No, no.” He actually cares about humanity and about how AI affects not just industry, but society at a fundamental level. And Sam sees me do that — he hears echoes of it — because I don’t then call and say, “Hey, Sam, I gave a positive reference.” It’s like, no, no – it’s part of what that is. And I think that’s where people learn that you’re trustworthy–
ARIA:
Absolutely.
REID:
– in the relationship.
ARIA:
What do people say about you when you’re not in the room? That’s what you want to know: that you are the same person in the board meetings as when you’re out repping them — when you’re talking to other investors, when you’re talking to other–
REID:
Founders, journalists — the whole thing.
ARIA:
You told me a story about going into an OpenAI meeting where Sam interviewed you for a fireside chat. Can you say a little bit more about that meeting?
REID:
So this was a classic Sam thing, which was hilarious, because it gets at, again, trust maintenance — which is one of the reasons it was unusual. Trust maintenance is: you have to have CEOs that trust that you, as a board member, will not operationally undermine them except by direct discussion, either one on one or in the board meeting. Like, if I’m undermining you, it’s because I’m doing it to your face, right? It’s part of what we’re doing for the responsibility of the company.
And so one of the things I’ve learned over time — one of the reasons why now, for maybe even two decades at this point, CEOs trust me to talk to anyone in the company, trust me to talk to the exec staff.
REID:
I’m like, “Hey, I want to call these execs that want to talk about this,” and they’re like, “Great, let me know how it’s going,” because they know I don’t undermine them.
ARIA:
Absolutely.
REID:
You have to be careful. Because say Aria is the CEO of some company, and I’m sitting with your exec staff and I say, “Oh, so how good a job do you think Aria’s doing?” That question itself is undermining you. Do not ask that question. Say, for example, that’s a question I’m interested in. The way I would ask it is, “So tell me a little bit about how you think the org’s doing.”
ARIA:
Right.
REID:
Like, what are the things? And as you go through it, you’ll get the echoes of the things that you need in order to get an assessment of that. And that’s the way to do that and to help the organization, to help Aria, et cetera, without actually, in fact doing the undermining thing. So I’m very careful about not undermining the CEO.
So, we’re sitting down and Sam asked me a question out of the blue, no prep, just like “So?”
ARIA:
And this is in front of the whole OpenAI staff?
REID:
– an entire–it’s an all-hands interview. Because again, like, Sam was like, “Would you do this to help build confidence in the company?” And so absolutely, of course. And he goes, “So what are the conditions on which you’d fire me?” (laughs) I’m like, “Uhhh…”
ARIA:
You’re like, what do you want me to say?
REID:
Yes, I don’t undermine confidence! I was like, “Well, you know, it’s – obviously what I would start with is, you know, kind of like, I wouldn’t just do that. I would actually be working to try to help the whole company and all the rest.”
He’s like, “No, no, no. Answer the question. What would get you to fire me?” And I’m like, “Well, okay, right. Like, look. There’s the obvious things.”
ARIA:
You’re stealing money.
REID:
Those are all things that are like, you’re gone the next minute, right?
ARIA:
Totally.
REID:
But there’s also things around – okay, working in this company is a position of great trust – and so if you’re not holding AI’s benefit for humanity, if you’re not holding that in deep trust, there are certain things that could break it that are equivalent to the gone-in-a-minute category. But there’s also other things that would be like, hey, I’m working with you, and if it’s not working, I’m going to be escalating: “This isn’t working.” Right?
And one of the weird things that has happened: on functionally every board I’ve been on where we’ve had to fire the CEO, I’m always the person who’s asked to do it, right? And so I actually do do that.
REID:
But by the way, I’ve maintained a relationship with every single CEO because I do it in a human way, I explain it, I don’t ask them to agree with the judgment. But, here’s how we got to this judgment. These are the kinds of things we’re doing. And it’s like, look, if we got there, we would get there.
Because it’s not just about your performance; it’s your performance for the company, but it’s also your performance for society, for humanity. Are you delivering in the right way? And obviously I want you to. My first job is to help you succeed, but my next job is to oversee the transition when it needs to happen.
ARIA:
Well, continuing on the theme of trust, a lot of people, when they talk about AI, they talk about trust. They say, “I don’t trust these companies, I don’t trust these models, I don’t trust the outputs because of hallucinations.” How are we going to get to a place where we’re building trust with AI and society?
REID:
Well, there’s a couple of things as baselines. One is, a lot of times people say, “I mistrust it.” Look, from journalists and everyone else it’s generally, “Oh, mistrust this, mistrust this. It’s bad because it used copyrighted material to get made,” or blah, blah, blah.
But the first is: Use it. For individuals — use it, right? Get a sense of it. They say, “Well, but I don’t know what’s going on with my data that’s going into this.” Look, these companies are trying to build trillion-dollar franchises. They care a lot about not screwing you over with your data, right? People say, “Well, but I have this genius secret idea.” They’re not looking through your chat logs to mine it.
And if you say, “Well, if I say something personal like I’m worried about my friend’s mental health,” and so forth, they’re not going to go publish that.
There’s much more natural alignment of incentives there. So try it, and try it for serious things. Now, that’s on the individual side — realizing what kinds of things users can extend trust on. The companies obviously need to do a lot of things too, and part of that is to hold that trust. So, for example, you say, “Well, I’ve got a whole bunch of chat logs from a bunch of people.” Well, make sure that you’re using the best modern cybersecurity you possibly can.
REID:
Make sure hackers can’t get access to it; make sure it doesn’t get accidentally revealed, or published to the internet or the dark web, and so forth. You have a responsibility: commit to it, engage in it, and take your lumps if you get it wrong. Right? That kind of thing is important.
One of the things I’ve been saying since at least pre-Covid, and obviously still now, is that as technology becomes more and more part of our everyday fabric — the entire life around us, mobile phones and AI agents on them, et cetera — I think companies should be very explicit about: Here are the things that we think we are holding in trust for you. Here is what we are building towards. Here is what we’re doing, here’s what we’re trying to do, the changes we’re trying to make in the world. Here’s how we’re trying to be good for you, and here’s what we’re signing up for that you can hold us accountable for.
And I think that dialogue is very important. Now, it’s of course still important that they have, like, a 35-page terms of service, which is, you know, the legal gotcha — because some person goes, “Oh, I want to sue you for this.” And when you read through a terms of service, it’s always, “We’re not responsible for this, and we’re responsible for this, and you’re responsible for that!” And you have to, because we live in a legalistic society.
REID:
But nevertheless, I think it’s important to say, “Look, here is what our theory of what’s good for you is. Here is what we’re building towards. Here is what we’re seriously investing in and this is why we’re doing it.”
And I think being in that dialogue with the individual consumer, with the communities, with the societies, with governments, with NGOs, it’s one of the reasons why people are surprised when I’m like, “No, I like journalists playing critical functions. It’s actually important.”
And then people go, “Oh, well, why did you provide funding for ‘Cofounders?’” Like, no, actually, in fact, it’s important that all of us are engaged in this dialogue to make this technology more human. That’s a critical part of it. As opposed to, “You’re not allowed to criticize me!” That’s bad. That’s trust-eroding. It’s like, no, I welcome your criticism.
I may disagree with it. I may say, “Hey, I can’t get to that yet,” et cetera. But that’s an important part of the dialogue for how we improve, right?
ARIA:
I mean, to your point on dialogue, the politicians who do town halls vs. the politicians who say, “I’m never doing a town hall because I don’t want any questions from my constituents.” Who do you trust more?
REID:
Yes, exactly.
ARIA:
So we’re in the midst of a video model renaissance. We’ve had so many video models come out in the past few weeks. And Sora 2 is everywhere. And last week, my Twitter was full of Sam Altman stealing things from Target, and our internal Slack was full of our co-worker hitting game winning shots as a 76ers basketball player. It is everywhere. And some people are saying, “This is amazing.”
For so many people, the burst of creativity they’re getting from these new video models is incredible. Some people are saying, “This is great, because it’s getting more people using them. And the more you use them as a social platform, the more you trust them.” Other people are saying, “I thought these companies were gonna, like, get to AGI and cure cancer. Why are they giving me seven-second videos of my face?”
How do you see the new release of Sora playing into trust? Or, even setting trust aside: what does this new release and its amazing adoption tell us about this moment in AI?
REID:
Well, maybe there’s three things. And by the way, iterative deployment is important for all technology. There will be fail moments in it. They’ll have to iterate and change it. That’s, by the way, not a fail for them. That’s called iterative deployment and learning. And so the feedback is really important in terms of what things are happening.
And you cannot get it perfect before you ship it. If you tried to get a car perfect before you put the car on the road, you’d have no cars. Some of it comes from regulation, like questions about seat belts, and some of it comes from industry improving the work as we go — think airbags.
And so part of it is a very strong plus-one to launch and iterate and engage. Now, obviously, try to avoid catastrophic things. Try to avoid things that break systems, break financial systems, cause serious harm across a large number of human beings — or even a few human beings, if it’s very serious harm. And obviously they did a whole bunch of that before they got it out: they did a lot of alignment testing, trying as best they could. But that doesn’t mean it’ll be perfect; it’ll be iterated. Now, that being said, I think there’s a couple things. One is, again, it gives another surface for people to interact with this thing. And I think part of this surface is to enable people to engage with their creativity.
REID:
A picture is worth a thousand words — that’s part of the image generators, but also now video. Like our co-worker going, “Oh my God, I always wanted to be a 76ers basketball player.” It’s a good thing. Some say, “Oh, it’s sycophantic.”
No! It’s fun, it’s play.
ARIA:
Totally. Having a good time.
REID:
Yes, that’s a way of doing this. And I think it creates a palette. I mean, also, for example, the Bro-botz stuff that Greg’s doing, you know, hilarious.
ARIA:
Yes, it’s hilarious.
REID:
And you know, if people haven’t checked out Bro-botz, they obviously should. Because part of his thesis — he’s Greg Beato, co-author of Super Agency — was to say, “Look, part of what Super Agency is about is: I have zero experience with making any video, any music, and look at what I can do because of this.”
And I can express my kind of funny – like, you get Greg’s sense of humor and commentary on AI systems and all the rest in a fun music video – as a way of doing it. So that creativity landscape I think is important.
REID:
Now, “I thought you were going to cure cancer” — which of course Sid and I are working on. But what people don’t realize is, look, there’s a bunch of different iterations in the evolution of these models. Part of the reason why they work on images, part of the reason why they work on video, is because the multimodal side of these things — comprehension and understanding, but also output — is actually part of what gives us this massive elevation in functionality across all of this.
And so one of the simplest, lightweight ways to say, “Hey, we’re going to have this thing start understanding the real world” — through consuming and understanding videos, through generating videos, through getting feedback on the videos — is actually a part–
ARIA:
It’s on the path to–
REID:
It’s on the path of the AGI mission, and on the path of, for example, what companies like Aurora and others, Google and Waymo, are doing intensely: creating simulations with AI-generated data. Because it’s like, “Well, can we have these trucks drive in every possible scenario and circumstance?” No, you can’t do that. It just doesn’t happen that way, because some of these circumstances are very rare.
So one of the things you use generated data, simulated data, et cetera, for is: let’s go through every possible really bad circumstance.
ARIA:
What’s the rarest edge case we would not see in the real world even in a million instances that we can generate?
REID:
Yes. And part of the reason you do that and train on all of it is because it’s part of how you get substantively safer than human drivers — because human drivers never encounter those circumstances. Maybe they have some more natural, biologically inspired reflexes in how they drive, but they haven’t been trained on all the what-happens-when this, and that, and the other come together, and what you do to recognize you’re in that circumstance and act on it. And that, again, is the world-generation part of these things. And going into video generation is a step in that direction.
ARIA:
We talk about trust — we’ve spoken about trust and entrepreneurs, trust in AI. And if we actually zoom out, I feel like geopolitical trust is also a big issue, whether it’s the US and China or what the potential for AI collaboration is. You helped set up the UK Safety Institute, and most of your focus has been on American AI — specifically, actually, how AI is American intelligence. We want to imbue AI with sort of Western values, democratic values. But you’ve also spent some time recently in Europe, and you were talking to leaders in London, in Paris, et cetera, about their AI efforts. So why do you think it’s also important to engage with some of these European leaders?
REID:
I think there’s a number of reasons. First, I actually think the cross-cultural, cross-border dialogue with other cultures that share the key values we find essential is very important — human rights, individual rights, autonomy, the ability to participate in a democracy and have a voice, the ability to say what are the ways we should empower individuals in a society where governments are very important — but governments should help shape and enable those values versus oppress them. And that’s part of what we mean by Western democratic values. And this may shock some people: the US is one of the oldest amazing democracies on the planet, but there are others too, and we can learn from them.
And saying, “Hey, we should learn from them, we should collaborate on this together, we should learn about these things,” is just super important. And I think it’s better in partnership, better in alliance, better in collaboration. And so obviously, I’ve got years of connectivity with the Brits — went to school at Oxford on a Marshall Scholarship, which the British created as a thank-you for the Marshall Plan, building bridges. But one of the things I learned from being at Oxford was: ah, here are the things I can learn from a more global society that I hadn’t learned just by being an American.
And by the way, of course that’s part of the wonder of America, is like being in a country based on immigration from many different places. There’s a number of things where like immigrants bring in some of their best things, like hard work, cultural values and certain things.
ARIA:
Not to mention food.
REID:
Food, yeah. When people say, “What’s your favorite food?” they usually say something like Mexican, or sushi, or da-da-da, as their answer.
And so part of it – and I started this decades ago here in the Valley, and now it’s also a little bit more on the road – is when a minister from a Western democratic country comes to Silicon Valley and says, “I want to learn from it, I want to understand how to help my country with this” – if I can, I will always meet with them.
Because the more Silicon Valleys there are, around the world, especially around the Western world and creating technology, having humanistic values in terms of what you’re building, etc – the better we all are.
ARIA:
I mean, it harkens back to your point. We are not playing a zero sum game, actually, when it relates to building up amazing Western values with AI talent in other places, this is positive sum. And so we all win.
REID:
Yeah, exactly. And so the first time I met Macron was actually when he was a minister, not the president, because he came over here with, “Hey, I’m trying to figure this out.” I got a call from our friends saying, “Hey, this minister from France, Macron” – who I’d heard of, read some articles about – “is here.”
I’m like, “Great.” He’s like, “Can you come by and talk about entrepreneurship, investing, technology, risk-taking? He wants to figure out how to help amplify this within France.”
It’s like, great! I’d be delighted. We spent two hours talking. Matter of fact, I think he’s the first minister that ever went, “I want to take a selfie with you.” He was like, “I got my phone, let’s take a selfie. I’m tech forward!”
REID:
–as a way of doing this. And it’s been Singapore, Italy, France, the UK, all of these folks. And part of it is – I actually tell a whole bunch of US government people this, because it’s really funny – too few of the US people come to Silicon Valley and say, “Look, how do I learn from here about what I can do to help the rest of the country? How do I bring economic prosperity, technological transformation, innovation, invention?”
It’s not to say, “Do it exactly Silicon Valley style.” No, it’s better to have diversity, right? But sometimes it’s a governor from Colorado – which I’ve done – and that kind of thing is actually super important.
ARIA:
So you mentioned in your most recent meeting with Macron, you talked about the Lafayette Fellowship, which is actually having Americans go to study in France. So when you support something like this, like, why do you do it? Is it like for a tech pipeline? Is it exchange of cultural values? Like, what do you think the essence of the Lafayette Fellowship is?
REID:
Well, I’m not averse to a tech pipeline, right? I mean, win-win is a business slogan, but it’s a good principle.
ARIA:
Sure.
REID:
And so — but in this case it was simple. Like, I’ve also endowed some of the Marshall Scholarships because I think this kind of alliance and bridge building is a massive win on both sides. And so for the countries to whom we’d go, “Hey, we are allies in the world. We are trying to create an elevation of the human condition. We have these principles about how we deal with children, these principles about how we deal with human rights, these principles about how we deal with political systems and enfranchisement. We should be allies on these things.”
REID:
And so when I was asked, I was like, “Yeah!” I think this kind of bridge building – it’s one of the things that certainly helped open my eyes. Because until I studied at Oxford, I hadn’t realized both how much of an American I am and also how much of a Californian I am. And it was like, oh, that’s really interesting. And by the way, it’s both the delight of that and also then evolving and learning from there.
ARIA:
I love it. Reid, thank you so much.
REID:
Appreciate it.
ARIA:
Always a pleasure and a big thanks to Dan Nielen, Eve Troeh, Bryan Pugh, Aaron Bostinelli, Jeff Berman, and the awesome teams at WaitWhat and Earth Species Project.

