This transcript is generated with the help of AI and is lightly edited for clarity.
ARIA:
Reid, great to be here today.
REID:
Yes. In person in New York. Exactly.
ARIA:
In this beautiful setting. So the tech world is abuzz because last week President Trump announced that they were going to institute a hundred-thousand-dollar fee—perhaps annual, we don’t know, some people thought annual, some people thought not—on H-1B visas. And as we all know, Microsoft, Amazon, Google, these big tech companies really rely on H-1B visas for a lot of their workers. And so some people in the tech community are saying, “This is going to destroy innovation. We are getting the best talent from all over the world to come here.” Other people are saying, “These big tech companies, they can afford it. This is fantastic. This is a way to raise revenue, but still have these employees here.” What do you think about this new potential—we’ll see, it’s an evolving topic—this potential idea of a hundred-thousand-dollar fee on H-1B visas annually?
REID:
Well, so this is funny, and here we are. So we’ll share a first awkward moment, which is that Trump’s idea actually resembles an idea that I’ve been pitching for eight-plus years. So it’s like, “Oh, okay.” Now, I think you have to do the whole idea; otherwise, it’s a disaster. And the problem is this is only a part of it. Roughly, the idea that I’ve been pitching is: you should have unlimited H-1Bs. You should impose an additional tax on them—whether $100,000, once, yearly, whatever is the right thing. And you should make some provisions for startup companies. Because the startup companies obviously can’t afford the $100,000, or that sort of thing.
ARIA:
For startups, would it be cheaper? Cheaper than for the Amazons of the world?
REID:
Yes. Right. Because you absolutely want the talent. That is one of our great superpowers, from the very founding of our country: immigration. You want that talent. And by the way, well, wait a minute, does it take away American jobs? It’s like, no, if this person comes over here and does this high-paid job—because it should be; there’s a bunch of regulation about the fact that it has to be comparable salaries and all the rest—then they’re also patronizing restaurants, hiring accountants, using dry cleaners, hiring electricians, renting apartments, and all the rest. And all of this stuff is actually adding to our economy. That’s why you want it here. And frankly, one of the challenges that you have to be careful about in this—and you have to set the pricing right—is you say, “Well, for the large multinationals, they can hire in other countries too.”
REID:
So you say, “Don’t hire them here, hire them in a different country.” Then America loses all of that derivative revenue. So you actually want that immigration. You want the people here. But adding an extra tax is good for a couple of reasons. One, is this seriously talent you can’t get here? Because then you’ve actually got an economic incentive. And that’s why unlimited. Because you’re like, “Well, hey, if I’m paying extra for it, I’m only going to pay for it if I can’t hire it locally.” But then we’re bringing that talent in. We’re having all that economics on our side. So unlimited H-1Bs. Yes, a tax—$100,000, whatever, figure it out. Is it $100,000? Is it something else? It could simply be an additional X percent payroll tax. In which case, by the way, you might even then be able to not make a special provision for startups, because that may be within the economic framework.
ARIA:
Right. If it was based on revenue, or based on something so that smaller companies, people who were starting out, were paying less, and bigger companies were paying their fair share. And so what do you say to the people who are like, “Yeah, that’s great for innovation, but this absolutely takes an American job. This is an American tech worker, a coder, who would be working in that job.” And you say, “No, that job would be overseas anyway.” Why doesn’t that explanation work?
REID:
Well, for most of these companies, when they’re hiring, there’s a quality bar they want to get to. But at that quality bar, they’ll more or less hire as many people as they can find. So, there’s all kinds of natural incentives—why they would hire an American. And the usual complaint is, “Oh, you’re lowering the salaries because the immigrant will take the lower salary.” Now there’s a bunch of different regulation about how to make that not happen, and what you have to prove, and all the rest of that within H-1Bs. But that’s part of the reason why, I think maybe even ten years ago, I started peddling an idea like this. It was like, look, if you just make it structurally more expensive for the companies, that will naturally play out. And so any answer of, “Well, could you hire an American?” It’s like, well, “If hiring Americans is cheaper for me, I’ll hire an American.” So then you don’t have to get into the “Well, does the regulation work? Am I actually really not lowering wages across it by bringing in people…” et cetera, et cetera. And that’s part of the reason why this portion of the idea, of making it more expensive for companies, is, I think, a good system.
ARIA:
Got it. So, ultimately, I agree with making it more expensive, but there have to be provisions for startups because we can’t shut out that ecosystem.
REID:
By the way, you should make it unlimited. And part of the dumb thing that exists right now is this lottery system and all the rest of it.
ARIA:
Absolutely. Okay, so when we talk about AI, we often talk about programmers, developers, those are the people who, especially now, can get some of the very early benefits of AI. And a new survey from Stack Overflow said that 84% of software developers use AI or are going to use AI in creating their code—which is super positive—but 46% have real concerns and don’t trust the code that they’re deploying. So half of them say they waste time debugging, they have to go check the code, they go get a second opinion. And so the question is, how do we bridge this gap from software developers using AI but not trusting it? Or do we not need to bridge it? Is it good to get a second opinion to debug, to make sure to check the AI output? Or is this a problem that needs to be solved?
REID:
Well, it’s definitely not a problem that we need to focus on. So the baseline is: it’s good—not because AI code is necessarily buggy, but because it’s good to be diligent about what the outputs are, and what you’re doing and how to do that. And of course, we’re in the early stages of this—the classic line that the worst AI you’re going to use is the AI you’re using today. So it’s accelerating, but also: what is our new pattern of doing software development? And how does software development work relative to producing all different kinds of code? And this is one of the things—for some kinds of code you’d say, “Hey, I’m producing the equivalent of a bunch of different scripts.”
REID:
I wouldn’t be that wonkish about it unless there was something mission-critical in it. “I’m producing something that is infrastructure code for how the whole service is working”—I’d be more wonkish about that, as you should have been even before AI, in terms of how this works. So, I don’t think it’s a big challenge. And I do think that one of the things that is probably the most important thing for developers to keep in mind—because the natural thing for every skilled professional, a developer, a lawyer, a doctor—is to say, “Oh, I discovered one bad output. Tool not ready.” And you’re like, “Nope. That’s a bad way of putting it.” Because the tool’s constantly improving. And one of the things that I tell people pretty constantly is: if you haven’t found something where the tool, where AI, is useful to you in a serious way—not just, “Hey, what can I make for dinner from the ingredients in my fridge?” or, “Can you craft me a sonnet for my friend’s birthday?”—all of which are great, but something that actually impacts part of the way you’re working and actually adds to your skill set and your capability—then you haven’t tried hard enough. Because it exists for everybody right now. And that’s true for developers. So don’t find an error and wave it off, but be constantly experimenting with, “Okay, how do I use this? Which are the things that work really well right now, and which are the ones that don’t work well now?” But by the way, even if they don’t work very well right now, keep trying them in various ways and keep an open mind. Because it’s going to be improving.
ARIA:
I feel like so often we hold AI to the standard of a hundred percent when humans aren’t a hundred percent. And if we can use AI just to get better than where we are with humans, we’re going to be seeing big improvement. And one place where you and I talk about this all the time—where AI can be enormously helpful—is healthcare. So a recent study from Johns Hopkins found that when they were doing pre-surgery echocardiograms—this is what they do: they get a score so that they can see, in the next 30 days, how likely is this patient to have complications—a stroke, problems from surgery. And their model right now spits out a number they’re able to understand, and they have 60% accuracy as to which patients are going to have complications. They created a new model: they fused existing echocardiogram data with age, type of surgery—all of this info about the patient. And they found that AI—and this was a study of 37,000 patients—can get it right 85% of the time. So you’re going from 60% to 85%. And still some people are saying, “That’s not good enough because we’re only at 85%,” even though there’s this big delta. What do you think are the ethical considerations that we’re going to have to navigate, especially with something as critical as life and death? And how can we convince people that this step-change improvement is worth it?
REID:
Well, there’s a couple of things where most people misunderstand medicine and a bunch of other things. So the first is, when you’re taking a drug for a serious condition, or having a surgery, or anything else, it’s a percentage game already. It’s not zero or a hundred—you’re already playing a percentage game. And even though a doctor might say, “Well, I’m actually prescribing this medicine to you because this is the way we understand the biology to work, and this is the question around what we’re trying to do,” it’s actually never a hundred percent. It’s actually, “We think this works in your condition a high percentage of the time, and sometimes it doesn’t work for people.” And maybe once we get to really deep precision medicine and genetics and a whole bunch of other things, my guess is it won’t get to a hundred, it’ll just improve the percentage.
REID:
So it’s a percentages game. People misunderstand that, because they do think exactly as you say: “Well, humans are infallible.” It’s like, no, that’s not the way it works. And by the way, they’re very, very good, and they’re playing a percentage game on your behalf. So the short answer is, in this case, you need to say, “No, no, we need to deploy the thing that the numbers dictate.” Now this gets to the second part of it, which is part of the progress of how medicine in specific works: people start trying things and they see if it works, and then if it works a percentage of the time, they go, “Oh, we’ve got something here. This could possibly be…” And then frequently—not always; sometimes they figure it out because of causal stuff in advance—they figure out the causal stuff afterwards, after they started going, “Oh, this works.” And by the way, this is how the earliest medicine started working. It’s like, “Oh, aspirin works…”
ARIA:
Right, Tylenol works too?
REID:
Yes, Tylenol works too. I think we all should be carrying around some Tylenol as part of our—you know, we’re pro-science, we understand. And so, that percentages game is the thing that makes it work. And so people need to understand, it’s like, “Oh my God, you increased the percentages? Give me that.” And so give me the AI, even though we don’t understand—okay, it’s making a prediction. We don’t understand why it’s making that prediction, but if that prediction is all the more accurate, then we’ll do it. Now we don’t stop there. We go, “Of course, you should always try to increase the prediction.” But also go, “Okay, why is that?” And then you go, “Great, we’ve got this thing. Why is it?”—and because then, by the way, we can begin to see the sorts of cases—“Why is it identifying these people, who we weren’t identifying before, as very important to keep in the hospital, treat again, bring in again sooner, et cetera? Why is that? What is the thing it’s seeing that we’re not seeing?” And then that improves our science.
ARIA:
So to make changes and improvements in any field, but especially in the medical field, you obviously need better technology. You need humans to come on board. You need the clinicians and practitioners to say, “Oh, this is working.” And then, third, you have the legal and regulatory framework. And that’s going to be an impediment here, too. What do you think are the ways that we can get the government, legal—all of those things—on board so that we can integrate AI better?
REID:
Well, I think there’s two parts to how government regulation operates in this area. One part is a set of very good thoughts around: okay, does it work appropriately? Are you avoiding downsides? Are you being appropriately inclusive and broad-minded across many different conditions or many different—like young people, old people, multiple racial characteristics, a bunch of other things? Men, women. Like, “Let’s cancel a whole bunch of science studies because they have the word women in them.” That’s a great idea. That’s sarcastic, in case anyone misunderstands that. And so that part is good. The problem also is that we tend to be very influenced in our legal system by trial lawyers.
REID:
And so the trial lawyers go, “No, no, no, we should hold this to a much higher standard.” And we should do it because that way we can try to impose essentially the trial lawyer tax on the whole system. And that one should be much more contained and regulated. And not allowed to infect what the regulations are, how it operates, as much—not zero, even in that case, but much less. And so, you need to say, “Well, we should be able to deploy, we should experiment,” as long as we can demonstrate good evidence. For example, the classic problem with this is where we’re going to save ten new lives, but we’re going to lose this other life that we didn’t lose before. But that’s, by the way, how medicine works, too. So it’s like, okay, if you’re saving the ten and doing this in the right way, even if you lost a different life than you would’ve lost before, that’s how we progress on medicine.
ARIA:
Absolutely. I mean, medicine is trial and error, and we’re just getting better and better, and building upon previous evidence, to see how we can make it even better. So recently, Zoom’s CEO, Eric Yuan, joined many big-name tech leaders—Bill Gates, Jensen Huang from NVIDIA, Jamie Dimon from JPMorgan Chase—in saying that because of AI, and because of the increases in productivity, we might be heading to a three- or four-day work week. For some people, that sounds like utopia. This is amazing. They can have more free time, more time with their family, whatever it might be. Some people think that this doesn’t sound good. So I think there are two parts to it. One, is this going to happen? And then also, what are the technical and cultural barriers? And I think the alternative—some people are saying—is that instead of a reduced work week for everyone, we actually might just have a bifurcated system where there’s a lot of unemployment, especially in entry-level roles, and other folks are doing just fine with AI. What do you think about the three-day work week predictions, and what are the barriers to getting there?
REID:
So the first is, if you had to choose from a society perspective between a three-day work week and a bunch of people out of work, you’d choose the three-day work week. It’s a really important part of our social objective to make sure that a bunch of people do, in fact, have employment, have things to do, have a sense of at least some purpose in the work. I mean, obviously, people say deep purpose—great, even better to have deep purpose. But it’s like, “I feel like I have a role. I have a role in the organization. I contribute, et cetera, and I earn my money.” And I think that’s good to have. And I think there are various ways that people are worried about the cognitive industrial revolution that AI is bringing, that go, “Oh shit, we’re going to have a bunch of unemployment.” And by the way, we’ll have a lot of job transitions, which will involve unemployment at minimum in the job transitions, and so forth. And we need to do things to solve that. And one of the benefits, as you know, is AI is a good tool for that. AI can help you figure out other work you can do, AI can help you upskill, reskill for that, AI can help you do the work. We just want to make sure we’re deploying AI to help these job transitions, too, even as what the work looks like, what the job looks like, changes. Now, your average person says, “I don’t want the job change. I’ve been doing this for X years. I’m perfectly happy.” And it’s like, jobs change. I understand. You’ve got to think about it as: you were a horse-and-buggy driver and the cars are here now.
REID:
And you could say, “Well, but we should all run with horses and buggies.” Like, no, actually, in fact, society’s much better with cars; that’s what we’re going to do. And so, like, let’s help you adjust as part of it. Now, that being said, I actually don’t think that we’re heading—I think we have a lot of job transition—but actually, in fact, I think that in a lot of cases, even though you have a massive productivity increase from AI—just like you had massive productivity increases in the industrial age—I don’t think that ends up with a systemic, “Well, people don’t need to work anymore.” I think that’s way further out than most of the critics who are on that side think, and I’m not sure exactly when we might get there.
REID:
I mean, it might be many lifetimes. Not just my lifetime, your lifetime, our children’s lifetime, et cetera. So, I’m uncertain about that prediction now. Not a hundred percent uncertain. It’s one of the things to think about and prepare for and whatnot. Now you get to the, “Okay, well, does that mean with all the productivity we go to three- to four-day work weeks?” Which, by the way, some societies have done in various ways. The Germans are an obvious example of that. Now, the thing that’s brittle about that is that human beings divide into groups and we compete. And part of how we have competition is: this group’s going to work a lot harder. So one of the things I think we see coming for the European auto industry is the Chinese auto industry.
REID:
And it’s also coming, of course, for the American auto industry too. Because if they achieve much higher productivity—and some of that’s robotics and all the rest—and they go, “Well, we’re willing to work six days a week. We’re willing to do 996. That’s what we’re doing.” Then their industry can very well wipe out the other industries. And so there’s a competitive landscape to this; that’s the underlying point of view on this. That’s part of the progress of capitalism and everything else. That competitive landscape is part of what happens. Now, part of the reason why the Germans could do that is they had a very well-tuned system of technology and high quality—not just the technology of the end product, the cars, but also how they make it, the apprentice system, how they trained deeply skilled people in doing it. There’s a bunch of organizational techniques that I think are good for the rest of the world to learn from, which is how you have capital and management working alongside labor and transition. But what you have to understand—and this is one of the things that’s going to be very difficult for Germany in the next ten years, difficult for the U.S., more for Germany, I think, in this case—is your competitive situation has suddenly increased. And all of a sudden, three-day work weeks, four-day work weeks don’t work in that competitive situation. And I think this prediction of, “Well, we have this much work to do and now we have this much productivity”—it’s like, well, actually work always goes up. There’s always competition. We’re always generating productivity. So I don’t think we’re anywhere close to a three- or four-day work week there.
ARIA:
Okay. So I mean, economists have gotten this wrong forever. A hundred years ago, they predicted that in the U.S., as people’s incomes went up, they would work less. And now in the U.S., we’re in this sort of strange situation where actually the more you make, the more likely you are to work more. And folks at the lower end are actually fighting to work more. Some of them are only working 24 hours a week, in either the gig economy or an entry-level job, and they want to work more, but their employers actually won’t give them more time, and so they’re asking for more hours. And so I know you don’t like predicting—especially more than two years out, because you think it’s a fool’s errand—but if I had to ask: ten years from now, would you predict that the economy we’re in looks more like a three- or four-day work week? Or do you not think that’s going to happen?
REID:
I don’t think it’s going to happen.
ARIA:
Alright. So we are now going to shift from a focus on tech to a focus on government and institutions. For our listeners who don’t know, you launched The Trust in American Institutions Challenge with Lever for Change, which is all about rebuilding trust in important American institutions—our government, hospitals, libraries, the media. How can we ensure that we have a cohesive American electorate, consumers, et cetera? And the good news is that yesterday we announced the five finalists for the challenge. I’m going to get to them in a second, but I would love for you to talk about Lever for Change and why, in particular, you wanted to partner with them and their model for doing this.
REID:
So, unsurprising to many people who know me, my point of view is that we live in a networked age. There are a lot of different things that come from that. Now, that also shapes how we should do philanthropy. Because when you think about a serious problem in society—climate, anything else—you should be thinking, how do I have a potential network solution? Because we have network challenges, we should have network solutions. And Lever for Change realizes this. Because it says, well, actually, in fact, let’s layer in multiple networks to help solve problems that otherwise might look like, “Ah, is that even solvable? Really, we need some new ideas on this,” and a bunch of other things. And so it’s like, okay, we use an RFP—request for proposal—prize process to go out to many, many different individuals and nonprofits and say, “Here is a problem we’re trying to solve.” Then, because the money is serious, people go, “Okay, I’m going to submit a proposal.”
REID:
And then you deploy two other networks: one network of experts that Lever for Change has through its excellent origins in MacArthur, who are experts on all kinds of things—and it’s not just the MacArthur genius people, it’s many others, in terms of how this operates—to look at these proposals, comment on them, rank them, score them on different things. Say which things have high probability, which things have low probability, which might have challenges that aren’t obvious. Then you have another network of judges that works through it and says, “Okay, these are the ones that we get to.” And you run through this process by which you get to semi-finalists and finalists, and then the allocation of money for doing this.
REID:
And it’s a call from a network of ideas—by the way, just like networks of entrepreneurs in Silicon Valley—networks of experts and decision makers, which investors and venture capital play into. Now they tie it to the capital, then the judging of it and making it happen. But it doesn’t even end there, in terms of the networks. Because then what you’ve done is you’ve built massive network databases, including of interesting individuals and nonprofits with projects, and Lever for Change has a whole bunch of funders who are specifically interested. And you go, “Well, actually, in fact, this one won this particular challenge, but what this nonprofit’s doing over here is of exact interest to this funder.” So, part of the benefit of participating in this network, for the nonprofit, is that Lever for Change puts just as much energy into routing every good project to any funder or any other match. They go, “Oh, this one is actually a good match for this.” And that’s again a network property. And that’s why it’s Lever for Change, which has just been doing amazing work.
ARIA:
You believe in Lever for Change and the model; it’s so based on networks, which is what you believe in. And we were talking about: what issue should we focus on? We love this model, but what is an important issue? And we chose trust in American institutions. Why was that so critical to you?
REID:
Obviously, we live in this very tumultuous time. There’s a whole bunch of erosion in trust of all kinds of institutions. And in the last couple of months, a sustained assault on trust in the institution of science, among many other things—whether it’s universities, corporations, government, et cetera. And what people don’t realize is that, even as institutions have limitations, this is fundamentally how our society works. The mistake, if you look at any history, is tearing all the institutions down. It is the French Revolution, it is the Cultural Revolution, it is Year Zero in Cambodia. It’s a disaster to tear institutions down. You want to renovate them. And trust is an important part of it. And people say, “Well, how do I trust that one has this?” Well, one, you should have trust and constant renovation anyway.
REID:
But we should all be working on how we restore trust, because that’s how we live in a healthy society. This isn’t even just talking about democracy; this is talking about a healthy society. Like, why does money work? Because we trust it. Why does banking work? Because we trust it. If you don’t have that, we have a very, very serious problem in society. So it was partially: okay, what is this problem that I think many, many people understand? What is countervailing to the general sentiment of many people saying, “Burn it all down”—including people who are in government—in terms of how it operates? And to say, no, no, what we should be focusing on is: what are the steps we take to renovate and build trust? And so focusing on that building of trust—again, with Lever for Change—is something I want everyone to think about, how important that is, and to start acting in that direction.
ARIA:
And I think, importantly, this is a non-partisan initiative. You decided on this challenge, in this area, long before the presidential election. So now, no matter who’s in charge, what party, trust in American institutions is critical. And so, as I said, just yesterday we announced the five finalists. And again, the great thing about Lever for Change is that each of these five finalists will get $200,000 to create their project, plan for the future, make a better plan. And then one of them will receive $9 million, which we’re very excited about. But to your point, all five of them are going into the Lever for Change network. And so, regardless of who wins, we’re excited for all five of them to get the spotlight. They’re all very worthy projects. So I’m going to go through each of the projects. I would love for you to tell us super briefly why each one is important. And just for those who are watching and listening, we are not putting thumbs on the scale. All five of these are amazing, and we just want everyone to know about them.
REID:
And there’s a judging panel. Aria and I are not the judging panel.
ARIA:
Alright, so the first finalist is the American Journalism Project.
REID:
So, local journalism. Well, there’s obviously a whole bunch of aspersions cast around journalism on a national level—I think, actually, much of that is misplaced. But local journalism is also super important. You can see it. It’s in your life. It’s what happens. And it’s one of the ways that you can begin to restore trust in, well, what is the reporting and investigative approach for things that matter in my life?
ARIA:
Absolutely. And I love that if you are in a rural town in Tennessee, the New York Times might not be covering stuff that’s relevant to you, but you want to know what’s happening in your town, and that can build that local community trust and fabric, which is so important. Alright, the second finalist is CalMatters.
REID:
So CalMatters has taken this great idea of saying, “Hey, look, we’ve got these government institutions, which too often run bureaucratically, slowly, opaquely, et cetera, and trust gets lost in government because of that.” And by the way, we want it to be more transparent. We want to shine a spotlight. So, let’s do investigations, let’s have dialogue, let’s state clearly what’s working and what’s not working. And it’s not just what’s not working, but having that visibility, which then creates increased accountability. And that’s one of the things where people say, “Ah, you are actually providing me services, and actually you’re providing me services even better than you were before.”
ARIA:
Yep. Third finalist is Recidiviz.
REID:
So for Recidiviz, the area is how we apply data science to parole and other kinds of decisions much better. And you say, “Oh, is that just something for people who’ve been incarcerated?” No, it’s for them too, but what people don’t realize is that if you do this well, it works much better for society. You get people out, and you’re not paying the public bill—the money that comes out of your pocket to pay for them to be in prison. If you can make effective paroling decisions, you want that, because it’s good for you, not just good for the person. And then integrating these people into communities and everything else. But it’s basically data science—which is our modern thing—applied to much better decision-making.
ARIA:
I think if most Americans knew that there were 70,000 people sitting in prisons and jails who have paid their debt to society, who should be out, and we’re paying money for them to stay in jail just because some paperwork didn’t go to the right place, they’d be very much in favor of getting people out of prisons and jails and back into their communities. So the fourth finalist is Results for America.
REID:
So one of the key things that really works in business is that we share information. We say, “Hey, what’s the best way to do this?” This is actually one of the things that makes Silicon Valley work—it’s an intensely learning network. So, Results for America says, “Hey, let’s have local governments that are doing things, experimenting, trying things, share them with other local governments and say, here’s a really good way of solving this problem.” It might be sanitation or trash on the streets. It might be traffic. It might be policies in schools. It might be zoning. Anything. Here are things that worked for us—they might work for you. And of course, sharing that information means that it’s a very cheap way of increasing the quality of local government, the quality of services.
ARIA:
If we’ve already solved the problem, let’s learn from that and solve it in our own local community. The final finalist is Transcend, working in public education.
REID:
So the basic idea in Transcend—which is, of course, a very good thing—is bringing the communities in much more, to partner and work on: how do we have better schools? How do we have better outcomes from the schools? What are the kinds of things we might experiment on? “Oh, there’s this piece of information over here. Have we tried this? Maybe we should try this.” Because that sense of co-ownership is actually one of the things that is part of how you get trust in institutions. Part of how you go, “Oh, well, since I have some participation and voice, I can help make it better.” And I think that’s one of the things that Transcend can do so well.
ARIA:
Thank you so much for that information about the five organizations. I think what’s critical here is all five of these organizations are worthy of trust and worthy of support. And so please, everyone who’s listening, go check them out. We want as much support as possible for all five organizations. And then we will be announcing a winner in the spring.
ARIA:
Thank you so much, Reid.