Amidst finals week last quarter, The Phoenix’s editor in chief, Shubh Malde, spoke to Noah Birnbaum, co-president of UChicago Effective Altruism, about why we ought to think more about how to do good. Their conversation covered a range of topics, from short-termism and AI policy to shrimp welfare. 

Below is a transcript, edited for brevity, of select segments of the conversation. The full transcript will be published shortly.

Shubh Malde: What is Effective Altruism (EA)? 

Birnbaum: Effective Altruism is a practical and research community dedicated to figuring out how to do the most good. The general idea is that doing good is actually quite difficult, so we should probably spend a lot of time thinking about how to do the most good before we start acting. 

Malde: Can you talk me through the larger history of the movement? 

Birnbaum: Effective Altruism, depending on how you look at it, started about fifteen years ago, though there were some proto-communities online before then. It was inspired by certain philosophers, notably Peter Singer. He wrote Animal Liberation, which argues that we should care about animals, and a paper called “Famine, Affluence, and Morality,” which presents the drowning child thought experiment. The basic idea is that if we would save a drowning child right in front of us, we should also feel morally obligated to donate money to save lives elsewhere. 

The movement has since moved away from the idea that we are all “moral monsters” for not doing more, but it has kept the belief that we are under greater moral obligations than we might initially think… 

…a lot of early EA ideas revolved around long-termism—the notion that future people matter as much as present people and shouldn’t be discounted as heavily as we tend to discount them. They also focused on global health and development, finding that some interventions are significantly more effective than others. 

At some point, wealthy individuals started getting involved, which played a role in EA’s expansion. Dustin Moskovitz, the co-founder of Facebook and Asana, became a major donor through Open Philanthropy, one of EA’s largest grant-making organizations. Then there was Sam Bankman-Fried. His involvement was a big turning point for the movement because it caused a lot of controversy. 


Malde: What about EA’s approach to influencing policy? Given its focus on long-term thinking, how does it deal with political systems that are short-term biased? 

Birnbaum: Policymakers often discount the future too much—there’s a concept called the social discount rate, the rate at which future costs and benefits are weighted less heavily than present ones, which in effect says that future people matter less than present people. Some thinkers, like the philosopher Derek Parfit and the economist Tyler Cowen, have written about why heavy discounting leads to troubling conclusions. 
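To make the discounting point concrete, here is a minimal sketch of standard exponential discounting; the 3% rate is purely illustrative, not a figure from the conversation:

```python
def discount_weight(years: float, rate: float) -> float:
    """Weight given to welfare `years` in the future under a
    constant annual social discount rate `rate` (exponential discounting)."""
    return 1.0 / (1.0 + rate) ** years

# With an illustrative 3% annual rate, a benefit 100 years out is
# weighted at only about 5% of the same benefit today, and a benefit
# 500 years out at well under 0.01% -- which is why a constant social
# discount rate makes far-future people count for almost nothing.
for t in (0, 50, 100, 500):
    print(t, discount_weight(t, 0.03))
```

Even a modest-looking annual rate compounds into a near-total dismissal of the far future, which is the feature of policy analysis that long-termists object to.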

EA generally tries to encourage policymakers to take a more long-term view. A good example of this is AI regulation—most policymakers aren’t thinking far ahead about AI risks, but EA-affiliated organizations are trying to push the conversation in that direction. 

More broadly, EA advocates for cost-effectiveness and evidence-based policy. There’s always a risk of over-prioritizing certain causes, though, so some EA organizations take a “hits-based approach”—funding a range of plausible interventions rather than putting everything into one cause. 

Malde: AI has become a big topic within EA. Why is that? 

Birnbaum: A lot of it comes from the idea that we’re living in a pivotal moment. There’s a well-known series of essays by Holden Karnofsky called “The Most Important Century,” which argues that technological progress is accelerating, making this century uniquely consequential. 

EA also grew out of the rationalist community, which had early discussions about AI risk. Thinkers like Eliezer Yudkowsky, Robin Hanson, and Nick Bostrom debated whether artificial intelligence could have massive consequences for the future. The concern is that if AI surpasses human intelligence, it could become uncontrollable. Since EA focuses on neglected existential risks, AI safety became a key priority. 

Back in 2021, there were reportedly only about 200 AI safety researchers worldwide, which is extremely low given the potential risks. 


Malde: A more particular discussion in the EA community has been around shrimp welfare. What’s that about? 

Birnbaum: If you take the general premise that animal welfare is important, then you start looking at the numbers. Around 300 billion shrimp are killed each year, often in cruel ways—like being thrown onto ice and left to suffocate for twenty minutes. 

Because the numbers are so large, even small interventions could prevent massive amounts of suffering. Some estimates suggest that with just one dollar, you could prevent 500 hours of shrimp suffering. Given the scale, I think this is an area that deserves more funding, even though it’s still a niche within EA. 
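The back-of-the-envelope arithmetic behind that claim can be made explicit. The sketch below uses only the estimates quoted above (300 billion shrimp per year, roughly 20 minutes of suffering per ice-slurry death, and 500 suffering-hours averted per dollar); none of these are independent figures:

```python
# Estimates quoted in the conversation above (not independent data).
SHRIMP_KILLED_PER_YEAR = 300e9
ICE_SLURRY_DEATH_HOURS = 20 / 60   # ~20 minutes of suffocation
HOURS_AVERTED_PER_DOLLAR = 500

def suffering_hours_averted(dollars: float) -> float:
    """Suffering-hours averted per dollar, under the quoted estimate."""
    return dollars * HOURS_AVERTED_PER_DOLLAR

# Under these numbers, one dollar covers the slaughter-related
# suffering of roughly 500 / (1/3) = 1,500 shrimp.
shrimp_per_dollar = suffering_hours_averted(1) / ICE_SLURRY_DEATH_HOURS
print(round(shrimp_per_dollar))  # → 1500
```

The point of the exercise is scale: because the headcount is in the hundreds of billions, even a cheap per-animal intervention multiplies into enormous totals.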


Malde: Do you think EA can be separated from utilitarianism, or does it rely on it? 

Birnbaum: That’s a really interesting debate within the movement. While utilitarianism is definitely influential, I don’t think EA is strictly utilitarian. 

Most EAs follow something closer to “utilitarianism with deontic constraints”—a phrase Will MacAskill has used. That means we should generally aim to maximize welfare, but we also respect certain moral rules and constraints. For example, even if breaking the law in a certain way would theoretically maximize utility, we probably shouldn’t do it because the broader consequences are too uncertain. 

People within EA come from a range of moral perspectives. Some are classical utilitarians, while others are more pluralistic and incorporate ideas from deontology or virtue ethics. 

Malde: If someone wants to join the EA group at UChicago, what can they expect? 

Birnbaum: We have weekly socials and an introductory fellowship that covers core EA ideas. The fellowship walks through concepts like cause prioritization, long-termism, and critiques of EA, and it ends with a session on taking action. 

Next quarter, we’re also hosting a Midwest EA retreat, where students can meet professionals working in EA-related fields. If someone is interested, they can email me at dnbirnbaum@uchicago.edu or join our mailing list.
