
Our Post-EA World

Recently I've found myself making not-very-funny jokes about EA. 

A few weeks ago, I moved, and two of my friends came over to help me unpack books. While doing so, we found a collection of the EA classics: Doing Good Better, 80,000 Hours, an advance copy of What We Owe the Future that I'd accidentally stolen from a friend. Maybe The Precipice, too.

My friend asked, "Do you want these in the living room, or on the shelf hidden away upstairs?"

"Let's put them upstairs," I said. "I don't need the EA propaganda machine in my living room."

Around the same time, I wrote about “Big EA”. Someone asked me what it meant. I was using it as a tongue-in-cheek reference to the Berkeley/Oxford EA communities and the grantmaking institutions and non-profits around them (CEA, OpenPhil, Effective Ventures, etc.). But it was a fair question – the definition is not at all obvious.

Where did all this cynicism come from? 2022, of course.

Remember those times, the summer of love that we’re all still slightly hungover from? In those halcyon, just-post-COVID days, Effective Altruism was everywhere. Suddenly, it seemed that everyone I knew had been EA all along. A surprising number of my friends altered their life plans to go work for EA organizations or do EA fellowships. The movement had gone mainstream: in the pages of the New Yorker, in Time magazine, all funded by ZIRP money. What We Owe The Future was on display in every bookstore. Nonprofits barely took calls from mere deca-millionaires. Ivy League students uprooted their lives to move into group houses in Berkeley, dropping out of college to start non-profits.

I, a perennial EA-adjacent, wanted to know what was up. I'd always thought EA was onto something with the malaria nets and cash transfers. But I wasn’t a longtermist, or worried about AI risk. I wondered if maybe I should be. What was I missing?

When I asked my very smart friends why they were so worried about everything, I overwhelmingly kept hearing the same answer. "I sat down and thought really hard about it, and I realized I should be,” they said.

Every time I heard this, I wanted to beat the speaker over the head with a paperclip plushie. What?! Where was the second-order reasoning, here? What was the probability that everyone would come up with the same conclusion, through the same process? Shouldn't we be very suspicious of this on the meta-epistemics alone?

That phrase also implied that if I tried enough, I would also reach that conclusion! Clearly, I didn’t agree with the EA mainstream because I wasn’t trying hard enough or, worse, smart enough. As if the process of deciding how to do the most good in the world was as simple as some sort of mathematical proof!

To clarify, I don’t think my friends ever actually meant to imply that. They were just using an idiom common in the culture. (For a similar, but not exact, example of this line of thinking, consider this comment on the EA Forum. I’m not the only one who has been disconcerted by this turn of phrase.)

Nonetheless, the choice of words reflects that culture. And to me, something about this EA methodology stank. I won’t do a good job of describing what here – it’s not my main focus – but if I had to hand-wave at something, I’d say that I didn't like the assumption that thinking hard enough was enough to answer a difficult moral question, especially when all the people doing the thinking came from roughly the same elite, analytical Euro-American background. Reasoning depends on its axioms, and it felt to me that longtermist EA had limited awareness of what axioms it was depending on, because they were so commonly shared, so much in the groundwater.

I don’t want to emphasize this as much as more mainstream critics do. I mostly think it's an issue that, for instance, few analytical Euro-American types have lived in a country with a sustained, high growth rate. Few have had neighbours in grinding poverty, or shared a roof with someone who supports a family on $500 a month. My complaints here are not "social justice" ones. I'm a fan of elite Euro-American analytical cultures, and EA’s focus on analysis is its relative strength, compared to other social movements. Sometimes EA just likes to take it too far.

There still was clearly something good here, and I wanted to find it. But it troubled me that when I tried to sift through and figure out what parts of EA I did like, I struggled to find anything. In 2022 EA, it felt sometimes like the only important thing was what other people thought of you. Which surely couldn’t be right.

Of course, every community has status dynamics. One thing I find it helpful to do, when debugging a community, is to ask myself, "How can an outsider come in and quickly become well-respected?" From this you find what it is that the community values, and what it is really about. In sports, it’s winning. In finance, it’s making money. In Silicon Valley, it’s developing new technology.

EA in 2022 felt so insular and circular, I couldn’t pin down what it valued. Working on AI alignment or community-building around AI alignment made you cool, I could tell – but everyone freely admitted that the field was in its early stages and that it was hard to assess who was really being successful in it. Being a successful community-builder worked too – but that was even more self-referential. Being an OpenPhil grantmaker definitely did it too, but that was because they funded all the projects that everyone else was working on. On what basis did they make their decisions? Surely not just vibes?

I wasn’t the only one who felt this way. It was systemic. Everybody thought they weren’t cool enough, there was an inner ring that they weren’t in. Even Dustin Moskovitz, the person who single-handedly funds half of EA, was told that he wasn’t a ‘core EA’!

How did a group of people dedicated to doing the most good in the world end up like this? And why were so many people — including me — drawn to it?


If you're reading this, maybe you were there for it too. You know how it ended.

EA has become fun to kick in the last year, now that it's a wounded dog. Sometimes I wonder how people manage to find this much vitriol for it. Have you met the EAs? They’re such nice people!

Maybe people felt that EA played motte-and-bailey with malaria nets and longtermism. Maybe it’s that there are few things as harsh as the contempt we have for those who have shone brighter than us and then fallen from grace. It probably doesn't help that the implicit message of EA is an unforgiving one: "You aren't altruistic enough. You are not talented enough. You are not good enough, and you are not doing enough to save the world."

Everybody who seriously considers the effective altruist cause has to grapple with this, and face the unending amount of suffering in the world. You can look at it and despair, but that way madness lies.

Instead, the EAs grapple with it, and try to figure out what is enough – and what they can do. And so at the core of effective altruism is a social movement focused on what you can do, today, to make the world better. It starts with the malaria nets. You, yes, you, can donate $3,000 and save a life. Right now! On the internet! All you have to do is click here – it’s not a scam, I promise. From the comfort of your living room, today, you can statistically prevent an African child from dying!

It’s okay. I haven’t given the Against Malaria Foundation my money, either. Maybe you’re an adult with a car payment and a mortgage and children to feed. In our world, your property is yours, and you don't owe it to someone you have never met. The EAs are quite alright with that.

But remember: children and teenagers – university students, even – are more ideologically pure, kinder than us, and raised in a world where they have little say in what happens. To realize that there is a path to doing good that is so simple yet meaningful? That is a powerful thing.

Compare it to two other social movements that young people today are drawn to: the environmental movement and social justice causes. To me, these activist movements focus overwhelmingly on fighting the system.

Imagine the climate change movement. What do you think of? I see Greta Thunberg shouting in the United Nations about how the older generations are ruining the world for those of her age. Or people gluing themselves to subway trains, shutting down key infrastructure because they don’t know how else to amplify their voice.

The means for action that the environmental movement gives its adherents are nonsensical. I cringe when I see people separating recycling into streams which are largely dumped anyway. It breaks my heart when people decide not to have kids, because they worry about its impact on the environment.

It's imbued with this sense of hopelessness, the idea that if we plead to our parents or teachers enough then the adults will step in and fix the problem. But a powerful person doesn’t protest, or sort their recycling and hope for a gold star. A powerful person plans, prepares and acts.

The central conceit of EA – the part of it that, for some reason, drives people nuts – is that it dares to ask: what if there is no system oppressing you, just thoughtlessness? What if the problems in the world come not from villains, but from an uncaring void that simply doesn't notice? What if the way to fix the problems in the world is not to protest and rail against them in the hope that someone else will act, but to acquire the technical, political and operational skills you need to solve them yourself, and then actually go and solve them?

The best projects that I have seen come out of EA are inspiring examples of this mindset. Consider Asterisk, one of the best intellectual magazines in print today. Or Lead Exposure Elimination Project, a nonprofit that has been called one of Founders Pledge’s “most cost-effective charities”. And, unless you’ve been living under a rock, you’ve heard of Anthropic, one of the current darlings of the tech world, a product of the EA attempt to solve AI alignment.

(As an aside, this is also why I find the apocalypse cult of AI doom so repellant, as exemplified by Pause AI or MIRI’s death with dignity strategy. It's a natural conclusion of some EA lines of thinking, but it is the most nihilistic one.)

For all of its problems, 2022 EA took this to an extreme. Old EA focused on earning-to-give and buying malaria nets. 2022 EA had learned about the limits of what money could buy, and shifted its focus. New EA was about the importance of being agentic.

In a world that has been shaped by our elders, where technological progress has stagnated for fifty years and our lives seem overly shaped by faceless bureaucracy, where most young people can’t buy a house and we’ve given up on putting humans on Mars, it's easy to lose hope that we can change anything. Many of us have; I see it often when talking to non-EA friends.

EA is not the first community to think that small groups of motivated people can do great things, but it is one of the first to apply it to the idea of doing good, and the first to send missionaries throughout the West's elite institutions to proselytize it widely. Paired with its penchant for learning deeply about cause areas, and applying basic mathematical analysis to its work, it's a potent force for positive change.

Its ideas are sometimes crazy. I can’t quite believe that shrimp welfare is important, despite some well-made arguments in its favour. But you need some level of craziness to challenge the status quo, and the world could use more of this brand of crazy. EA’s anti-authority nature can drive people up the wall – I know it has driven me up the wall. But it is the natural outgrowth of a culture that thinks the world could be so much better than it currently is, if we tried. 


So that’s EA, I guess. The bad and the good. What now?

I’ve come to believe that EA is at its best as an incubator of ideas. At its best, it gives people the funding, framework and confidence to start odd ideas that otherwise wouldn't happen. EA should still do this! 

But EA funding is a double-edged sword. It's granted to people who otherwise wouldn’t get funding, for ideas that seem crazy when proposed. This makes those organizations, and their founders and employees, very vulnerable to EA groupthink and its status dynamics, because they are completely dependent on it. And this makes those organizations more likely to be focused on cool EA ideas, and less focused on what is actually good for the world.

Like most movements, EA is at its worst when it is overly focused on itself. In 2022, EA was so all-powerful and rich in funding that it seemed it would never need to interact with the outside world again. This was the part of Big EA that I found so dangerous: the cult that drew in talented young people and sent them to work on problems of dubious efficacy, with complete disdain for any received wisdom from the rest of the world. It seemed to have reached escape velocity, just before it came crashing back down to earth.

If I could change one thing about EA, I would change a standard part of the grant-making process. I would ask every grantee how it expects to eventually raise money from outside funders. (This should also help with the Girardian terror and the status dynamics. The mimetic pressure will come off when not everybody in EA is trying to raise from the same set of organizations and hiring from the same talent pool. Is it surprising to you that the EA organizations I like most, and find most effective, are ones perched on the border of EA and some other subculture?)

Of course, there’s no reason EA should change itself, and it probably won’t. But I don’t worry too much about it. More and more, I find when I'm describing someone I think will do great things, I describe them as “post-EA.” 

Maybe there was a time when EA was the best subculture around for young people who wanted to improve the world. Many of those people now are very familiar with EA, wary of it and yet inspired by it. Slowly, I see many of the best lessons from EA taken up by other movements. I see glimmers of it in progress studies, the abundance movement and even in today’s Silicon Valley, where the cool factor has shifted from enterprise SaaS to AI and hard tech. (While those movements are great, they don’t seem to have community bonds as strong as EA's. I don't know why. I worry sometimes that they won’t work as well.)

For all of its problems, EA taught us how to take the hardest problems in the world and work on them, that it is worthwhile and possible to do so. It brought people together and set them on one of the hardest and most intractable problems in the world: suffering. In doing so, it gave us the methods and motivation to attack many other hard problems: regulatory capture, political polarization, climate change, economic stagnation. Whatever else it does, it is important and good for that reason. It's why I’m glad to live in the world that EA has made.


Thanks to Eric Neyman and Ross R for extensive feedback and Holden L, Rebecca G and Aditya G for discussion. Thanks to Rachel B for discussing an early version of these ideas with me.

Tanner Greer shared several of his insights into EA in a small group discussion that I was a part of almost a year ago. I've tried not to repeat them, but they've been an influence. In particular, he pointed out to me that EA is an ideology largely marketed towards elites.