Improving on Effective Altruism

The next three posts are about avoiding very particular ways of messing up when you’re trying to save the world. The next two posts are very specific to a subset of people in the effective altruism movement, and will seem obvious/weird/irrelevant to those who aren’t.

Let’s say you already believe effective altruism (EA) is bad at achieving its stated goals and you’d like to do better. If you don’t believe this yet, go do your own research instead of contacting me, such as by looking at the fifteen links above.

Let’s also say that you’ve spent a few hundred hours, mostly in meditation, working through the content in Parts I and II of this blog, such that you have a ton of willpower to throw around. And for good measure, let’s say you’re interested in saving as much of the world as you can, not just making some sort of difference. What might you do, then?

A fact I won’t justify is that changing effective altruism is off the table. People don’t change, and movements sure as hell don’t change when their power structures are that ossified.

One idea I see thrown around is to make your own movement of like-minded people, people who care about the high-variance interventions, like AI safety or anti-aging research. But I suspect this is bound to fail and turn into a social club just like EA. Maybe if you curate it carefully and only let in people who are similarly high-willpower and also care about the world? That might work, but it’d be hard to find people, for a couple of reasons.

First, it’d be hard to find people to join you, simply because not many people can sit still long enough to learn the skills in Parts I and II, and not many are interested in saving the world.

Second, and this is why I really don’t think you should try to start a movement: people who are actually going to have an impact are drawn to impact. GiveWell started out by actually doing work in charity evaluation, with the founders’ own funds, and the movement invited itself along afterwards. People aren’t going to trust you if you’re just talking about how to have an impact; you have to show them you can actually do things, so they can tell you’re not one of the scams that wants to eat their energy, and then build a movement to save the world from there.

Don’t do things to “prove yourself”. Fuck that. Instead, take the first steps to actually save the world: the things you’d be doing anyway, if you were all alone in wanting to save the world.

What are those things? You should try to figure this out for yourself, really. I’ll give my own thoughts, and you can evaluate them one by one, on their merits.

You can look to the decaying corpse of EA for ideas, too. I don’t have much information on the value of various lines of AI safety work, but it’s likely worth digging into. Yes, EA’s analysis of this area is poisoned, so ignore their thinking and do your own; it could be a worthwhile area anyway. The same goes for anti-aging work, which I’ve looked into much more, and which is very unlikely to work.

When you do this, it’s very important to be thorough, and in particular to ignore other people’s expected-value estimates of courses of action. People try to influence each other by having high estimates of things they approve of, so throw everything out the window and start from bare data and your own instincts. Never just copy part of someone else’s analysis. Most people are trying to control you through your models of the world, especially on higher levels like, “something like EA has some chance at saving the world”.

As for what you can do personally? You could do some sort of startup intended to make you a billionaire, and go from there. Fund anything that seems promising, while staying skeptical and remembering that everyone’s “expectation values” are social bullshit or fake hope. But you still can’t save the whole world this way, and if you decide to go for less than saving the whole world, you probably won’t solve any long-term systemic issues if you’re working within existing power structures.

Maybe gather similar people, once you’ve done enough that they come on their own. Beware that most will not be willing to do real work. Especially beware if your model is, “I’ll enable other people and they will save the world”. How would they do it, if you don’t know how to?

Yeah, it’s disappointing. I don’t have a reliable plan to save the world. So I’ll keep working on my own bottlenecks, and exploring.

Next post: Improving on Effective Altruism II
