All Trains Lead to Crazy Town: Why I am Not an Effective Altruist (or a Philosopher)

If you are reading this post then you almost certainly have already heard of Effective Altruism, or EA. The EA movement has become increasingly influential over the past decade, and it is currently getting a major publicity boost from Will MacAskill’s new book What We Owe the Future, which, among many other things, was featured in Time Magazine.

For people who have not heard of EA at all, a brief summary is that it’s a combination of development economics and working to prevent a Terminator-style Skynet from taking over the world. There is much more to it than those two things, but the basic idea is to take our moral intuitions and actually act on them to do the most good in the world. And it happens that many people’s moral intuitions imply that we should not only donate money to highly effective charities in the world’s poorest countries, but also worry about low-probability events that could end the human race. The thrust of “longtermism” is that we should care not only about people who live far away in terms of distance, but also about those who live far away in terms of time. There are a lot of potential future humans, so even super unlikely disasters that could ruin their lives or prevent them from being born are a big problem.

The fact that these conclusions probably strike you as a little crazy is not a coincidence. EAs are constantly pushing people to do things that seem crazy but are in fact consequences of moral principles that they agree to. For example, a number of EAs have literally donated their own kidneys to strangers in order to set up donation chains that lengthen or save many lives. That’s good! I haven’t donated a kidney myself, but my mother donated hers—to a friend, not a stranger. She’s not an EA and probably hasn’t ever heard of them; she’s just a very good person. But I give some credit to the EA movement for helping normalize kidney donation, which appears to be getting more common. Similarly, EAs have done a ton to push more money toward developing-country charities that have a huge impact on people’s lives, and away from stuff that doesn’t work or (more radically) charities that target people in richer places. When I argue that Americans should care as much about a stranger in Ghana as they do about a stranger in Kansas, people think that sounds kind of crazy. But a) it’s not and b) people find that notion less crazy than they used to. We are winning this argument, and the EAs deserve a lot of credit here.

My issue with EA is that its craziest implications are simply too crazy. One running theme in MacAskill’s PR tour for his new book is the idea of the train to crazy town. You agree to some moral principles and you start exploring the implications, and then the next thing you know you’re agreeing to something absurd, like the repugnant conclusion that a world with 10^1000 humans whose lives are barely worth living would be preferable to our current world.

[Image]
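For concreteness, here is the back-of-the-envelope arithmetic that drives the repugnant conclusion under simple total utilitarianism; the specific numbers are made up purely for illustration.

```latex
% Total-utilitarian bookkeeping behind the repugnant conclusion (illustrative numbers only).
% World A: roughly the current world, n people at high average welfare w.
% World Z: N people whose lives are barely worth living, each at tiny welfare epsilon > 0.
\[
V(A) = n\,w, \qquad V(Z) = N\,\varepsilon .
\]
% Z outranks A whenever N > nw/epsilon. With, say, n = 8 x 10^9, w = 100, epsilon = 0.01:
\[
N > \frac{n\,w}{\varepsilon} = \frac{(8\times 10^{9})(100)}{0.01} = 8\times 10^{13},
\]
% so a population of 10^{1000} clears the bar by an absurd margin.
```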

The specific longtermist conclusion that seems crazy is that there’s a moral imperative to care almost solely about hypothetical future humans, because there are far more of them than there are current humans. By extension, we should put a lot of effort into preventing tiny risks of human extinction far in the future. One response here is that we should be discounting these future events, and I agree with that. But it’s hard to come up with time-consistent discount rates that make moral sense and still leave current humans with any meaningful weight (a rough sketch of the bind follows below). Scott Alexander thinks that the train to crazy town is a problem with EA or with moral philosophy.*
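To spell out that discounting bind, here is a rough sketch; the population figures and the 0.99-per-generation rate are arbitrary placeholders, not anyone’s actual estimates.

```latex
% Illustrative only: constant (exponential) discounting leaves no comfortable middle ground.
% Weight the total welfare u_t of generation t by a per-generation discount factor delta in (0,1]:
\[
W \;=\; \sum_{t=0}^{\infty} \delta^{t}\, u_{t}.
\]
% delta = 1 (no discounting): if future generations could plausibly total 10^{16}+ people,
% even a one-in-a-million cut in extinction risk swamps anything we do for the roughly
% 10^{10} people alive now, and present people effectively drop out of the calculation.
% delta = 0.99: welfare 1,000 generations from now gets weight
\[
0.99^{1000} \approx 4\times 10^{-5},
\]
% so distant-future people count for essentially nothing, which is hard to defend as a
% moral claim rather than a financial convention. And exponential weights are the
% time-consistent choice: any other shape lets your ranking of two future outcomes flip
% simply because time passes.
```

Either way you land somewhere that feels wrong, which is part of why the ride is hard to get off.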

I think that diagnosis is wrong: the problem isn’t with moral philosophy in particular, it’s that all trains lead to crazy town. I have every impression that this is how philosophy works in general: you start from some premises that seem sensible and then you dig into them until either everything falls apart or your reasoning leads to things that seem nuts. My take on this is limited by a kind of self-fulfilling prophecy; I didn’t study philosophy in college, but that’s because the basic exposure I got as a college freshman made me think that everything just spirals into nonsense. There are many examples of this. The “Gettier problem” attacks the very definition of knowledge as justified true belief:

[Image: an illustration of the Gettier problem]
This is what philosophers actually believe

Another example comes from a conversation I had in graduate school, with a burned-out ninth-year philosophy PhD student who studied the reasons people do things. He summarized the debates in his field as “reasons are important because they’re reasons—get it?” He planned to drop out. It’s worth noting here that Michigan’s Philosophy department is among the very best in the world; it’s ranked #6, above Harvard and Stanford. Reasons Guy was at the center of the field, and felt like it was a ridiculous waste of time.

This problem recurs in topic after topic. It feels related to things that we know about the fundamental limitations of formal logic, starting with Gödel’s proof that any sufficiently powerful formal system is either inconsistent or incomplete. The Incompleteness Theorems were pretty cool to learn about but they didn’t exactly motivate me to want to study philosophy.

This clearly isn’t a novel idea—Itai Sher recently tweeted something that’s quite similar. But it’s pretty different from the notion that philosophers waste their time overthinking things that don’t matter. Instead, what’s going on is that if you drill down into any way of thinking about any important problem, you eventually reach a solid bedrock of nonsense.

[Image: Itai Sher’s tweet]

Why does it matter that philosophy leads to these crazy conclusions? I think it matters for the EA movement for two reasons. First, well, the conclusions are nuts. The fact that this is clearly true (everyone involved seems to agree on it) tells us that we should be skeptical of them. We don’t really have all these implications worked out fully. We could be totally wrong about them. We should remain open to the possibility that we are running into the limits of the logical systems we are trying to apply here, and cautious about promoting conclusions that don’t pass the smell test.

Second, they might undermine the real, huge successes of the EA movement. Practically speaking, the main effect of EA has been to get a lot more money flowing toward charities like the Against Malaria Foundation that save children’s lives. It seems clearly correct that we should keep that going. The arguments that lead to this conclusion might also lead to crazy town, but they aren’t there yet.

It seems as though MacAskill agrees with me on the practical upshot of this, which is to not actually be an effective altruist:

[Image: screenshot of a quote from MacAskill]

What should we do instead? I think MacAskill is exactly right, and that his suggestion amounts to basically saying we should all act like applied economists. Think at the margin, and figure out which changes could improve things. Do a little better, and don’t feel the need to reason all the way to crazy town.

Full disclosure: I plan to submit this post to this contest for essays criticizing EA, which was part of what originally motivated me to think about why I disagree with the EA movement.

* You might assume that the repugnant conclusion is a specific failing of utilitarianism, but MacAskill claims it’s not and I trust that he’s done his homework here.

2 thoughts on “All Trains Lead to Crazy Town: Why I am Not an Effective Altruist (or a Philosopher)”

  1. Thanks for sharing! I’m a huge podcast listener so I’ll check it out.

    To be clear, I’m not the originator of the “crazy town” terminology – MacAskill uses it himself in the quote I screenshotted. I wouldn’t say that all EA ideas are in “crazy town”, but they often put you on board a train that leads there.
