How Not to Attack Effective Altruism with the Sierra Club
A friend posted this meme the other day:
I think we can add a third arm in there: Emma Morris criticizing Effective Altruism (EA).
This is not the first time a blogger has formed a warped misunderstanding of EA and then attempted to discredit it, and it probably won't be the last. Sadly, that is what we have here with Morris’ The Trouble with Algorithmic Ethics: https://www.sierraclub.org/sierra/trouble-algorithmic-ethics-effective-altruism.
Introduction to Effective Altruism
We will start with what Morris could not be bothered to do, and that is to learn what EA is about and how it works. Visiting just the homepage of EA’s website (https://www.effectivealtruism.org/), we see: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” OK, that sounds nice. But the important bit is a little further down, and it is how EA defines problems worth working on. To be an EA cause area, the cause must be important, neglected, and tractable.
Important means that solving the problem would do a great deal of good. Neglected means the majority of the world is ignoring the problem. And tractable simply means the problem is solvable, that real progress can be made on it. Should a cause area fail to sufficiently meet these three criteria, then it is not a cause area for EA. The EA forums are full of posts making the case that various problems are worth EA’s attention because the posters believe those problems meet all three criteria. Some meet only two of the three, but to a very large degree: in the case of wild animal suffering, for example, the problem is important and neglected, but its tractability is unknown. Full details of the EA framework are here: https://80000hours.org/articles/problem-framework/
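The framework above can be sketched as a toy scoring function. This is my illustrative sketch, not an official EA or 80,000 Hours tool: the 0–10 scores and the simple multiplication of the three factors are my assumptions (80,000 Hours uses a more careful rubric), but multiplying captures the key behavior, that a low score on any one factor drags the whole cause down.

```python
def priority(importance, neglectedness, tractability):
    """Toy combination of the three EA criteria.

    Multiplying means a cause scoring near zero on any one factor
    ranks low overall, which is why a well-funded problem like
    climate change can fall off the EA list despite being important.
    """
    return importance * neglectedness * tractability

# Hypothetical scores on a 0-10 scale, purely for illustration:
causes = {
    "malaria prevention":    priority(8, 6, 9),
    "climate change":        priority(9, 2, 7),  # important, not neglected
    "wild animal suffering": priority(7, 9, 3),  # tractability uncertain
}

ranked = sorted(causes, key=causes.get, reverse=True)
print(ranked)
```

With these made-up numbers, a cause that is merely well-funded (low neglectedness) drops below one with uncertain tractability, which mirrors the point made below about climate change.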
What gets lost on the casual reader (or one who doesn’t read at all) is that as a problem gains attention, money, and solutions, it falls outside the boundary of what counts as an EA problem. The problems EA focuses on change over time. No one in EA doubts the importance of, say, climate change, but given the amount of focus and money on climate change right now, it is debatable whether it is an EA problem: climate change no longer meets the neglectedness criterion. Should the world abandon climate change initiatives tomorrow, I have no doubt many in EA would take up the cause.
Now let’s break down the article
“Tech bros [apparently being in EA means you are a tech bro]… ideal system [EA’s system, that is] is one that allows them to keep making lots of money as long as they give some of it away.” - I would love a cited source on this. $5 says there isn’t one. Any talk of money in EA is in reference to donating it. But a good ad hominem is a fun way to start a misguided article (such as mine!).
“Oxford philosopher William MacAskill published a book-length argument for effective altruism, What We Owe the Future” - Almost. EA is divided between two main areas of thought: what we can do now to make the world a better place, and what we can do for the future. Those interested in what we can do now work in areas like cybersecurity, ending factory farming, biosecurity, and economics, to name a few. MacAskill’s book is his case for caring about future populations, called longtermism. So when Morris says “book-length argument for effective altruism,” she is forgetting (or doesn’t know about) the other half of EA. Leaving out that other half and implying EA is full of tech bros helps paint the reader a picture of what EA is like, so that those following EA are easier to tar and feather.
“Effective altruism doesn’t play well with most environmental ethics theories, in part because in the universe of effective altruism, only entities that can suffer matter. Trees, rivers, species—none of these are intrinsically valuable. Effective altruism distills all of ethics into an overriding variable: suffering. And that fatally oversimplifies the many ways in which the living world can be valuable.” - Understanding how a problem gets attention in EA renders this criticism null. Problems have to meet the three criteria of importance, tractability, and neglectedness. I don’t know enough about environmental ethics theories, but my guess is that they fail on neglectedness. If I am wrong, feel free to submit your proposal to the EA Forum: https://forum.effectivealtruism.org/. More on caring about other values later.
“MacAskill and another Oxford philosopher, Toby Ord, launched a group called Giving What We Can. The charity has helped donors direct money to ‘the world’s most effective charities.’ Their picks range widely,” - Ranged widely? I thought they were only interested in making money?
“But the thing about basing your ethic on a single principle is that you have to follow it to its logical conclusions—and some of those conclusions get a little weirder than a straightforward desire to prevent malaria or tackle climate change.” - Please name an ethical framework that does not contain issues. If you can, a Ph.D. and a Berggruen Prize (philosophy’s equivalent of the Nobel Peace Prize) await you. This obviously doesn’t excuse bad ethical theories, but pointing out a problem with a theory is hardly grounds for dismissing it outright.
“A major criticism of longtermism is that our ability to predict what actions taken in the present will improve well-being in the future is extremely poor.” - Morris is talking about the longtermism of MacAskill’s book. However, in that book, MacAskill addresses this very criticism. Anywhere you read about longtermism, you will read this criticism. Morris’ thoughts here are as old as the concept of longtermism. The criticism is so common, in fact, that MacAskill wrote his Ph.D. dissertation on decision-making under uncertainty. How to forecast the future is literally an area EA is interested in, precisely so that EA can make better predictions (https://80000hours.org/podcast/episodes/philip-tetlock-forecasting-research/). Neither MacAskill nor any other longtermist claims to have the answers like a religious leader or Sierra Club blog writer. They simply point out that this is important, neglected, and possibly tractable.
“So how do we ensure wild animal welfare? If we take EA seriously—if only the pain and pleasure of sentient creatures matters—then there is nothing wrong, in principle, with completely disassembling ecosystems to save prey from predators.” - Once again, a citation for this wild claim would be helpful, because in my 3+ years of following EA, I have yet to see anything like it. Wild animal suffering is a new topic in EA, and what, if anything, can be done about it is hotly debated. To suggest we have a solution (let alone a solution like disassembling ecosystems) is intellectual dishonesty or willful ignorance.
“Since he [MacAskill] believes that wild animals’ lives are plausibly ‘worse than nothing,’ he thinks that the serious decline in the abundance of animals is probably a good thing.” - Another attractive feature of EA is that no one person is the spokesperson for EA. EA is a collection of evolving ideas about how we can do better as humans to ensure a prosperous and healthy future. Just because MacAskill makes a claim does not mean it represents all of EA, nor that his claim is correct within the framework of EA. No one has a monopoly on the EA gospel. Furthermore, having a problem with one or even a few of a person’s claims hardly discredits a movement. You can take part in EA while disagreeing with what MacAskill says. Despite what Morris is trying so hard to portray in her article, EA is a collection of ideas, some conflicting with each other, and full of people trying to sort it out. There is no One Answer, One Solution, or One True Speaker of EA. If we have an issue with a claim, we should combat that claim. Not the person. Not the wider framework and all within it.
“More broadly, sentient individuals are not the only valuable things that exist. …This value goes beyond humans’ aesthetic appreciation of the more-than-human world.” - EA is concerned with suffering because suffering sucks. Suffering is likely the worst part of living. Wanting to reduce suffering does not imply nothing else matters. But what would you say is more important: funding the Vatican, the local opera house, or saving poverty-stricken children from preventable diseases? I am sorry if the Sierra Club does not make the list of EA concerns. It does not mean the Sierra Club is not doing good work. It means there is other work out there that could be helping more people or animals. And if the Sierra Club doesn’t make the EA list, then perhaps that is the conversation they should be having, instead of spreading falsehoods and strawman arguments about EA.
“The Economist found that effective altruists gave a whopping $600 million to charity in 2021. Insofar as these donors would have spent that money on private jets or NFTs if they hadn’t gotten into EA” - Once again, this displays a complete lack of knowledge of who makes up the EA movement. The statement I’m about to make will be the most non-controversial, brain-dead, boring sentence in my post: most people who agree with and practice EA to some degree cannot afford to purchase a jet, even if they wanted to. Morris starts the article talking about tech bros, then complains about two philosophers, and now ends by suggesting we can all purchase jets. If that is the criteria for being in EA, then I believe EA is a dead movement, as it comprises about zero individuals.
“Focusing on suffering alone ignores relationships between people, between species, between ourselves and place. It ignores the value of autonomy, the value of justice, the unfathomable complexity of an ecosystem.” - Which would mean jack shit if an asteroid annihilates Earth next year, or an engineered pathogen spreads and kills everyone, or AGI takes one of the many paths that render life obsolete, or nuclear winter from WWIII causes all life to die. Wanting to work on how humans can avoid destroying themselves, so that people like Morris can enjoy the trees and oceans, is a noble goal. I’m willing to bet all the money in my bank account that when polled, most folks practicing or influenced by EA would value autonomy, justice, and the natural ecosystem—but their concern is that someone has to be around to experience those things for those things to matter.
“Environmental philosopher Sandler says, … ‘The beauty and wonder of the world is that it isn’t reducible to a single parameter.’” - Cool. Go tell that to the kids in sub-Saharan Africa dying of malaria when all they needed was a $2 malaria net to keep mosquitos away.
Conclusion
If I’m allowed to make my own ignorant, speculative guess, I would say most donors to the Sierra Club are not interested in EA because EA does not align with the Sierra Club’s values of experiencing the beauty and wonder of camping in the woods. Morris’ article is written to keep donors from even considering moving on to greener pastures.
What Morris should write is a piece on how her values differ from and/or are better than EA’s. That is an excellent topic and worthy of conversation. Keep it pure to the philosophy and practicality of the topic. Then discuss these differences in reference to the Sierra Club’s work. No need to bash anyone.
Finally, a note on how to criticize EA. EA welcomes criticism within its own community. It is how they maintain a robust, evolving framework for evaluating problems. They even ran their own essay contest for the best criticisms of EA (https://forum.effectivealtruism.org/posts/8hvmvrgcxJJ2pYR4X/announcing-a-contest-ea-criticism-and-red-teaming). For anyone in the EA community, this is an excellent outlet to challenge norms. Did Morris submit an essay? Was there a forum post of her criticism? Perhaps a Reddit post on the EA subreddit? Did she even know EA held such a contest? Or was this article made to keep Sierra Club readers in the dark about EA, without any intention of providing meaningful feedback to make EA better?
For more thorough and less sassy answers to various questions of EA, please see https://www.effectivealtruism.org/faqs-criticism-objections.