Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’
Last updated: 2022-12-04 Sunday
“Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’” is a well-thought-out piece written by Timnit Gebru, which can be found at wired.com. Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Some quotes from the article:
This philosophy—supported by tech figures like Sam Bankman-Fried—fuels the AI research agenda, creating a harmful system in the name of saving humanity
[Effective altruism (EA)] is currently being scrutinized due to its association with Sam Bankman-Fried’s crypto scandal, but less has been written about how the ideology is now driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of “AI safety.”
And “evidence and reason” have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse where an artificially generally intelligent being (AGI) created by humans exterminates us. To prevent this apocalypse, EA’s career advice center, 80,000 hours, lists “AI safety technical research” and “shaping future governance of AI” as the top two recommended careers for EAs to go into, and the billionaire EA class funds initiatives attempting to stop an AGI apocalypse. According to EAs, AGI is likely inevitable, and their goal is thus to make it beneficial to humanity: akin to creating a benevolent god rather than a devil.
Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Muskovitz, and Sam Bankman-Fried[…]
This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.
Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now.
We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites.
What do I think about it?
The effective altruism movement is an incredibly dangerous group with extreme amounts of money behind it. The movement is dominated by young, white, male, upper-middle-class people. Effective altruists and their more extreme cousins, the ‘longtermists’, share a core idea: before committing resources to a cause, you should count all the potential lives you could save, looking extremely far into the future. In a way that makes sense: putting money into effective vaccination programs saves millions of lives and is extremely cost-effective, so I like the idea. But consider who is sponsoring this movement. In practice, proponents of this philosophy just want to make a lot of money right now (and not pay taxes), on the premise that they can later put that money into saving potential lives in the far future. You can see why this appeals to billionaires. These billionaires say they donate a lot of money, but it all goes into effective altruism organizations.
Much of the fearmongering about ‘AIs that are going to kill us in the future’ comes from this movement, and it effectively prevents us from talking about the real harms happening right now to actual people. But those people do not seem to matter to effective altruists, who reason that their money will save billions of hypothetical people later on.
The biggest problems of our age are climate change, monopolies, and the unfair treatment of entire groups. Billionaires and their power are shaping or outright causing most of these. Every billionaire is a policy failure.