Con Them With Kindness – “Longtermism” and “Effective Altruism” Are the New Faces of Transhumanism
The FTX crypto scandal has unmasked the hypocrisy that drives our “philanthropic” elites
Futurists don’t just predict coming events. They also seek to shape them. Each prediction is framed to bring about change in a certain direction, whether by bold promises or doomsday prophecies.
For instance, when you hear warnings that artificial intelligence could outsmart humankind and destroy us all—or more precisely, if you believe those claims—there is no choice but to rethink your long-term strategy. Maybe you smash the machines preemptively. Or maybe you teach the AI to be nice and then jam a Neuralink trode in your brain to keep up with the conversation.
To the extent our immediate decisions are guided by these imagined futures, we’re all unwitting tools of the futurist.
After the spectacular crash of the FTX cryptocurrency exchange, the twin philosophies of “effective altruism” and “longtermism” have invaded the public consciousness. Like an infestation of eusocial termites, once you notice the first effective altruist crawling around the house, you start to see them everywhere.
Effective altruism is an egghead academic movement focused on helping large numbers of people, or perhaps all conscious entities, on a global scale. One popular proposal is to accumulate as much money as possible, then give it away to charity. These strategies often rely on the sorts of elaborate calculations and convoluted ethical frameworks that only “experts” could concoct.
Longtermism takes this do-gooder ball and runs with it into the distant future. Its proponents imagine how our altruistic actions today might benefit all the conscious minds that will eventually come to exist. In theory, that includes untold trillions of humans living in space, as well as mind uploads and AI bots living in vast digital simulations.
You may have problems today. But if we’re gonna be serious about utilitarianism, the highest moral priority belongs to all the cyborg space invaders yet to be. Sorry, but at the end of the day, there are more of them than you.
—
In practice, these philosophies seem to involve a lot of virtue-signaling and spending of other people’s money. The fat, slovenly FTX frontman Sam Bankman-Fried deployed these selfless ideologies as a smokescreen for his crypto Ponzi scheme, through which he defrauded investors of billions. Bankman-Fried assured his clients, and the world, that he’d use his gargantuan profits to help save humanity from the next pandemic or nuclear war—or both.
Instead, he blew all the cash on mansions and other luxuries, wasted his time on video games, wrecked his brain with exotic cognitive enhancement drugs, and apparently hosted weird orgies of the repulsive nouveau riche. On the bright side, his hilarious scandal did shed light on the intellectual movements that spawned him, unmasking the delusional arrogance that underpins effective altruism, longtermism, and most elite theorizing.
As I mentioned, both philosophic schools are utilitarian. Advocates seek to maximize happiness for the greatest number of people—or in more universal versions, the greatest number of sentient beings—including all animals and future digital minds. Similar to the ancient belief that the gods require blood to make crops grow, however, they admit sacrifices must be made. As the New York Times writer Walter Duranty said of communist collectivization in 1933, “To put it brutally—you can’t make an omelet without breaking eggs.”
As a hypothetical example, if you wanted to stop a deadly pandemic, then for the good of all humankind you’d have to lock vaccine-refusers in their homes. If they kept sneaking out, you’d send cops to beat the living hell out of them or maybe lock them in quarantine camps. When push comes to forced jabs, kill ’em all with kindness.
Of greater interest, both philosophies also overlap with and derive from transhumanism—that infamous techno-cult whose members believe humans shall be as gods by merging with machines. Oxford University was the crucible of effective altruism and longtermism, and it was there that the well-paid philosopher and avowed transhumanist Nick Bostrom had a decisive impact on the movement.
Bostrom is best known for warning that artificial intelligence is an “existential risk” to humanity in his 2014 book Superintelligence: Paths, Dangers, Strategies. His ideas were enormously influential on the world’s richest transhumanist, Elon Musk, who suggests we implant brain-computer interfaces to keep pace with machine intelligence.
Bostrom is also the founding director of the Future of Humanity Institute at Oxford, to which Musk donated £1 million. Futurists can always use more money to save us from the futures they’re predicting.
This elite transition from techno-optimism to whitewashed corporate altruism is readily apparent in the global agendas set at the World Economic Forum. In 2016, WEF chairman Klaus Schwab published The Fourth Industrial Revolution, in which he announced the “merging of the physical, digital, and biological worlds.”
Four years later, his abhorrent tract The Great Reset identified the COVID-19 pandemic as an “unprecedented opportunity” to accelerate this technological revolution. Naturally, normal people across the planet were furious. So, in December of 2021, Schwab reassured everyone of the elites’ good intentions with a follow-up volume entitled The Great Narrative.
“This emerging narrative is most helpful because it shows that this capacity to care,” he and his co-author wrote, “can be harnessed for social good.”
To put it another way, your bleeding heart can be used like a battery to power the Machine.
—
Longtermism and effective altruism first showed up on my radar last August, when Elon Musk quote-tweeted a plug for William MacAskill’s then forthcoming book What We Owe the Future. “It makes the case for longtermism,” the Oxford philosopher wrote, “the view that positively affecting the long-run future is a key moral priority of our time.” Incidentally, MacAskill was also a mentor to Bankman-Fried and advised FTX on the mystical path of effective altruism.
“Worth reading,” Musk responded, breaking from his usual vagueness about his own worldview. “This is a close match for my philosophy.”
In a flash, Musk’s latest TED interview made a lot more sense. “SpaceX, Tesla, Neuralink, and the Boring Company are philanthropy,” he insisted.
Tesla is accelerating sustainable energy. This is a love—philanthropy. SpaceX is trying to ensure the long-term survival of humanity with a multi-planet species. That is love of humanity. You know, Neuralink is trying to help solve brain injuries and existential risk with AI. Love of humanity.
Why did he buy Twitter? “I think civilizational risk is decreased the more we can increase the trust of Twitter as a public platform,” he answered.
Soon after Musk’s “personal philosophy” tweet, Twitter’s algorithms introduced me to the work of Émile Torres. This person may stand on the far left end of the political spectrum, but I’ll readily admit Torres is an excellent writer. The philosopher’s articles in Aeon and Salon expose the basic ideas of longtermism, which Torres calls “the world’s most dangerous secular credo.” As a former longtermist who left the movement in disgust, Torres would know.
This futurist crazy train departs from a quiet station where everyone works to increase overall happiness and decrease suffering, always taking the long view. The farther we chug along, though—roaring past genocidal bioweapons and robotic terminators, along with the proposed solutions to these extinction-level events—the farther the quest for well-being goes off the rails.
“Longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about,” Torres wrote in Aeon. “The crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it.”
Torres is so spot on here, I’m inclined to ignore those pesky “they/them” pronouns in the Twitter bio. Such day-to-day squabbles seem petty in the face of a dysfunctional techno-dystopia.
—
As we’ve learned from the recent pandemic freakout, the “cure” is often worse than the disease. It’s the difference between coughing into your sleeve and wearing three masks to go get your tenth booster. Longtermism and effective altruism are comical embodiments of such extremes. They start with a few mosquito nets for starving kids in Africa and end with all-seeing smart dust gathering under your bed.
To illustrate, Nick Bostrom has argued for an inescapable global surveillance system, in which every citizen wears a “freedom tag” that feeds into “patriot monitoring stations” so governments can identify potential extinction-level threats, such as illicit artificial superintelligence or basement-made gray goo nanobot swarms. And he was just getting started.
In Superintelligence, Bostrom floated a curious long-term plan to raise average IQ and reduce human stupidity: create an army of embryos in vitro, screen their genetic codes, select the supreme specimens, implant them in actual women or gestate them en masse in artificial wombs, and then huck the rejects into biowaste bins. He never seems to worry about spawning a brood of emotionless sociopaths, but then, maybe that’s the point.
You want omelets? Better get to crackin’ some eggs.
We already see similar eugenics programs underway in the intelligence-obsessed pronatalist movement. Sam Altman, the gay tech magnate who co-founded OpenAI with Elon Musk, has invested in the company Genomic Prediction, which screens out unwanted embryos, and another called Conception. According to a fascinating Business Insider exposé, the latter startup “plans to grow viable human eggs out of stem cells and could allow two biological males to reproduce.”
Along similar lines, Vitalik Buterin (the co-founder of Ethereum cryptocurrency, a notable FTX booster, and yet another node in the effective altruism movement) recently suggested that babies should be gestated in plastic bio-bags to safeguard women’s rights:
Disparities in economic success between men and women are far larger once marriage+children enter the picture. Synthetic wombs would remove the high burden of pregnancy, significantly reducing inequality.
On a long enough timeline—and with enough capital for uninterrupted navel-gazing—longtermists and effective altruists might cook up the ultimate suicidal scheme to eliminate pain. They already have a decent running start.
A few years back, William MacAskill argued “To truly end animal suffering, the most ethical choice is to kill wild predators (especially Cecil the lion).” Sounding like a suburban Buddhist having a bad acid trip, the scrawny vegetarian elaborated:
By killing predators, we can save the lives of the many prey animals like wildebeests, zebras, and buffalos in the local area that would otherwise be killed. … And there’s no reason for considering the lives of predators like lions to be more important than the lives of the prey.
It’s not hard to imagine this extermination campaign extending to hyper-aggressive human males. Or, with advances in stem cell-derived gametes, you could do away with males altogether. So long as we’re philosophizing, though, why not apply this standard to autistic philosophers? If they ever took over the world—say, through a global technological infrastructure—they’d cause far more suffering than any wild predator.
—
Much of MacAskill’s thinking, and effective altruism as a whole, was inherited from philosopher Peter Singer. In the early ’90s, Singer famously argued that infants—especially the disabled—are fair game for infanticide up to thirty days after birth. The premise was that this would reduce the suffering of both overburdened parents and their potentially retarded children. It’s just a matter of moral calculation.
Dismissing all regard for the sacred or profane, Singer’s philosophy milks human kindness for all it’s worth, however sour that milk may be. If we’re not killing deformed babies and useless old people, for their own good, we should be freeing the farm animals.
“The only justifiable stopping place for the expansion of altruism is the point at which all those whose welfare can be affected by our actions are included within the circle of altruism,” he wrote in his 1981 book The Expanding Circle. “This means that all beings with the capacity to feel pleasure or pain should be included.”
Today, as the Fourth Industrial Revolution unfolds across the planet, Singer has extended his altruistic extremism to artificial life as well. He recently explained this position on a Big Think episode:
If we created robots who are at our level, then I think we would have to give them, really, the same rights we have. There would be no justification for saying, “Ah yes, but we’re a biological creature and you’re a robot.” I don’t think that has anything to do with the moral status of the being.
To recap, newborns deserve less moral status than farm animals, while robots deserve human rights. You can’t stop progress!
Regarding the moral value of artificial minds, MacAskill makes similar arguments. In his 2021 paper “The case for strong longtermism,” co-authored with Hilary Greaves for the Global Priorities Institute, he estimates that “digital sentience” could expand the pool of future minds enormously, maxing out around 10^45—or a quattuordecillion—digital souls spread across the Milky Way.
If you weigh this overpopulated cybernetic future against our present-day needs and desires, that’s a whole lotta moral concern.
“Assuming that on average people have lives of significantly positive welfare,” MacAskill and Greaves write, “according to a total utilitarianism the existence of humanity is significantly better than its non-existence, at any given time. Combining this with the fact that both states are persistent, premature human extinction would be astronomically bad.”
The implication is horrific. If our moral purpose is to benefit the greatest number of possible beings, then it’s our moral duty to stave off extinction—whether by asteroid impact or malign artificial superintelligence—by any means necessary.
Taken to its logical conclusion, such moral reasoning would justify anything from starving out present-day populations in order to feed the Machine for future digital minds, to wiping out the huddled masses who are too stupid to understand how important this Machine really is.
After all, what are the 8 billion dumdums living today worth when weighed against the quattuordecillion digital souls who are yet to be born?
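For the spreadsheet-minded, here is the back-of-the-envelope division those figures imply. This is my arithmetic, not the paper’s, and it assumes the 10^45 upper estimate alongside roughly 8 billion people alive today:

```latex
% A sketch of the total-utilitarian ledger, not a calculation from the paper itself.
% Assumptions: ~10^45 potential future digital minds, ~8 x 10^9 humans alive today.
\[
\frac{N_{\text{future}}}{N_{\text{present}}}
  \approx \frac{10^{45}}{8 \times 10^{9}}
  \approx 1.25 \times 10^{35}
\]
% On this weighting, the entire living population amounts to roughly
% one part in 10^35 of the moral ledger: a rounding error.
```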
Do you wanna colonize the galaxy or not?
—
The market-rocking meltdown of FTX, coupled with Sam Bankman-Fried’s moralistic rhetoric, gives us some indication of where longtermism and effective altruism can take us as ruling philosophies. First off, we see how vapid all this corporate virtue-signaling really is. As the firestorm was building, Bankman-Fried was asked via text by a Vox interviewer—who is herself an effective altruist—if “the ethics stuff” is “mostly a front.” He responded:
yeah
i mean that’s not *all* of it
but it’s a lot
He went on to explain:
it’s what reputations are made of, to some extent
i feel bad for those who get fucked by it
by this dumb game we woke westerners play where we all say the right shiboleths and so everyone likes us
The second thing the FTX debacle shows us, by introducing longtermism and effective altruism into the public consciousness, is how pervasive this sort of ideology is among our tech, academic, and economic elites. “Woke” culture, Covidian madness, radical philanthropy—it’s all a new spin on an ancient scam:
“If you don’t obey the Powers That Be, you must be a selfish person. In fact, you’re hardly a ‘person’ at all.”
Finally, this scandal lays bare how easily duped our elites really are, and how stupid they believe us to be. FTX was hyped by everyone from Sequoia Capital and established Oxford professors to island-hoppers like Bill Clinton and Tony Blair. Even as the lies unravel before our eyes, the New York Times and the Wall Street Journal continue to spin the entire affair as a series of unfortunate business decisions. As of this writing, Bankman-Fried is still scheduled to speak at the NYT DealBook Summit alongside Janet Yellen, Mark Zuckerberg, and Volodymyr Zelensky.
The lunatics really are running the asylum, and should you dare to call out their madness, they’ll have you put in a straitjacket and injected with tranquilizers. It’s long past time to steal the keys and escape this nuthouse. And if there’s any spirit left in us, we’ll leave the asylum burning behind us.
FOLLOW — Twitter: @JOEBOTxyz | Gettr: @JOEBOTxyz
If you like the work, gimme some algo juice and smash that LIKE button.