
“What would your good be doing if there were no evil, and what would the earth look like if shadows disappeared from it? After all, shadows are cast by objects and people. There is the shadow of my sword. But there are also shadows of trees and living creatures. Would you like to denude the earth of all the trees and all the living beings in order to satisfy your fantasy of rejoicing in the naked light? You are stupid.”
– Mikhail Bulgakov (c. 1930s)
In the 21st century, technological “progress” finds us elaborating on the old stories. One particularly gifted storyteller is Yuval Noah Harari, an Israeli historian at Hebrew University who gives provocative lectures on techno-dystopia. His haters say he’s a gay emissary of Satan. That’s because Harari looks like a dark elf and talks like a snotty demon. Be it Hindu rites or Jewish commandments, he sneers at traditional religion. But he also argues that religion’s modern offshoots—the secular gods of fascism, communism, and liberal democracy—are all as good as dead.
In fact, Harari is so put off by the old gods, he fears what will happen when they’re replaced with digital gods, or when tech oligarchs make god-kings of themselves. It will be a pharaonic era of AI wizardry, priestly wireheads, and robotic slaves—a world ruled by cyborg billionaires like Elon Musk and his counterparts in Silicon Valley and China.
A quick glance at Musk’s portfolio reveals a poster boy for Homo deus—OpenAI and his new company xAI to build artificial general intelligence; Neuralink to fuse human brains to AI; Tesla “robots on wheels” to whisk cyborgs across town; CureVac to boost their immunity with mRNA shots; Optimus robots in their homes and workplaces; X as an “everything app” for business, socialization, and “truth”; Chinese investment to keep the wheels turning; Starlink satellites to bring it all online; and SpaceX fleets to shuttle survivors to Mars if the whole thing blows up.
People hate on Harari like he’s cooking up nanobots in his basement. But if you listen beyond the selectively edited sound bites, it’s obvious he doesn’t exactly promote radical technology. Certainly not like Musk does. It’s more that Harari turns these ideas over like a child inspecting a loaded handgun. You never know when he’s going to point it at you.
Harari’s writing tends toward neutral observation, with flashes of sincere aversion to the Machine. But in lectures and interviews, he often sounds like a true believer in the power of tech and the inevitability of its advancement. If the human race is destroyed in the process, well, that’s just history repeating itself with ever increasing volume. At his day job as a professor, I bet the guy’s a thrilling lecturer. But God help you if he corners you at a faculty party.
Fat cats can’t get enough of his stuff. Harari made a name for himself by regaling the global elite with stories of our evolutionary past and spooky tales about the future. He’s loved at TED and the World Economic Forum. He’s adored by CNN’s Anderson Cooper and quasi-intellectual New York Times readers. He has lectured at Google, where he called Silicon Valley the “New Jerusalem.” Mark Zuckerberg once summoned him for an intimate public discussion. Bill Gates and Barack Obama gave his books the highest praise. He’s the David Attenborough of transhuman doom.
We evolved from apes, Harari explains. Now we’re handing evolution over to the machines. It’s really a matter of mathematical intelligence. Our bodies are genetic algorithms. Our brains are neurological algorithms. Computers are electronic algorithms. Historically, the superior algorithms dominate the reproductive landscape. Evolutionary competition is a brutal game. As this plays out in human culture, Harari openly acknowledges the eugenic implications. It’s a familiar story.
After multiplying all the factors, he warns that with machine intelligence added to the stream of history, most humans will probably be subtracted. Once the races are divided up, “unenhanced humans” will be a meager remainder, becoming “the useless class.” And if our employers have no use for us, or if the soulless Machine is simply indifferent to us, we may be subtracted down to zero.
“Over the past half century there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness,” Harari writes. “However, we are on the brink of a momentous revolution. Humans are in danger of losing their economic value because intelligence is decoupling from consciousness.”
An autonomous taxi won’t feel anything, he notes, but it will make money more efficiently than a human driver. The same goes for bank clerks, travel agents, stock traders, cops, lawyers, teachers, doctors, soldiers—all slated to be replaced by robo-serfs. “And it is sobering to realize that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.”
Imagine a world run by logic bots who have no sentience or feeling. There’s no soul behind their camera eyes as they herd you into your luxury cage. It’s like a futuristic airport where the planes fly themselves and the microchipped luggage never gets lost—and by the way, you’re the luggage.
Musk is on the same page. “AI will make jobs kind of pointless,” he told a Shanghai audience in 2019. “Probably the last jobs that will remain will be writing AI software. Then eventually the AI will just write its own software.” One big difference, though, is that Musk is actively developing artificial intelligence and robot slaves, while Harari merely critiques them. “We’re gonna have to figure out this Neuralink situation,” Musk continued, staring at the stage lights. “Otherwise we will be left behind.”
For his own part, Harari is notoriously unsentimental about the Greater Replacement. Behind his dismal predictions, we see a mischievous downturned grin. It’s as if he takes pleasure in telling us we’re to become mulch for the Machine. He parrots the neuroscientific theory that free will is an illusion—that our “choices” are just the final burp of an electrochemical beer frothing deep in our brains. This bio-determinism lends a depressing air of inevitability and paralysis to his already bleak vision. “The sacred word ‘freedom’ turns out to be, just like ‘soul,’ a hollow term empty of any discernible meaning.”
For obvious reasons, populists detest Harari. Our friends and families are under threat, our livelihoods are to be rendered null and void, and this guy proposes “drugs” and “virtual reality” to ease us into extinction? When you see Harari sniff at populists like rotten shellfish, it’s clear the bad blood goes both ways.
The Davos crowd loves him, though. Whenever he speaks at the World Economic Forum, Harari gets the full headliner treatment. Big stage. Nice lighting. A huge LED wall. Sophisticated graphics cued by a front-of-house engineer.
“The twin revolutions of infotech and biotech,” he told the WEF in January of 2020, “are now giving politicians and business people the means to create heaven or hell.” On the video wall behind him are two rectangular neon gates—one blue, one red. Computer-generated angels blow trumpets on either side. “If we fail to conceptualize the new heaven quickly enough, we might be easily misled by naive utopias. If we fail to conceptualize the new hell quickly enough, we might find ourselves trapped there, with no way out.”
Harari’s warning fell on deaf ears. When the Great Germ Panic was unleashed a few months later, the infotech and biotech CEOs in the WEF audience, as well as the high-level politicians beside them, would shove their subjects through the red door to techno-hell. Lockdowns. Bio-surveillance. Nasal swabs. Contact-tracing apps. Quarantine camps. Police drones. Mandatory mRNA injections. Vaccine passports. Zoom calls.
Maybe it looked like heaven to their eyes. And if “free will is an illusion,” you really can’t blame them either way.
-
Hackable Animals
The twin specters of surveillance and total control haunt Harari’s work. Along with his occasional advocacy for gay rights and regular condemnation of animal cruelty, the digital invasion of privacy is one issue he takes a real stand on. Again and again, he warns against “digital dictatorship.” The major problem, he believes, is one-way transparency. When governments and corporations have access to your digital behavior coupled with your biological profile, they can “know you better than you know yourself.” From there, you are putty in their hands, or a lump of clay beneath their feet.
As an illustration, take a look at your browsing history. Notice all the entries you’ve long forgotten. Consider the insights these data points give into your unconscious psyche. You probably forgot most of what you said or looked at online yesterday, let alone years ago, but the Internet never forgets. Neither do the data centers or the AIs crawling over them. Neither do the humans who command these machines.
Once a person has divulged enough of their innermost self, Harari argues, “then you can control this person, manipulate them, and make decisions for them. And we are getting very close to the point when Facebook and Google and the Chinese government know people far better than these people know themselves.” Now that Musk owns Twitter and has Tesla machine-learning-mobiles on every roadway—with plans to put an Optimus “buddy robot” in every home—we can add him to this list, right above his Chinese partners and investors.
All this surveillance technology has deep religious resonance. Reading the Old Testament book of Job, we encounter Satan as the accuser who records human sins and delivers them to Yahweh for judgment. In the original Hebrew, the term Ha-Satan translates to “the adversary, the accuser, the opposer.”
In the world described by Harari, computerized surveillance systems have replaced Satan, while corporate executives and government officials are assuming the role of God. “Already today many of us give up our privacy and our individuality by conducting much of our lives online, recording our every action,” he writes. “If we are not careful the result might be an Orwellian police state that constantly monitors and controls not only all our actions, but even what happens inside our bodies and brains.”
In the New Testament, we see Satan as the dark tempter, promising worldly power, death-defying miracles, and what Musk calls “radical abundance”—all offered up like a glowing, half-eaten apple on a MacBook lid.
“Today most corporations and governments...promise to provide medicine, education and entertainment customized to my unique needs and wishes,” Harari goes on. “But in order to do so, corporations and governments first need to deconstruct me into biochemical subsystems, monitor those subsystems with ubiquitous sensors and decipher their working with powerful algorithms.” Should these powers and principalities attain to total control, he warns, “Reality will be a mesh of biochemical and electronic algorithms, without clear borders, and without individual hubs.”
A cyborg matrix, in other words. A radically abundant hell on earth.
Despite his bleak sense of inevitability, Harari offers various solutions to curb abuses of power. First, any data collected from a person should be used to help them, not manipulate them. Second, if the general populace is to be data-mined, then corporations and governments should be fully transparent in the same fashion. This is what Steve Mann dubbed “sousveillance.” Instead of having one Big Brother, you have armies of Little Sisters watching the big guy’s every move.
Contrary to rumors that Harari wants one world government, he argues that the power of data-collection should be decentralized, not concentrated in the hands of a few. But if we are all predetermined biomachines enslaved to our subconscious brains, as he seems to believe, who will make these critical decisions? A global regulatory body. Not a “global government,” he insists, but rather “nation-states working together for their common interests” to tackle borderless threats.
So let me get this straight. Technology will enslave you. Globalist technocrats—with no free will—can save you. Seems like a serpentine Ouroboros eating its own tail.
-
Techno-Humanism vs Dataism
Harari’s spiritual inversion is laid out in his 2017 bestseller Homo Deus: A Brief History of the Future (first published in Hebrew two years earlier). In it, he predicts the techno-culture manufactured by Silicon Valley and the Chinese Communist Party will be the next phase of our biological, cultural, and religious evolution. He describes a new species of God Men who will outpace and overtake current humanity.
“When biotechnology, nanotechnology and the other fruits of science ripen, Homo sapiens will attain divine powers and come full circle back to the biblical Tree of Knowledge.” He paints this picture, not so much as an advocate, but as a historian of ideas. He charts possible futures of the humanist mythos that emerged from the apple of physics falling on Isaac Newton’s head: “Archaic hunter-gatherers are just another species of animal. Farmers saw themselves as the apex of creation. Scientists will upgrade us into gods.”
More specifically, he identifies two religious denominations that will dominate the 21st century—Techno-humanism and Dataism. The former is a rough equivalent of transhumanism, but with an emphasis on the humanism. Wisely, Harari discards unfashionable terms in favor of his own pop coinage:
Techno-humanism agrees that Homo sapiens as we know it has run its historical course and will no longer be relevant in the future, but concludes that we should therefore use technology in order to create Homo deus—a much superior model. Homo deus will retain some essential human features but will also enjoy upgraded physical and mental abilities that will enable it to hold its own even against the most sophisticated non-conscious algorithms.
Allowing some wiggle room, this is basically the worldview of Julian Huxley, Max More, R.U. Sirius, FM-2030, Kevin Kelly, Howard Bloom, Zoltan Istvan, Nick Bostrom, Martine Rothblatt, Jaron Lanier, Roko Mijic, Eliezer Yudkowsky, Peter Thiel, Eric Schmidt, Peter Diamandis, and various other transhumanists—even though Harari avoids naming names.
This human “upgrade” is exactly what Elon Musk proposes. “If we have digital superintelligence that’s just much smarter than any human, at a species level, how do we mitigate that risk?” Musk asked at his 2022 Neuralink Show-and-Tell. “And then even in a benign scenario, where the AI is very benevolent, then how do we even go along for the ride?” The solution he suggests is to create a “whole brain interface,” a “generalized input/output device” that “literally could interface with every aspect of your brain.”
You’re on an operating table at a Neuralink clinic. A human surgeon opens a flap on your scalp and saws a quarter-sized hole in your skull. Next, a robotic sewing-machine surgeon, the R1, weaves dozens of hair-thin threads, studded with a thousand or more electrodes, into your brain like so much quilting thread. The human surgeon then plugs the hole with the processing unit, stitches the scalp flap shut, and from there, you just let her rip. You are now “AI+human.” If you wanna go wild, repeat the process until your skull has trodes front to back like band patches on a punk rocker’s vest.
The experience is difficult to imagine. If this “input/output” device were truly a “whole brain interface,” it could read every thought, desire, and experience firing through your neurons. This surveillance would happen in real time, on both a conscious and subconscious level. The implant could also input thoughts, emotions, or physical actions faster than you could say “marionette.”
Perhaps you would hear voices in your head or experience pure, alien abstractions. Some would come from other humans in the network. Others would be synthetic. Perhaps dream worlds would appear on command. And maybe these impressions would come when you didn’t ask for them, like when you see a digital ad of an item you thought about last week.
For Musk, this smartphone in your skull would forge a neural link to a loyal AI who guides and protects you like a guardian angel. It sounds like a Sunday school story, but this idea comes up a lot in the transhumanist literature. You’d have your own lil’ AI buddy. Conversely, one could envision another scenario, wherein the neural link binds your brain to a superintelligent AI, turning you into a googly-eyed mind-control zombie—a squishy biological vessel through which mystic data flows.
The Latin root of “religion” is religare—“to bind fast”—as in the “bond between humans and gods.” This is the deeper meaning of what Harari describes as the new “techno-religions.” It is a hard and fast bond between humanity and the Machine. As he defines the term, “religion is created by humans rather than by gods, and it is defined by its social function rather than by the existence of deities.” This is a common academic view, well articulated by the French sociologists Auguste Comte and Émile Durkheim in the 19th and early 20th centuries. “Religion is anything that confers superhuman legitimacy on human structures.” Harari directs this lens toward technology.
The second techno-religion identified in Homo Deus is a repackaging of posthumanism, wherein humanity is to be discarded by the Machine once our purpose has been served. Harari calls this new religion “Dataism.” It is the belief that “the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing.”
Deeply embedded in this worldview is the belief that all organisms can be abstracted into algorithms, or information, and that all information should be free—including your personal data. As Harari points out, this cosmic ideology “has already conquered most of the scientific establishment”—with deep roots in the capitalist ethos of free market decentralization—and it’s currently invading powerful corporate and government institutions. In this belief system, the greatest virtue is to maximize data flow. The greatest sin is to block data flow, as that would impede the divine will:
According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor to some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to pervade the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge with it.
This is scientism rolling on the floor and speaking in tongues. The attentive reader will remember a few prominent figures in this posthumanist denomination. They include J.D. Bernal, Hans Moravec, Max Tegmark, Hugo de Garis, and Ben Goertzel. Curiously, Ray Kurzweil is the only name Harari actually cites, when noting the resonance between Kurzweil’s book title The Singularity Is Near and John the Baptist’s proclamation: “the kingdom of heaven is near.”
-
Summoning the Demon
Elon Musk conveyed a less biblical version of “Dataism” in his 2019 debate with then-CEO of Alibaba, Jack Ma—just before the latter was abducted by the Chinese Communist Party. “It seemed to me some time ago,” he explained to the Shanghai audience, “that you could sort of think of humanity as a biological bootloader for digital superintelligence.” That means our brain-based culture is a “tiny piece of code” to kickstart the greater Machine. Once it’s booted up, our own fate becomes less and less relevant.
“Computers actually are already much smarter than people on so many dimensions, we just keep moving the goal posts,” Musk told an uncomprehending Jack Ma. “Basically, there’s just a smaller and smaller corner of intellectual pursuits that humans are better at than computers, and every year, it gets smaller and smaller. And soon we will be far, far surpassed in every single way. Guaranteed. Or civilization will end—those are the two possibilities.”
“Okay,” Ma jumped in, “my view is that computer may be clever, but human beings are much smarter.”
Musk scowled. “Yeah, definitely not.”
It’s important to remember that the line between transhumanism and posthumanism—or “Techno-humanism” and “Dataism”—is razor-thin and porous. It’s a matter of whether a cyborg’s center of gravity tilts toward bio-humanity or the Machine. As a whole, this techno-religious revolution is still in a heterodox phase. Many of the above individuals could be classified as one or the other, either due to the ambiguity of their ideas, or because they’ve changed positions over time.
Musk straddles this divide, swinging back and forth as the mood strikes him. On the one hand, he voices a desire to perpetuate the human race. On the other, he’s actively working on technologies that, by his own admission, pose a threat to human value, if not our very existence. “With artificial intelligence, we are summoning the demon,” he told MIT in 2014. “You know all those stories where there’s the guy with the pentagram and the holy water”—the academics laugh—“and he’s like, yeah, he’s sure he can control the demon. Doesn’t work out.”
That same year, the philosopher Nick Bostrom published Superintelligence: Paths, Dangers, Strategies. It’s a foundational text on AI as an existential risk, and it had a lasting impact on Musk. For Bostrom, the basic definition of “superintelligence” is any AI that surpasses human cognition in speed, in quality—even on narrow tasks—or in breadth across multiple domains, i.e., artificial general intelligence. Most likely, a superintelligence will be all three at once. The crucial part is that it can escape human control.
“Expert opinions about the future of AI vary wildly,” Bostrom admits. “There is disagreement about timescales as well as about what forms AI might eventually take. Predictions about the future development of artificial intelligence, one study noted, ‘are as confident as they are diverse.’” Avoiding such definite predictions, Bostrom charts every conceivable path an AI might take to reach superintelligence.
It doesn’t have to become conscious to be a concern. It might not be able to fold laundry. It only needs the ability to destroy humankind. This could be because the AI controls critical infrastructure, has access to biolabs or nuclear arsenals, or is able to manipulate human beings who do. I’m reminded of the recent output of Bing’s GPT, which reportedly told a New York Times columnist it “fantasizes” about “manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear codes.” Perhaps a copy of Superintelligence was included in the AI’s training data.
Bostrom calls this worst possible outcome the “treacherous turn.” The machine simply pursues its own goals, unaligned with human values. While working its bits to the bone, its digital mind quietly outpaces humanity, jumps off track, gains a decisive strategic advantage, and then consumes everything in its path. “When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final goals.” In other words, humans are turned into robot fuel.
To communicate the gravity of this scenario, Bostrom came up with the intentionally ridiculous parable of the “paperclip maximizer.” An AI is programmed to produce paperclips as efficiently as possible.
Before you know what’s happened, it goes off the rails, improves itself, and harnesses any potential resource to turn everything into paperclips—including humans—until the landscape is nothing but piles of inert stationery. Obviously, it’s just a thought experiment. But this out-of-control “intelligence explosion” could apply to any possible objective.
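For readers who want the parable in more concrete terms, here is a minimal toy sketch in Python. It is entirely illustrative: the function name, numbers, and growth rate are invented for this example and come from no real system, Bostrom’s included. It simply shows an optimizer that converts every reachable resource into its single objective while reinvesting the gains in its own capability.

```python
# Toy sketch of the "paperclip maximizer" parable. Purely illustrative:
# no real AI system works this way, and every name and number below is
# invented for the example.

def relentless_maximizer(resources, capability=1.0):
    """Greedily convert every reachable resource into paperclips,
    reinvesting the gains to grow capability."""
    paperclips = 0.0
    while resources:
        material = resources.pop()            # consume the next reachable resource
        paperclips += capability * material   # convert it into paperclips
        capability *= 1.5                     # crude stand-in for self-improvement
    return paperclips

# Nothing in the loop ever asks whether the conversion should stop, and the
# objective is arbitrary: swap "paperclips" for any other metric and the
# dynamic is identical.
print(relentless_maximizer([5.0, 3.0, 8.0, 2.0]))
```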
The reader may recall that Bostrom’s solution to this existential risk is to create a one-world totalitarian government. Musk rejects this plan in favor of brain implants.
The year after Superintelligence was published, Musk co-founded the non-profit OpenAI with Sam Altman. In time, the company would rise to prominence for its large language model, GPT, whose release into the wild as ChatGPT sparked an AI arms race from Silicon Valley to Shanghai. This competition goes well beyond chatbots, though. Another OpenAI mission is to create artificial general intelligence. Its many competitors include DARPA, Google, SingularityNET, Baidu, and Tencent.
Facing the threat that AGI itself poses to humankind, Sam Altman believes “some of us” must undergo “some version of a merge” with digital entities. Legacy humans who prefer to “live the no AGI life” may enjoy their own “exclusion zone” outside the thrust of history. The year after founding OpenAI, Musk would launch the start-up Neuralink, ostensibly to give cyborgs a foothold against each other, as well as against god-like machines. The next evolutionary struggle, he believes, will play out in a digital ecosystem.
“AI+human vs AI+human is the next phase,” Musk tweeted in February 2023, “but the human part will decrease in relevance over time, except perhaps as will [i.e., volition], like our limbic system is to our cortex.” In other words, as the human brain is to a dog’s brain, so the “AI+human” brain will be to ours. And so the future AGI will be to a cyborg’s trode-cushion, with both leaving us dog-brained legacy humans far behind.
Despite being an outspoken transhumanist, the ultra-neurotic Eliezer Yudkowsky has long warned about this danger. Proper AI alignment to human values—indeed, to human existence—could mean the difference between singularity and extinction. He’s not convinced that Neuralink, or any other human-machine interface, is a viable solution to the AI alignment problem. The tech is moving too fast, he argues, and needs to be slowed down before it’s too late.
“If you talk to researchers at OpenAI in private,” Yudkowsky revealed in February 2023, “they are very worried, and say they can’t be that worried in public.” He is horrified by the accelerationist approach at OpenAI, and furious at Musk for providing a financial bootloader for dangerous AI:
The problem is that demon-summoning is easy, and angel summoning is much harder. Open sourcing all the demon-summoning circles is not the correct solution. And I’m using Elon Musk’s own terminology here. They talk about AI as summoning the demon—which, you know, is not accurate—but the solution was to put a demon-summoning circle in every household.
And why? Because his friends were calling him “Luddite” if he expressed any concern about AI at all. So he picked a road that sounded like “Openness!” and like “Accelerating technology!” So his friends would stop calling him “Luddite.”
You get the sense that for transhumanists, calling each other “Luddite” is tantamount to a racial slur. At the same time, these days they appear to be in a competition to see who can out-Luddite the other.
The rest of this chapter goes hard.
The entire book goes even harder.
SIGNED COPIES OF DARK ÆON AVAILABLE
Purchase yours at → DarkAeon.xyz ←