Awareness of the various emerging Silicon Valley ideologies might provide a useful lens through which to examine current events.
The techbros have control of the technologies that increasingly run our lives, billions and billions of dollars to influence our politics, and a ruthless drive for power and control.
It turns out they’ve been spending quite a bit of time debating the big questions among themselves, and now their philosophies are spilling over into our real lives.
I’ve been meaning to supplement my mini-series on the companies pushing AI (aka Large Language Models) with a look at the putative “thought systems” that have fascinated Silicon Valley.
Then I read Matt Stoller’s “Is There a Silicon Valley Plan to Subvert Elections?” and Emile P. Torres’ “Meet the Radical Silicon Valley Pro-Extinctionists!” and knew it was time to get on that.
Not to mention Ross Douthat’s “Peter Thiel and the Antichrist: The original tech right power player on A.I., Mars and immortality” in the New York Times.
Stoller’s piece said:
“…the creation of a new political slush fund by titans in Silicon Valley. I don’t want to be alarmist, but if it goes to plan, it could functionally subvert elections in America.”
…if someone can just spend an infinite amount of money to call you a trans-loving pedophile, you’ll likely lose your race. For instance, in Ohio in 2024, long-time Senator Sherrod Brown faced $40 million of crypto spending alleging all sorts of things, and that chipped away at his popularity such that he lost.
Today, Fairshake can flip most politicians without spending a dime, secure in the knowledge that aspiring office-seekers wouldn’t want to lose simply over what they perceive as a minor policy around finance. These corporations got everything they wanted; they’re now running crypto policy for Trump, and have terrified most members of Congress into voting for whatever they want. Fairshake has amassed another large war chest for 2026, and it’s unlikely that crypto’s power will be dented until there’s a financial crash.
Unfortunately, the lesson of Fairshake was not lost on others in Silicon Valley. Marc Andreessen, who is on the board of Meta and involved in Fairshake, has been organizing this strategy in other areas. Meta CEO Mark Zuckerberg, and AI venture capital investors, have now chosen to launch their own Fairshake-style slush funds, to make it impossible to regulate generative AI or big tech.
The net effect of these pots of money is that it may become functionally impossible to enact public policy around AI through our democratic system. As AI becomes more important, that means American law will look the way Andreessen and a few other titans want it to look. Moreover, other corporate giants will start playing in their own areas, closing off other areas to democracy.
Now, it’s always been difficult, especially in the Citizens United era, to make progress, as big money does drown out a lot of good policy. Indeed, what we’re really seeing is the final stages of an organized attempt from the 1970s onward to allow money to overwhelm democracy. The Lever’s Master Plan is an excellent podcast series on it. These huge slush funds may mean that voting really has become ornamental.
Torres, in the second of his three-part series on Silicon Valley pro-extinctionism, wrote (part 1 is here):
A journalist asked me the other day what I think is most important for people to understand about the current race to build AGI (artificial general intelligence). My answer was: First, that the AGI race directly emerged out of the TESCREAL movement. Building AGI was initially about utopia rather than profit, though profit has become a significant driver alongside techno-utopian dreams of AGI ushering in a paradisiacal fantasy world among the literal heavens. Hence, one simply can’t make sense of the AGI race without some understanding of the TESCREAL ideologies.
Second, that the TESCREAL movement is deeply intertwined with a pro-extinctionist outlook according to which our species, Homo sapiens, should be marginalized, disempowered, and ultimately eliminated by our posthuman successors. More specifically, I argue in a forthcoming entry titled “TESCREAL” for the Oxford Research Encyclopedia that views within the TESCREAL movement almost without exception fall somewhere on the spectrum between pro-extinctionism and (as I call it) extinction neutralism. Silicon Valley pro-extinctionism is the claim that our species should be replaced, while extinction neutralism says that it doesn’t much matter whether our species survives once posthumanity arrives.
Torres goes on to give capsule summaries of the thinking of various Silicon Valley figures, including Carnegie Mellon’s Hans Moravec, Google co-founder Larry Page, Turing Award winner Richard Sutton, shitposter Beff Jezos aka Gill Verdon, Singularity prophet Ray Kurzweil, and:
Sam Altman (facepalm)
Altman isn’t only a major reason the race toward AGI was launched and has been accelerating, but he believes that uploading human minds to computers will become possible within his lifetime. A few years ago, he was one of 25 people who signed up with a startup called Nectome to have his brain preserved if he were to die prematurely. Nectome promises to preserve brains so that their microstructure can be scanned and the resulting information transferred to a computer, which would then emulate the brain’s functioning. By doing this, the person who owned the brain will then suddenly “wake up,” thereby achieving “cyberimmortality.”

Is this a form of pro-extinctionism? Sort of. If all future people are digital posthumans in the form of uploaded minds, then our species will have disappeared. Should this happen? My guess is that Altman wouldn’t object to these posthumans taking over the world — what matters to many TESCREALists, of which Altman is one, is the continuation of “intelligence” or “consciousness.” They have no allegiance to the biological substrate (to humanity), and in this sense they are at the very least extinction neutralists, if not pro-extinctionists.
Peter Thiel (blech)
Thiel holds a particular interpretation of pro-extinctionism according to which we should become a new posthuman species, but this posthuman species would not be entirely digital. We would retain our biological substrates, albeit in a radically transformed state. As such, this contrasts with most other views discussed here. Those other views are clear instances of digital eugenics, while Thiel advocates a version of pro-extinctionism that’s more traditionally eugenicist — namely, a pro-biology variant of transhumanism (a form of eugenics).
Torres also references a key Freudian slip on Thiel’s part when he was interviewed by NYT conservative midwit Ross Douthat:
Thiel was asked whether he “would prefer the human race to endure” into the future. Thiel responded with an uncertain “Uh —,” leading the interviewer, columnist Ross Douthat, to note with a hint of consternation, “You’re hesitating.” The rest of the exchange went:
Thiel: Well, I don’t know. I would — I would —
Douthat: This is a long hesitation!
Thiel: There’s so many questions implicit in this.
Douthat: Should the human race survive?
Thiel: Yes.
Douthat: OK.
Torres doesn’t get into Douthat’s attempt to reconcile his views with his claimed Christianity:
Douthat: But it still also seems like the promise of Christianity in the end is you get the perfected body and the perfected soul through God’s grace. And the person who tries to do it on their own with a bunch of machines is likely to end up as a dystopian character.
Thiel: Well, it’s — let’s articulate this.
Douthat: And you can have a heretical form of Christianity that says something else.
Thiel: Yeah, I don’t know. I think the word “nature” doesn’t occur once in the Old Testament. And so there’s a sense in which, the way I understand the Judeo-Christian inspiration, it’s about transcending nature. It’s about overcoming things. And the closest thing you can say to nature is that people are fallen. That’s the natural thing in a Christian sense, that you’re messed up. And that’s true. But there are ways in which, with God’s help, you are supposed to transcend that and overcome that.
Douthat: Right. But most people — present company excepted — working to build the hypothetical machine god don’t think that they’re cooperating with Yahweh, Jehovah, the Lord of Hosts.
Thiel: Sure, sure. But ——
Douthat: They think that they’re building immortality on their own, right?
Thiel: We’re jumping around a lot of things. So, again, the critique I was saying is: They’re not ambitious enough. From a Christian point of view, these people are not ambitious enough.
I should also quote from Torres’ earlier work for TruthDig to explain his acronym TESCREAL, which combines the first letters of the ideologies transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism:
“…the constellation of ideologies behind the current race to create AGI, and the dire warnings of “human extinction” that have emerged alongside it…
At the heart of TESCREALism is a “techno-utopian” vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars filled with trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.
These ideologies, we believe, are a central reason why companies like OpenAI, funded primarily by Microsoft, and its competitor, Google DeepMind, are trying to create “artificial general intelligence” in the first place.
…In (the view of Marc Andreessen), the likely outcome of advanced AI is that it will dramatically increase economic productivity, give us “the opportunity to profoundly augment human intelligence” and “take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.” Creating AI is thus “a moral obligation that we have to ourselves, to our children and to our future,” writes Andreessen.
Torres also pointed me to David Z. Morris, whose “DeepSeek and the AI Murder Cult” argues that “Rationalism links a wave of murders, FTX embezzlement, and crashing markets.”
From his piece:
(Rationalism) lurks at the heart of Sam Bankman-Fried’s rampant embezzlement at #FTX, of which $500 million went to Anthropic, an “AI Safety”-fueled startup that employs Amanda Askell, ex-wife of Effective Altruism founder Will MacAskill. $5 million in money stolen by SBF also went directly to the Center for Applied Rationality, one of Yudkowsky’s two organizations. Half a million in FTX funds also helped facilitate the purchase of a hotel that became the headquarters of a CFAR subsidiary called Lightcone Research, which notoriously featured a number of eugenicists and white supremacists at events.
It also helps explain, I think, why OpenAI and other U.S. artificial intelligence startups just got embarrassingly annihilated by a Chinese hobbyist: because they’re driven by some of the same ideas that have led fringe Rationalists into madness.
There have now been at least EIGHT violent deaths over the past three years tied, to varying degrees, to splinter factions of the Rationalist movement founded by Eliezer Yudkowsky in San Francisco. The Rationalist community is eager to disown the perpetrators, and it’s true that the factionalists have been in conflict with the main group for years. More to the point, they seem simply insane.
But, I’d tentatively argue, the source of the conflict is that these bad actors took Yudkowsky’s basic ideas, above all ideas about the imminent destruction of humanity by AI, and played them out to a logical conclusion – or, at least, a Rationalist conclusion. This wave of murder is just the most extreme manifestation of cultish elements that have bubbled up from the Rationalist movement proper for going on a decade now, including MKUltra-like conditioning both at Leverage Research – another splinter group seemingly pushed out of Rationalism proper following certain revelations – and within the Center for Effective Altruism itself.
In his piece “FTX, Rationalism, and U.S. Intelligence: A Conspiracy Theory” (an excerpt from his book “Stealing the Future: Sam Bankman-Fried, Elite Fraud, and the Cult of Techno-Utopia”), Morris has connected some alarming dots:
the Center for Applied Rationality, which received (and has resisted returning) funds stolen from FTX customers by Sam Bankman-Fried and his co-conspirators, bears a striking resemblance to the agendas for both individual brainwashing and large-scale social engineering that drove some of the Central Intelligence Agency’s most disturbing programs.
Now, with the revelation that a group of rogue Rationalists known as the “Zizians” have been tied to a wave of murders across the U.S., it seems justified to explore the possibility that the Rationalist movement isn’t merely a misguided ethos turned toxic by cult-like insularity. Placed in a broader context, its tenets and practices begin to resemble both the Human Potential Movement centered around institutions like the Esalen Institute; and, in fringe sub-groups that have splintered from Rationalism proper, the illicit human experimentation conducted by the CIA beginning in the 1950s under the code name MKUltra.
Morris’ piece “What Is TESCREALism? Mapping the Cult of the Techno-Utopia” can help us get back to current events:
the AGI myth is why reality-based efforts to make existing AI algorithms safe for currently-living humans have virtually zero traction among the loudest proponents of “AI safety.” In just the same way that Sam Bankman-Fried stole customer funds to make long-term bets, today’s AI leaders are actively and vocally dismissing the current, material risks of machine learning algorithms, and focusing instead on a long-term future that they confidently predict without a shred of actual evidence. (Just two baseless assumptions of the doomer fantasy are that A.I. will become self-improving, and that it will easily master nanotechnology.)
This patent display of foolishness may be the deepest underlying reason the tech industry had to purge Timnit Gebru. The vision of AI shared by people like Sam Altman is significantly derived from sci-fi like James Cameron’s Terminator, and going as far back as Karel Capek’s R.U.R., the origin of the word “robot.” Capek’s 1923 play far preceded anything like AI, making clear that the intentional, humanoid, thinking “robot” has always been primarily a metaphor for the much more complex dialectic by which man-made technology becomes a threat to human essence. The Singularitarians have made the childish error of mistaking these simplified storybook tales for the complexity of reality, and as long as Gebru and her cohort remain committed to describing how technology actually works, the collective fantasy of superintelligent yet extremely dangerous AI is threatened.
Morris also connects TESCREALism to the newly launched publication The Argument and the Abundance bros in his “Effective Altruism In a Skinsuit: ‘The Argument’ Is Laundering Austerity”:
The launch of new “liberal” news outlet The Argument has been unambiguously hilarious, essentially because most of their marquee writers, notably Matt Yglesias and Kelsey Piper, are not so much “liberal” in any commonly understood American sense as “center-right-to-secretly-eugenicist.” Piper and Yglesias are both formerly tied to Vox, and The Argument also features Derek Thompson as a staff writer – Ezra Klein’s partner in the ideologically very similar “Abundance Liberalism” project, which is largely about co-opting right-wing deregulation rhetoric.
When you look at the funding for The Argument it becomes very clear why this “liberal” publication is dedicated to undermining the case for a welfare state. The Argument is primarily funded and staffed, not by “liberals,” but by a mix of Effective Altruists like Dustin Moskovitz strategically moving away from that brand after the FTX debacle showed its strategic and ideological emptiness; and entities tied to far-right funding sources including Peter Thiel and the Koch Brothers. This is “liberalism” in 2025.
If you know Yglesias and Piper, you know their whole shtick is maintaining a strategic ignorance that serves their ideological aims.
Freddie deBoer has some supplementary thoughts on Ezra Klein that don’t explicitly link back to Silicon Valley ideologies but offer more insight:
Klein, in his earnest credulity toward the claims of AI maximalists, shows us one way this refusal plays out. Ezra’s entranced by the prospect of radical technological transformation, by the possibility that generative models or robotics or biotech are going to utterly remake the human condition.
He’s interviewed dozens of people on the subject, and though he hedges and qualifies, there’s always an underlying openness to the idea that we’re on the brink of a sci-fi future. “Person after person… has been coming to me saying… We’re about to get to artificial general intelligence[!]” says Ezra, in his breathless fashion, not pausing to acknowledge that every one of those people is someone who has a direct financial investment not in AGI being real and imminent but in the impression that AGI is real and imminent.
Klein doesn’t want to let go of the possibility that he might live in Star Trek or Blade Runner or Terminator; he wants to believe that our lives can be so thoroughly altered that the weight of ordinary existence will be lifted. And I promise I’m not blowing smoke when I say that, where I find most AI evangelists to be disingenuous charlatans, I find everything Ezra says to be aching with sincerity and sentiment. Which, analytically, is of course the exact problem. He’s too eager to believe.
…
Klein wants the AI story to mean that we’re on the verge of a post-scarcity society, that the hard grind of politics and labor might soon be obviated by miraculous machines; he’s savvy enough not to say the other half out loud, which is that he wants to pilot a mech on the sands of Mars, to guide his X-Wing into the mouth of a wormhole that will lead him who knows where.
…
Klein’s fantasies risk destroying the world economy.
The thing that deBoer gets is that Klein is desperate to believe in magic. What he’s missing is that Klein’s fantasies are structured and guided by Silicon Valley “thinkers” who are equally committed to a fantasy-life vision of reality.
Unfortunately for everyone else, they’ve got the money and power to impose these fantasies on the rest of us.