“those characteristics of a society— ...within its economy, its polity and its ideology—which thrust it in a direction whose outcome must be the extermination of multitudes”
AI-Exterminism: The Death Drive of the World-System
AI-Exterminism applies E.P. Thompson’s Cold War theory of "Exterminism" to the contemporary era of artificial intelligence. It posits that the geopolitical and economic logic of the current world-system has locked humanity into a deterministic drive toward self-destruction. Just as the nuclear arms race created a "thrust towards extermination" that seemingly operated independently of human will, the race for artificial superintelligence (ASI) is a race over the precipice in which no one is fully in control. Subject to competitive pressures and ideological impulses, the new AI military-industrial complex is intent on increasing capabilities (instrumental rationality) towards superintelligence without bounds: supposedly 'rational' strategies compound into a collectively irrational trajectory towards the end of civilization. Philosophically, the condition of AI-Exterminism is characterised by a fatal 'means-ends reversal': the technical "means" of AI are overdeveloped and privileged above the moral "ends" of society, threatening human welfare and ultimately survival in service of a deeply flawed vision of technological utopia.
❝ There are contradictions within this gathering determinism, and countervailing forces in both blocs, as to which I have said, in these notes, very little. It remains to indicate what an anti-exterminist configuration of forces might look like, and what its strategy might be, if it were to stand any hope of success.
First, it would have to mobilize itself with great rapidity, since we are already within the shadow of collision...Fourth — and this may be the most critical and decisive point — it must engage in delicate and non-provocative work to form alliances between the peace movement[s]...
This is of necessity; and without such internationalist alliances which reach across the fracture we will not succeed. The exterminist thrust (we have seen) summons up and augments the thrust of its exterminist antagonist. The counter-thrust cannot come from the other, but only from within the resistance of peoples inside each bloc. But so long as this resistance is confined within its own bloc, it may inhibit the thrust to war but cannot finally impose alternative directions. So long as each bloc's resistance movement can be categorized as the "ally" of the other, exterminism (with its powerful bases in the weapons-systems-and-support-complex) will be able to police its own territory, reassert ideological control, and, eventually, resume its thrust.
Hence only the regeneration of internationalism can possibly summon up a force sufficient to the need. This internationalism must be consciously anti-exterminist: it must confront the ideological imperatives of both blocs: it must embody, in its thought, in its exchanges, in its gestures, and in its symbolic expressions, the imperatives of human ecological survival. Such a movement cannot be mediated by official or quasi-official spokespersons of either bloc...
Internationalism today demands unequivocal rejection of the ideology of both blocs... This must not be a hidden tactic but an open and principled strategy. This may be a most critical point in the dissolution of the exterminist field-of-force. It will be contested with equal ferocity by [both blocs]... It will require symbolic manifestations and a stubborn internationalist morale. And it will bring friends into danger.
Finally, it should go without saying that exterminism can only be confronted by the broadest possible popular alliance: that is, by every affirmative resource in our culture. Secondary differences must be subordinated to the human ecological imperative. ❞
— E.P. Thompson, 'Notes on Exterminism: The Last Stage of Civilisation', New Left Review (1980)
Exterminisms
Notes on Exterminism (Thompson): In his 1980 essay, E.P. Thompson argued that the Cold War nuclear arms race could not be explained by rational politics or deterrence theory alone. Instead, he proposed that the "mode of production" had been colonized by a "mode of extermination." In this state, the military-industrial systems of the superpowers gained relative autonomy, evolving according to their own internal logic of expansion and lethality rather than responding to actual threats. History became driven by the inertia of weapons systems themselves, pushing humanity toward collision regardless of what politicians, the public, or even the generals personally desired.
Nuclear Exterminism → AI Neo-Exterminism: This concept marks the transition from the "First Cold War" (Nuclear) to the "Second Cold War" (AI). While nuclear exterminism threatened instant physical destruction through fission, AI Neo-Exterminism threatens a more complex failure mode: recursive self-improvement and a total loss of control. Unlike nukes, which sit in silos until fired, AI is a "dual-use" technology that integrates into the fabric of daily life. The threat here is not just an explosion, but the speed of technological evolution outpacing the speed of human wisdom and governance, leading to a termination event where the future is optimized for variables incompatible with human life.
Bio-Exterminism (Bostrom): This refers to the radical democratization of "omnicidal" capability. While nuclear weapons required massive state infrastructure and centrifuges, AI-aided biology allows small groups, "lone wolf" actors, or even individuals to engineer specific pathogens with pandemic potential. This shifts the "threat landscape" from a few centralized states (a "unipolar" or "bipolar" risk) to a decentralized, ubiquitous peril (a "multipolar" catastrophe), where the capacity to end civilization becomes as accessible as software code, making containment virtually impossible under current paradigms.
Exhaustionism (Stewart): Building on E.P. Thompson’s concept of Exterminism (the active drive toward nuclear annihilation), Exhaustionism describes the dominant "meta-ideology" of the contemporary "global organic crisis". It is characterized by ideological enervation and a "dangerous stagnation in political imagination," where both elites and the public accept that civilizational collapse is either occurring or imminent, yet believe it is "too late" for fundamental structural transformation.
The New Power Elite
The Thermonuclear Monarchy (Kemp): Luke Kemp describes how the advent of nuclear weapons fundamentally broke the social contract of democracy. Because a nuclear exchange requires decision-making in minutes, the power to kill everyone on Earth was concentrated into the hands of a single individual (the President/Executive), creating a "monarchy" within the shell of a democracy. This necessitates a permanent state of emergency where the survival of the species hinges on the psychological stability of one person, bypassing all checks, balances, and public consent.
The Cybersilicon Corpotocracy (Ramsahoye): This is the AI-era evolution of the Thermonuclear Monarchy. Instead of a political leader holding the nuclear codes, sovereignty over the future of the species is privatized into the hands of a few unelected tech CEOs and boards of directors—the "New Power-Elite." These entities control the development of superintelligence, making decisions that affect every living being without any democratic mandate. It represents the ultimate privatization of benefits (short-term wealth and status) and socialization of losses (existential risk).
Big Tobacco, Big Oil & Big AI (Tegmark): Max Tegmark identifies a historical lineage of profit-driven denial. Just as Big Tobacco engineered doubt about lung cancer and Big Oil obfuscated the reality of climate change to protect trillions in revenue, "Big AI" companies are incentivized to downplay the existential risks of their products. They engage in "ethics washing" and lobby against regulation to maintain the flow of investment, prioritizing market share over species safety in a repetition of the corporate playbook that sacrifices the public good for private gain.
Manufacturing Consent (Chomsky): This concept explains how media and corporate power manipulate public discourse to manufacture support for policies that are actually harmful to the general population. In the context of AI, it describes how the narrative is shaped to frame AGI development as "inevitable" and "beneficial," marginalizing critics as "doomers" or Luddites. It ensures that the public remains passive observers of their own obsolescence, accepting the risks of AI scaling as the necessary price of "progress."
Theory of Historical Development
Root Socioeconomic Orientation (Joseph): This concept defines the foundational "thought syntax" and structural logic of modern civilization—specifically the market system—as the primary driver of human behavior and social outcomes. It refers to a social order premised on scarcity, competition, and self-interest, which evolved from the survival pressures of the Neolithic Revolution and the subsequent adoption of agriculture and trade. This orientation functions as a form of constant operant conditioning, wiring the collective psychology to view the world through a lens of fear, dominance, and "in-group/out-group" bias. Joseph argues that this root logic of competitive survival - an assumed 'fundamental [material] inadequacy' (Fuller) and 'war of all against all' (Hobbes) - acts as a structural force that inevitably generates structural violence, inequality, and ecological destabilization, overriding individual moral intent.
Military-Economic Adaptationism (Dafoe): Allan Dafoe’s concept describes the structural compulsion for nations to adopt dangerous technologies to avoid being overtaken by rivals. In an anarchic global system, if one nation builds autonomous weapons, others are forced to follow suit or face obsolescence. This creates a "race to the bottom" on safety standards, where slowing down to ensure safety is punished by the loss of economic or military dominance, trapping all actors in a suicide pact of rapid escalation.
The Molochian Game-Theoretic Force Function (Schmachtenberger): "Moloch" is a metaphorical personification of negative-sum games and coordination failures: the game-theoretic function that forces agents to sacrifice long-term values (like survival or a habitable planet) for short-term power and competitive advantage. It dictates that the only systems that survive are those that ruthlessly expand and exploit, even if they destroy the substrate (Earth) they live on. Moloch is the "god" of the race to the bottom, demanding the sacrifice of the future for the present.
The Cooperator's Curse: This refers to the tragic game-theoretic dynamic where the dominant strategy in a non-cooperative game is defection (e.g., building the weapon, polluting, scaling AI), even though mutual cooperation (not doing those things) is the rational choice for collective survival. Because actors cannot trust one another to abstain, they are "cursed" to choose the path of mutual destruction to avoid the "sucker's payoff" of being the only one who cooperated while others defected.
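A minimal sketch of this dynamic, assuming stylised two-player Prisoner's Dilemma payoffs (the numbers, labels, and function below are illustrative assumptions, not taken from any of the cited authors), shows why defection dominates for each actor even though mutual cooperation is collectively better:

```python
# Illustrative Prisoner's Dilemma payoffs for a two-actor capabilities race.
# All values are hypothetical placeholders: "cooperate" = restrain/pause,
# "defect" = race ahead (build the weapon, pollute, scale AI).
PAYOFFS = {
    # (my move, rival's move): my payoff
    ("cooperate", "cooperate"): 3,  # shared safety dividend
    ("cooperate", "defect"):    0,  # the "sucker's payoff"
    ("defect",    "cooperate"): 5,  # unilateral advantage
    ("defect",    "defect"):    1,  # mutual race to the bottom
}

def best_response(rival_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed rival move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, rival_move)])

# Defection is the best response whatever the rival does, yet (defect, defect)
# pays each actor 1 while (cooperate, cooperate) would have paid each actor 3.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

Because neither actor can trust the other to abstain, both converge on the mutually worse (defect, defect) cell; that convergence is the curse.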
Doom
Negative Existential Externality (Negative Externality + Existential Catastrophe)
This concept scales the standard economic failure of the "negative externality" (pollution costs not reflected in product prices) to the level of species survival. It argues that the current market price of high-risk technologies (like AI development or gain-of-function research) creates a Negative Existential Externality: the cost of the transaction is not merely environmental degradation, but the potential annihilation of all future value. In this system, private entities capture the immediate profits of acceleration while "socializing" the risk of extinction to the entire human species. It represents the ultimate market failure, where the "price" of a cheaper consumer service today is the statistical probability of the end of the world tomorrow, a cost that is entirely invisible on the corporate balance sheet.
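As a purely illustrative formalisation (the symbols below are assumptions introduced for exposition, not a formula from the cited literature), the externality can be written as the gap between the cost a developer prices in and the expected cost society actually bears:

```latex
% Illustrative only: C_private, p_doom and V_future are assumed symbols.
\[
  C_{\text{social}} \;=\; C_{\text{private}}
  \;+\; \underbrace{p_{\text{doom}} \cdot V_{\text{future}}}_{\text{negative existential externality}}
\]
```

The market prices only C_private; even a small probability of catastrophe p_doom, multiplied by the value of all future flourishing V_future, yields an unpriced term that can dwarf any private gain yet appears on no balance sheet.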
Military-Economic Maladaptationism
A critical subversion of Allan Dafoe’s "adaptationism" theory. While Dafoe argues that states rationally adapt to military and economic competition by adopting dangerous technologies (like AI weapons) to survive, Maladaptationism argues that this competitive logic itself has become an evolutionary dead-end. In a closed, fragile system with existential stakes (nuclear, biological, AI), the "rational" strategy of maximizing power relative to rivals is actually "maladaptive" for the species as a whole, because it guarantees a collective race to the bottom where safety margins are eroded to zero. The very mechanism that drove historical state survival now drives civilizational suicide.
Hyper-Agents of Doom (Agents of Doom + Hyperobjects)
Synthesizing Luke Kemp’s "Agents of Doom" (institutions driving risk) with Timothy Morton’s "Hyperobjects" (entities massively distributed in time and space), this concept describes risk-generating entities that have become too vast and "viscous" to be managed by human politics. A Hyper-Agent of Doom is not just a rogue state or corporation, but a decentralized, globally distributed system (like the global financial market or the fossil-fuel extraction network) that acts with a singular, destructive agency. Because these agents are "non-local"—existing everywhere and nowhere simultaneously—they cannot be dismantled by traditional targeted regulation. They are pervasive atmospheres of risk that stick to us, creating a sense of paralysis because the "villain" is the very environment we inhabit.
The Doomsday Megamachine (Doomsday Machine + The Megamachine)
This concept fuses Daniel Ellsberg’s specific "Doomsday Machine" (the automated nuclear retaliation system) with Lewis Mumford’s "Megamachine" (society organized as a rigid, bureaucratic apparatus). It posits that we have not just built a weapon capable of ending the world, but we have turned our entire civilization into a Doomsday Megamachine—a socio-technical organism where every cog (from the software engineer to the stock trader) is functionally integrated into a process geared toward self-destruction. In this view, the "machine" has no off-switch because it is composed of human routines, institutional inertia, and economic imperatives; the bureaucracy itself has become the ballistic missile, blindly executing a program of terminal efficiency.
World-Risk Factory (World Risk Society + Social Factory)
Combining Ulrich Beck’s "World Risk Society" with the Italian Autonomist concept of the "Social Factory" (where capitalism expands beyond the factory walls into all of social life), this concept argues that the production of existential risk has become the primary "output" of modern life. We are no longer just living in a risky society; we are working in a World-Risk Factory. Every digital interaction, every consumption choice, and every moment of data generation is unpaid labor that feeds the algorithms and systems (the "social factory") that destabilize the world. Risk is not an accidental byproduct; it is physically manufactured by the daily, microscopic reproduction of the system by billions of people, making us all unwitting assembly-line workers in the production of our own demise.
Life-Worlds & Agents
The Exterminist Mind & Will (Thompson): "Impulsive exterminism began to grow an exterminist mind and will." This concept describes the psychological and sociological crystallization of the drive toward destruction. Thompson argued that what began as "impulsive" reactions to geopolitical fear eventually hardened into a fixed, institutional psychology—an "Exterminist Mind". It posits that the system itself develops a "Will" separate from the individuals within it; the military-industrial complex is no longer just a tool of the state, but a subject with its own desires, rationalizing the irrational (mutually assured destruction) as necessary and inevitable.
MADness: This describes the specific psychological state of the technocratic and military elite, a collective mentality characterized by a "numbing of the moral imagination," where the human capacity for empathy is cauterized by technical calculations. It is the learned ability to calmly discuss "megadeaths," "collateral damage," and "acceptable losses" in abstract, strategic terms, completely detached from the visceral human reality of the horror. This intellectual sanitization allows planners to rationally contemplate scenarios that are objectively insane, treating the end of the world as just another variable in a spreadsheet.
Nuclear Weapons as Possessing Relative Autonomy (Thompson): "Nuclear weapons (all weapons) are things: yet they, and their attendant support-systems, seem to grow of their own accord, as if possessed by an independent will. Here at least we should reach for that talisman, 'relative autonomy.'" Thompson utilized the Marxist concept of "Relative Autonomy" to argue that weapons systems are not merely inert tools of political policy. Once a weapons system (or today, an AI system) reaches a certain scale of investment and bureaucratic complexity, it gains a life of its own. It generates its own economic rationale (jobs, contracts), its own scientific inertia (research must continue), and its own strategic logic (use it or lose it). The "Thing" begins to dictate policy to the politicians, rather than the other way around. "World War Three could burst out as 'something that no one willed'; the resultant of competing configurations of social forces," simply because the autonomy of the weapon-system overrode human agency.
AI Weapons as Possessing Absolute Autonomy: This concept marks the terrifying qualitative leap from Thompson’s era to the present. While nuclear weapons possessed Relative Autonomy (their use was dictated by institutional inertia, yet still required a human decision to launch), AI weapons introduce the possibility of Absolute Autonomy. In this stage, the "loop" is closed; the weapon system not only generates the strategic logic for war but executes the "kill chain" (selection and engagement of targets) without human intervention. The "Thing" transitions from a tool that shapes policy to an agent that acts upon the world directly, potentially operating at speeds (hyper-war) or following optimization pathways (instrumental convergence) that remain opaque to its human creators, rendering us spectators to our own destruction.
Empire, Biopolitics, Psychopolitics & Necropolitics
This conceptual sequence maps the evolution of power and control, moving from the management of life to the management of death.
Empire (Hardt & Negri): This describes the new, decentered global order that has superseded the nation-state. Empire is the boundless, supranational network of global capitalism that regulates the world. In the context of risk, it implies there is no "outside" and no single "control room" to shut down dangerous trends; the risk-generating mechanisms are distributed across a global, acephalous (headless) network.
Biopolitics (Foucault): Foucault defined this as the power to "make live and let die." It is the administration of life itself—managing populations through hygiene, health, and statistics to optimize their economic utility. In CS-ER, this explains how populations are monitored and "optimized" by the state/tech apparatus.
Psychopolitics (Han): Byung-Chul Han updates Foucault for the digital age. Power no longer needs to coerce the body (biopolitics); it seduces the mind. Psychopolitics is the exploitation of the psyche, where subjects voluntarily expose their data and exploit themselves in the pursuit of achievement. It stabilizes the system by internalizing control, making resistance impossible because the subject believes they are free.
Necropolitics (Mbembe): Achille Mbembe describes the ultimate expression of sovereignty in a world of existential risk: the power to dictate who may live and who must die. It moves beyond "making live" (biopolitics) to the creation of "death-worlds"—zones where populations (refugees, the poor, the climate-vulnerable) are subjected to conditions of living death. In the context of AI and climate collapse, Necropolitics is the logic of the lifeboat: managing the crisis by designating "surplus populations" to be abandoned or extinguished to secure the safety of the few.
References
— E.P. Thompson, 'Notes on Exterminism: The Last Stage of Civilisation', New Left Review (1980)
— Blake Stewart, 'Notes on exhaustionism, the latest moment of the global organic crisis', Institute of Race Relations (2022)
— Corin Katzke & Gideon Futerman, 'The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating', Convergence Analysis (2024)
— Anthony Aguirre, 'Keep the Future Human', Future of Life Institute (2025)
— Luke Kemp, 'Agents of Doom', BBC Future (2021)