Self-characterisation: I am an interdisciplinary social philosopher principally concerned with the metacrisis, metamodernism, the megamachine and macrostrategy. Broadly speaking, I situate myself on the periphery of the Effective Altruism, Existential Risk (studies), AI alignment and Progress (studies) space — with a heterodox perspective, informed by critical theory (post-capitalism, left-accelerationism), systems theory (AI theory, neo-cybernetics) and meta-theory (integral theory, conceptual engineering).
Current Projects: I'm currently developing SourceCodeX (source code + codex): a theory-building, wiki-style project for the emerging space intersecting the metacrisis, effective altruism and critical political economy, aiming to cohere concepts from each of these memetic systems into a whole that represents their synthesis (effectively creating the equivalent of the LessWrong & EA Forum Wikis). I'm also developing Molochian, a digital cultural artifact documenting the intellectual history (critical genealogy) of Moloch - the god of destructive competition (coordination failure, negative-sum game theory) and means-ends-reversal - the sacrifice of value (the good, the true and the beautiful) for survival and power.
Past Role: I was recently a research fellow at Life Itself and the Second Renaissance initiative, during which I started working on a sequence on Meta-theory - foundational research and conceptual engineering surrounding the metacrisis, metamodernism, metatheory and metaphysics, including 'What is the Metacrisis and What Should it Be', 'Maps and Mechanisms of the Metacrisis' and 'Metamodernism vs Neomodernism'.
Writings in-progress: Complementing my overarching research project Civilisational Mechanics, I am also writing a tripartite collection, Fearful Symmetries; a counter-blog at Disputes of Progress; and a sequence on Critical Effective Altruism.
Maps of the Metacrisis: Other projects of mine include Molochian, an intellectual history of the god of destructive competition, and Cybersynergetic, a literature map of the civilisational superintelligence alignment problem - the latter undertaken during PIBBSS.
Academic Research: My philosophy (BA, now MA) dissertation (MAD), The Capital-AI: Capitalism as an Unaligned ASI, models advanced global capitalism as a form of unaligned (cybernetic) artificial superintelligence that is developing unaligned (technical) artificial superintelligence. My capstone project, Capital Phenomenology: What Is It Like to Be Capitalism?, draws a further parallel to evolution, framing the market 'extended order' as a distributed 'intelligent designer' & 'emergent agency' across multiple subjects.
Designed Fellowships: I have also created a Redteaming EA Fellowship for the constructive criticism of the movement (which I ran at EA Oxford) and a Pragmatic Utopianism Fellowship presenting an extended and alternative conception of EA as doing systems and society better.
Involved Events: I have previously spoken at The International Conference on AI Risk on conceptual engineering and collective intelligence and designed the sessions of the Sensemaking Summer School on the metacrisis and metamodernism.
Former Experience: I have previously been a group organiser, community builder and Redteaming fellowship facilitator at Effective Altruism Oxford; an AI governance fellow of Impact Research Groups London, part of the winning research group on The Data of Gradual Disempowerment; and a summer research fellow of the Principles of Intelligent Behaviour in Biological and Social Systems initiative in San Francisco in 2025, focused on collective intelligence and complex systems.
Projects-in-Progress
Philosophical musings on the megamachine, the metacrisis and meta-modern mythos
Like the natural philosophers and classical physicists who sought to grasp the celestial mechanics of the universe, as a technological philosopher and social physicist of the modern world, the end of my inquiry is a civilisational mechanics: a grand critical systems theory of how civilisation functions and, more importantly, dysfunctions.
In this regard, the principal aim of civilisational mechanics is to arrive at a theory of causation of contemporary problems, a theory of the metacrisis: how all the problems, crises and risks of modernity emerge from the fundamental structure and culture of civilisation, the dominant game-theories of the world-system and the underlying justifying beliefs that reproduce and enact them.
What we must end most fundamentally, as the generator of the metacrisis, is the civilisational evolutionary process of military-economic adaptationism: a process that has previously game-theoretically forced (Molochian-trapped) agriculturalisation (12,000 BCE) and industrialisation (1750) and is now accelerating autonomisation (2030): the replacement of humans with AI agents in critical societal systems, resulting in either human disempowerment if successful or a misalignment catastrophe in the attempt.
Ultimately, the self-terminating process we must counteract is the new form of AI-exterminism (alt: neo-exterminism): succeeding the nuclear exterminism of the Cold War, AI-exterminism in the 21st century is the emergent trajectory of the AI arms race towards mutually-assured AI malfunction and destruction ('something that no one willed'; 'the resultant of competing configurations of social forces') [Thompson].
In light of this diagnosis, the project that must be undertaken is defined by pragmatic utopianism and civilisation design: the interdisciplinary research project of formulating the principles and protocols of an aligned world-system and the theory of transition (macrostrategy) towards realising it. This is the end goal of civilisational mechanics (the meta-theory and this blog).
on the parallels between capitalism, superintelligence, God and evolution
Fearful Symmetries derives its name and philosophical significance from a recurring line (at the end of the first and last stanzas) of William Blake's poem The Tyger from Songs of Experience: 'What immortal hand or eye, / Could frame thy fearful symmetry?'; 'What immortal hand or eye, / Dare frame thy fearful symmetry?'
As opposed to the Evil God of The Tyger, the intelligent designer - the 'immortal hand or eye' - that Fearful Symmetries is concerned with is 'the invisible hand' and 'life-blind watchmaker' (and watch) of the market. As opposed to the natural universe, Fearful Symmetries is concerned with the 'tremendous cosmos of the modern [spontaneous capitalist] economic order' that is 'the result of human action, but not the execution of any human design'.
FS (re)conceptualises and critiques capital as an unaligned artificial intelligence and intelligent design - an autopoietic architect and architecture, creator and creation - that has manufactured an artificial nature with an artificial selection process and form of artificial life. Ultimately, FS aims to construct a meta-modern mythos for more-than-human civilisation and a metanarrative for the metacrisis.
Symbolism: two symmetrical arrows symbolising the past and future lightcone and the present point in time in between ('We live in the interregnum between worlds, between paradigms')
Three Interrelated 'Fearful Symmetries'
➢ capitalism as recursively self-improving unaligned artificial superintelligence
➢ capitalism as religion and capitalist ideology as religious theodicy
➢ capitalism as distributed intelligent designer, reverse extended bicameral mind, Laplacean-Cartesian Demon and alien god
on the metacrisis, metamodernism, metarationality & metaphysics
foundational research and conceptual engineering geared towards critically engaging with the existing discourse and making intellectual developments beyond it.
Metatheory is the "view from above"—the study of theory itself. Metatheory is concerned with integrative frameworks (like Integral Theory or Critical Realism) that attempt to map how different disciplines, paradigms, and worldviews relate to one another: ultimately seeking to identify the partial truths in all approaches to create a coherent, holistic understanding of reality.
Conceptual Engineering is a methodological approach in philosophy that treats concepts not as fixed definitions, but as cognitive tools that should be actively designed or "engineered" to serve specific functions. If a concept is currently defined in a way that complicates our understanding of reality or implicitly conceals the structural violence of the status quo, conceptual engineering involves reconstructing that concept to better track the truth and serve normative goals.
A conceptual adaptive system
SourceCodeX (source code + codex) is a theory-building project for the emerging space intersecting the metacrisis, critical political economy, AI alignment and effective altruism. The purpose of the codex is to synthesise concepts from these frames into a coherent totality - effectively creating the equivalent of the LessWrong Wiki (1,154 entries) & EA Forum Wiki (750 entries) for a new hybrid lexicon.
Through formulating definitions and visualisations of significant abstractions (the metaphorical 'source code'), the codex functions to advance this combinatorial process and the conceptual adaptive system it forms. As well as hosting established concepts from many theorists from across these domains, the site also functions as a medium to introduce a series of my own novel conceptual engineerings that structure my internal world-model.
In time, my plan is to add up to 500 entries. Each entry features a macroconcept - represented by an animation, accompanied by an epigraph and attribution - followed by a glossary of related concepts I have curated (with AI-assisted text). Some of the definitions I have written myself; for others I have taken excerpts from existing literature that already neatly encapsulate the idea. Beyond theory, this site also attempts to be a form of art.
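As a purely illustrative aside, the entry structure just described could be modelled roughly as follows (a hypothetical Python sketch; all names and fields are my own invention, not the site's actual data model):

```python
# Hypothetical sketch of a codex entry's structure (field names are my own
# illustration, not SourceCodeX's actual data model). Requires Python 3.10+.
from dataclasses import dataclass, field

@dataclass
class RelatedConcept:
    term: str
    definition: str            # written by the curator, or an excerpt from existing literature
    source: str | None = None  # attribution when the definition is an excerpt

@dataclass
class CodexEntry:
    macroconcept: str                  # the central abstraction the entry presents
    animation: str                     # path/URL of the animation representing it
    epigraph: str
    epigraph_attribution: str
    glossary: list[RelatedConcept] = field(default_factory=list)  # curated related concepts

# Example with placeholder values only:
entry = CodexEntry(
    macroconcept="Molochian trap",
    animation="animations/molochian-trap.mp4",
    epigraph="...",
    epigraph_attribution="...",
    glossary=[RelatedConcept("multi-polar trap", "...")],
)
```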
A literature map of the civilisational superintelligence alignment problem
With artificial superintelligence – once science-fiction – entering political realism, scenario-planning and corporate strategy, the "alignment problem" reveals itself beyond AI as fundamentally cybernetic, civilisational and capitalist.
AI is not a technology in a vacuum or an alien intelligence in cyberspace, but is being developed and deployed in the real world by existing human systems: the artificial collective intelligences of power-maximising states and profit-maximising corporations that are already misaligned and out-of-control.
While think-tanks worry that future "AI could enable a human or a small group of humans to take over" or a "gradual loss of control of our own civilization", the technocratic power-elite of the present prevent AI safety regulation that would protect the whole of humanity.
Cybersynergetic serves as a central archive – a techno-cultural artifact – documenting the conceptual explosion in the discourse surrounding AI alignment (control and interpretability) where this frame has been extended to states, corporations, capitalism and civilisation as a whole.
This literature marks the emergence of a new social ontology: computational mechanism (alternatively neomechanism), an information-age update of the classical mechanism of the industrial age, changing the dominant technological metaphor from the (kinetic) machine to the (cognitive) machine intelligence (AI).
The ultimate purpose of the site is to enhance the coherence of this ('complex systems', 'sociotechnical', 'multi-agent') paradigm-shift in the AI alignment space, contribute to interdisciplinary research on the metacrisis, and orient macrostrategic thought towards world-systems change.
The (inhuman) history of a memetic entity
Molochian is a digital cultural artifact documenting the intellectual history (critical genealogy) of Moloch - the god of destructive competition (coordination failure, negative-sum game theory) and means-ends-reversal - the sacrifice of value (the good, the true and the beautiful) for survival and power.
Molochian trap (alt: multi-polar trap, coordination problem): a system of incentives where individually rational actions for competitive advantage in the short-term lead to collectively adverse outcomes in the long-term; where competing actors are incentivised into a "race to the bottom" that nobody wills but no one can unilaterally stop; a "bad Nash equilibrium" where no single actor can improve their situation by changing their strategy alone, so all actors remain locked in a suboptimal dynamic.
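To make this payoff structure concrete, here is a minimal illustrative sketch in Python (my own toy example; the 'restrain'/'race' actions and payoff numbers are hypothetical, not drawn from the Molochian project itself) that checks which outcomes of a two-player arms-race game are Nash equilibria:

```python
# Illustrative toy model (not from the Molochian project itself): a two-player
# "arms race" whose only Nash equilibrium is the collectively worst outcome.
from itertools import product

ACTIONS = ("restrain", "race")

# payoffs[(action_A, action_B)] = (payoff_A, payoff_B); numbers are hypothetical.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: collectively best
    ("restrain", "race"):     (0, 4),  # unilateral restraint gets exploited
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # race to the bottom
}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player gains by unilaterally switching."""
    a, b = profile
    pay_a, pay_b = payoffs[profile]
    best_a = max(payoffs[(alt, b)][0] for alt in ACTIONS)
    best_b = max(payoffs[(a, alt)][1] for alt in ACTIONS)
    return pay_a >= best_a and pay_b >= best_b

for profile in product(ACTIONS, repeat=2):
    tag = "Nash equilibrium" if is_nash(profile) else ""
    print(profile, payoffs[profile], tag)

# Only ('race', 'race') is a Nash equilibrium, even though ('restrain', 'restrain')
# gives both players strictly higher payoffs: a suboptimal dynamic nobody wills
# but no one can unilaterally exit.
```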
Beginning with the biblical deity of child sacrifice and ending in the neo-exterminism of the AI arms race, Moloch (as all true gods are) is timeless; an archetype instantiated across historical, ethical and technological conditions. Furthermore, Moloch is transcendent and immanent; we are Moloch and Moloch is us, it becomes us and we become it, we act it out and it acts through us. We 'metaphorically' attribute causality and responsibility to an emergent demonic entity ('it was Moloch that did it') when no such entity actually exists; and yet 'it' affects the world as if it does. In this way, Moloch is an alienation of our individual and collective agency.
Whilst popularised by Scott Alexander's essay Meditations on Moloch (2014), which itself referenced Ginsberg's Howl (1955), the meme of 'Moloch' also appears notably in Lang's anti-capitalist film Metropolis (1927) and, under the name 'Belial', in Huxley's post-apocalyptic novel Ape & Essence (1948). Beyond the name of Moloch, the notion of (capitalist) civilisation as a 'world-spirit' and 'alien power' - a dark emergent agency - finds precedent in the critical philosophies of Adorno and Marx. The symbol and substance of Moloch have a rich intellectual history that is largely unrecognised. The purpose of this site is to excavate and present this history.
Disputes of Progress is a critical response to The Roots of Progress (RoP) by Jason Crawford - an influential blog within the 'Progress movement', written by one of its most prominent 'thought-leaders'. Most of the writings under this project will take the form of 'response posts' that aim to target, critique and subvert particular posts from RoP.
Disputes of Progress aims to critique the work of other prominent progress theorists, as well as the wider actually-existing progress movement, for its ideological alignment with actually-existing techno-capitalism. In this regard, as a heterodox (sub)version of the mission of The Roots of Progress, the mission of Disputes of Progress is to create ‘a new critical philosophy of progress for the 21st century’.
Through original posts, guest contributions by critical (progress) theorists and critical dialogues ('adversarial collaborations') between (techno-capitalist and critical) progress theorists, Disputes of Progress seeks to become a counter-hegemonic counterbalance to the progress movement and its false progress metanarrative: the 'vindicatory genealogy' of capitalist civilisation's history that functions as a 'legitimising myth' (or system-justification) for its continuation into the future.
-
A sequence of writings that critique the actually-existing EA movement for its dominant ideological alignment with global techno-capitalism
Effective Altruism (EA) is a philosophical and social movement that has admirably set itself the ultimate moral project of ‘doing the most good’. Whilst, in principle, ‘Effective Altruism is a question and not an ideology’, in practice, ‘actually-existing Effective Altruism’ is in fact ‘wedded to a particular ideology’: the ideology of capitalism.
EA is broadly pro-capitalist and reformist in its orientation towards economic systems and systems change, approaching the project of 'doing the most good' within the capitalist paradigm and its parameters; in other words, Effective Altruism equals an 'Effective and Altruistic Capitalism'.
Further developing this critique, this sequence is critical of actually-existing effective altruism, actually-existing capitalism and the alignment between the two. Critical-EA aims to develop a critical epistemology, ideology critique and historicisation of (actually-existing) EA as a product of the 'ruling ideas', 'moral blindspots' and 'structural incentives' of the economic system of its time.
Towards Better Futures
Effective Altruism is currently undergoing a ‘utopian turn’ with thought leaders broadening their horizons from existential risk to existential flourishing; beyond the end of the world to the beginning of a better one.
After Superintelligence, Bostrom has reflected on the prospect of arriving at a Deep Utopia and a 'solved world'. Similarly, MacAskill has expanded his scope of What We Owe the Future from 'safeguarding civilization' to Better Futures ('Should we aim for flourishing over mere survival?') and 'the transition to post-AGI society'.
Following suit, the Pragmatic Utopianism fellowship aims to be an entry point into this emerging area of the EA discourse. A discourse on dystopia and utopia has always been present within the wider culture surrounding EA, but it is now coming to the forefront in the light of the transformative potential of AGI.
The hope of the fellowship is that a review of the historical and recent literature will be valuable towards guiding this new wave of utopian thought across the movement and out into the world it seeks to change.
-
Doing EA Better
The Effective Altruism Redteaming Fellowship is designed to introduce participants to the methodology of ‘redteaming’ and provide a space for the critical discussion of ‘actually-existing EA'.
By the end of the fellowship, the aim is for participants to have formed an understanding of how to do redteaming in the context of EA and to have gained clarity on how they might contribute to constructively critiquing (the principles and practices of) EA in the future.
Whether or not participants consider themselves to be ‘an EA’ or an EA skeptic (EA-adjacent or EA-critical etc), the fellowship encourages the notion that it is possible – and even highly necessary and valuable – to be both a member and a critic of EA at the same time.
In relation to EA, redteaming is the attempt to adopt divergent, heterodox and critical perspectives on actually-existing EA and formulate internal critiques of established beliefs within the movement (mainstream and elite EA views), with the ultimate goal of improving the social epistemics and cause-prioritisation of the movement as a whole.
Experience-in-EA

May - September 2025
Research Fellow: Principles of Intelligent Behaviour in Biological & Social Systems Fellowship 2025
The Collective Intelligence Alignment Problem
During the fellowship, my project was to engage in 'conceptual AI safety research' that introduces and clarifies the 'conceptual engineering' of the (human-machine) 'collective intelligence (CI) alignment problem' - as a parallel, critique and extension of the 'AI alignment problem'. Whilst working independently on a literature map of this sociotechnical paradigm-shift in (A)I governance, I received mentorship from Jan Kulveit, the principal investigator of the Alignment of Complex Systems Research Group. The Principles of Intelligent Behaviour in Biological & Social Systems (PIBBSS) Fellowship 2025 is a 3-month summer research fellowship in the Bay Area that aims to leverage insights on the parallels between intelligent behaviour in natural and artificial systems towards progress on important questions in AI risk, governance and safety.

January - May 2025
1st Place Research Project on AI Governance: Impact Research Groups 2025
The Data of Gradual Disempowerment: Measuring Systemic Existential Risk from Incremental AI Development
As our winning submission to the (8-week) IRG program, my AI Governance team (mentored by the AI safety researcher Francesca Gomez) co-authored a follow-up paper to 'Gradual Disempowerment' (Kulveit et al., 2025), discussing how we can proactively quantify and detect human disempowerment from the integration of AI into key societal systems. In addition to my contributions to the main text, as the 'resident philosopher' of the team, I also authored an appendix introducing my model of 'The Ship of Theseus of the State': a combination of two classical philosophical metaphors (updated for the information age) applied towards conceptualising the sociotechnical process of humans being replaced by AI agents within governance ('the art of steering') and society ('the ship').


January - May 2025
Redteaming Fellowship Founder & Facilitator: Effective Altruism Oxford 2025 (Hilary Term)
The Effective Altruism Redteaming Fellowship: Doing EA Better through Constructive Internal Criticism
I joined the team of the Effective Altruism society in Oxford (where EA began) to design and lead the first Redteaming Fellowship within EA, oriented around the constructive criticism ('redteaming') of the movement and its cause areas (Global Development, Animal Welfare, Existential Risk). The RF (hosted at Trajan House, the main EA office where the Centre for Effective Altruism is based) was an experimental space for adopting divergent, heterodox and critical perspectives on (actually-existing) EA as a method of formulating internal critiques of established beliefs and, ultimately, improving the social epistemology and cause-prioritisation of the movement as a whole. Alongside running this 6-week program, I also helped organise an EA lightning talk series styled Doing Good Faster!. My work for EA Oxford was supported by the Open Philanthropy University Organiser Fellowship.