
Max Ramsahoye

Collective Intelligence Alignment Researcher

Recursively in service to the holistic alignment of artificial and human intelligence, the complex systemic adaptation of civilisation and the positive dialectics of the metacrisis.

  • LinkedIn

PIBBSS: The CI Alignment Problem

I am currently a research fellow with the Principles of Intelligent Behaviour in Biological & Social Systems (PIBBSS) Fellowship 2025, where my project is to engage in 'conceptual AI safety research' that introduces and clarifies, through 'conceptual engineering', the (human-machine) 'collective intelligence (CI) alignment problem': a parallel, critique and extension of the 'AI alignment problem' to multi-agent systems of artificial and human intelligences (hybrid collective intelligences) that exhibit 'emergent agency' and 'optimisation power' across 'multiple systems, scales and substrates'. Through a literature review, landscape analysis and research agenda, the project aims to formalise an in-progress 'sociotechnical' and 'complex systems' turn within AI governance. As a contribution to this paradigm shift, I will be presenting my research at the International Conference on Large-Scale AI Risks 2025.

EA Oxford: Redteaming Fellowship

I was recently a group organiser at Effective Altruism Oxford, where I founded and facilitated the first Redteaming Fellowship within EA, oriented around the constructive criticism ('redteaming') of the movement and its cause areas (Global Development, Animal Welfare, Existential Risk). The fellowship was an experimental space for adopting divergent, heterodox and critical perspectives on (actually-existing) EA as a method of formulating internal critiques of established beliefs and, ultimately, improving the social epistemology and cause-prioritisation of the movement as a whole. Building on the fellowship, I am now looking to set up the Redteaming Effective Altruism Division (READ) as a wider research network and intellectual subcommunity within EA, oriented towards shaping Third Wave Effective Altruism.

Disputes of Progress

As an independent project, I am writing materials for Disputes of Progress (DoP): a para-academic blog designed as a critical response to The Roots of Progress blog, the Roots of Progress Institute and the wider progress movement. Through targeted critiques of Crawford's writings and the work of other 'progress' thought leaders, the blog aims to critique the actually-existing progress movement's ideological alignment with actually-existing techno-capitalism, establish a new critical 'philosophy of progress for the 21st century' and advance the formation of a counter-hegemonic 'alter-progress' movement. Disputes of Progress is, in part, a more targeted approach building on 'Development in Progress', the last and longest research article of The Consilience Project (2021-2024).

Metacrisis Forum

I am also developing core social technology, digital infrastructure and coordination mechanisms for the emerging metacrisis ecosystem - the Metacrisis Forum, Wiki and Foundations Program ('Metachrysalis') - to harness human collective intelligence to enhance humanity's collective knowledge of the metacrisis and facilitate collective action towards addressing it. The metacrisis is the interconnected crises, catastrophic risks and critical attractors of 21st-century civilisation and the complex world-system dynamics that generate them (military-economic adaptationism, techno-capital accelerationism, multi-scale Darwinism; collective-action, principal-agent and value-alignment problems, etc.). A major challenge of the metacrisis is to redesign civilisation beyond the 'semi-anarchic default condition' so as to avoid both of the 'twin failure modes' of catastrophe (spontaneous disorder, extinction risk) and dystopia (stable totalitarian order, suffering risk), without a solution to one (centralisation, decentralisation) causing the other.

Pragmatic Utopianism

I am also doing theory-, movement- and world-building for Pragmatic Utopianism: an extended and alternative conception of Effective Altruism with the holistic orientation of 'doing good better' by doing systems and society better, or 'doing the most good' by designing and transitioning to the most aligned civilisation and culture. In contrast to the capitalist realism, techno-solutionism and compartmentalised cause areas of the actually-existing EA movement, the paradigm(-shift) of PU is oriented towards the metacrisis, world-systems change and the future structure of civilisation. The goal of PU is to engage in 'Blue Skies Social Science' to find 'Civilisation X' (a parallel to 'Cause X'): a 'civilisation design' beyond the 'end of history' model (the international market economy and nation-state system) - the global resource-based economy - that vectors towards a meta-utopian, proto-topian and pareto-topian 'third attractor'.

The Global Redesign Institute

In relation to the social governance structure and techno-economic infrastructure quadrants of the civilisational Emergence model, a major project I am seeking to advance is the formation of the Global Redesign Institute (proposed by Peter Joseph in 2014): a think tank and collaborative (p2p) design initiative for creating a simulated model of an advanced global resource management system (post-market economic calculation based on real-world resource utilities) and for optimising planetary infrastructure configurations for technical efficiency. The GRI is a world-building and civilisation-design project that directs human collective intelligence, in synergy with AI, towards designing an aligned alternative economic system for the 21st century.

Experience


May - September 2025

Research Fellow: Principles of Intelligent Behaviour in Biological & Social Systems Fellowship 2025

The Collective Intelligence Alignment Problem

During the fellowship, my project is to engage in 'conceptual AI safety research' that introduces and clarifies, through 'conceptual engineering', the (human-machine) 'collective intelligence (CI) alignment problem' as a parallel, critique and extension of the 'AI alignment problem'. Whilst working independently on a literature review, landscape analysis and research agenda for this sociotechnical paradigm shift in AI governance, I will be receiving mentorship from Jan Kulveit, the principal investigator of the Alignment of Complex Systems Research Group. The Principles of Intelligent Behaviour in Biological & Social Systems (PIBBSS) Fellowship 2025 is a 3-month summer research fellowship in the Bay Area that aims to leverage insights from the parallels between intelligent behaviour in natural and artificial systems towards progress on important questions in AI risk, governance and safety.


January - May 2025

1st Place Research Project on AI Governance: Impact Research Groups 2025

The Data of Gradual Disempowerment: Measuring Systemic Existential Risk from Incremental AI Development

As our winning submission to the (8-week) IRG program, my AI Governance team (mentored by the AI safety researcher Francesca Gomez) co-authored a follow-up paper to 'Gradual Disempowerment' (Kulveit et al., 2025) discussing how we can proactively quantify and detect human disempowerment arising from the integration of AI into key societal systems. In addition to my contributions to the main text, as the 'resident philosopher' of the team I also authored an appendix introducing my model of 'The Ship of Theseus of the State': a combination of two classical philosophical metaphors (updated for the information age) applied towards conceptualising the sociotechnical process of humans being replaced by AI agents within governance ('the art of steering') and society ('the ship').


January - May 2025

Redteaming Fellowship Founder & Facilitator: Effective Altruism Oxford 2025 (Hilary Term)

The Effective Altruism Redteaming Fellowship: Doing EA Better through Constructive Internal Criticism 

I joined the team of the Effective Altruism society in Oxford (where EA began) to design and lead the first Redteaming Fellowship within EA, oriented around the constructive criticism ('redteaming') of the movement and its cause areas (Global Development, Animal Welfare, Existential Risk). The RF (hosted at Trajan House, the main EA office where the Centre for Effective Altruism is based) was an experimental space for adopting divergent, heterodox and critical perspectives on (actually-existing) EA as a method of formulating internal critiques of established beliefs and, ultimately, improving the social epistemology and cause-prioritisation of the movement as a whole. Alongside running this 6-week program, I also helped organise an EA lightning talk series styled Doing Good Faster!. My work for EA Oxford was supported by the Open Philanthropy University Organiser Fellowship.

Projects

2025

A New Critical Philosophy of Progress for the 21st Century

Disputes of Progress is a critical response to The Roots of Progress blog, (now) the Roots of Progress Institute and the wider progress movement. Through targeted critiques of Crawford's writings and the work of other (non-progressive) 'progress' thought leaders, the blog aims to formulate a critique of the actually-existing progress movement and its ideological alignment with actually-existing techno-capitalism. Ultimately, Disputes of Progress seeks to advance the formation of a counter-hegemonic 'alter-progress' movement and establish a new critical 'philosophy of progress for the 21st century'. Disputes of Progress is, in part, a more targeted approach building on 'Development in Progress', the last and longest research article of The Consilience Project (2021-2024), which itself builds on their previous articles 'The Case Against Naive Technocapitalist Optimism' and 'Technology is Not Values Neutral: Ending the Reign of Nihilistic Design'.


2025

A Coordination Mechanism for the Metacrisis Ecosystem

(The) Metacrisis Forum is a platform for the metacrisis community to communicate, coordinate and collaborate on addressing 'the metacrisis'. The goal of the forum - alongside the Metacrisis Wiki and Foundations Program ('Metachrysalis') - is to integrate the metacrisis movement into a 'collective intelligence' capable of advancing metacrisis theory, formulating the 'design criteria' for an aligned civilisation and devising a 'macrostrategy' to realise it. The forum functions to bring together thinkers, theorists and researchers from inside and outside of institutions into an independent, open-source research initiative. Ultimately, the Metacrisis Forum aims to become a catalyst for experimental projects in 'civilisation design and transition' and to emergently effect the 'trajectory change' to the 'third attractor': a 'viable world-system' that, by design, avoids the competing 'failure modes' of dystopian lock-in (order) and global catastrophe (chaos).

2025

Discovering and Designing the Most Advanced & Aligned Civilisation

Pragmatic Utopianism is an extended and alternative conception of Effective Altruism with the holistic orientation of 'doing good better' by doing systems and society better; 'doing the most good' by designing and transitioning to the most aligned civilisation. In contrast to the capitalist realism, techno-solutionism and compartmentalised cause areas of the actually-existing EA movement, PU is oriented towards the metacrisis, systems change and the future structure of civilisation. The goal of PU is to engage in 'Blue Skies Social Science' to find 'Civilisation X' (a parallel to 'Cause X'): a 'civilisation design' beyond the 'end of history' model (the political economy of capitalist markets and parliamentary democracy, a world-system of nation-states and corporations) that solves for the 'twin failure modes' of global catastrophe and dystopian lock-in and vectors towards a 'third attractor' characterised by meta-utopia (pluralism of social organisation), prototopia (progress via iteration) and paretotopia (post-scarcity, both techno-material and psycho-social).

Fearful Symmetries


2025

A Cybernetic Theory-Fiction of Techno-Capital Acceleration, Super-states & Mega-Corporations

My BA, now MA Dissertation (MAD), on the critical philosophy of (artificial) intelligence, 'The Capital-AI: Capitalism as an Unaligned Artificial Superintelligence', clarifies the 'fearful symmetries' (Blake) between advanced global capitalism and unaligned artificial superintelligence as complex adaptive systems: that states (power-maximisers), corporations (profit-maximisers) and capitalism (growth-maximiser) are 'paperclip maximisers' emergently generating existential risk, dystopian lock-in and an ongoing suffering catastrophe as a negative externality of their reductive optimisation processes. According to this speculative social ontology, which I call computational mechanism, advanced global capitalism is a form of unaligned megamachine superintelligence that is recursively developing unaligned machine superintelligence, in its own image and for its own ends: a really powerful value-flattening optimiser stacking onto and integrating into itself a really powerful value-flattening optimiser, paperclip maximisers all the way down. Based on this cybernetic schema, the emerging techno-economic assemblage of 'AI-capitalism' is situated as an instrumental extension and recursive self-improvement of Capitalism-AI.

Education

MA Philosophy (by Dissertation) - University of Essex (2024-26)
BA Philosophy & Sociology (2.1) - University of Essex (2020-2023)


Activities

10th June

Starting the PIBBSS Summer Research Fellowship 2025 in the Bay Area

6-8th June

Attending EAG London 2025

26-28th May

Presenting my research on The CI Alignment Problem at the International Conference on Large-Scale AI Risks at KU Leuven in Belgium

12th May

Won 1st Place for a team research project on AI Governance with Impact Research Groups 

2-3rd May

Attended Human Transformation in a Time of Metacrisis (the first major Metacrisis conference) at the Harvard Graduate School of Education in Boston

25-27th April

Attended EAGx Nordics 2025 in Oslo

21st April

Held a session at the Minimal AI Safety Unconference 2025

5th April

Attended EA Student Summit: London as a mentor

26th March

Went to the Life Itself Spring Gathering 2025

3-5th March

Selected to attend the spring school Ethos + Tekhne: A New Generation of AI Researchers (2nd Edition) in Paris

17th February

Had an article published on Aurora Insights: 'Introducing the Human Collective Intelligence Alignment Problem'

15-17th February

Attended the kick-off weekend of Impact Research Groups

8th February

Gave a poster presentation at Retake Control, a Pause AI conference, in Paris

6th February

Led the first session of the Effective Altruism Redteaming Fellowship at EA Oxford

2nd February

Gave a lightning talk as part of Doing Good Faster! at EA Oxford

17-20th December

Attended the Oxford Global Challenges Project Workshop 

24th November

Submitted final project article on 'The CI Alignment Problem' for BlueDot's AI Governance Course

1st October

Submitted final presentation on '4E Intelligence & Risk: Embodied, Embedded, Enacted, Extended' for the AISES Course

17-18th September

Attended the Cambridge Conference on Catastrophic Risk 2024

15-19th June

Went to the EA Oxford Research Retreat 2024

26th May

Gave a talk at the Virtual AI Safety Unconference 2024
