
Experience in EA


May - September 2025

Research Fellow: Principles of Intelligent Behaviour in Biological & Social Systems Fellowship 2025

The Collective Intelligence Alignment Problem

During the fellowship, my project was 'conceptual AI safety research' introducing and clarifying the 'conceptual engineering' of the (human-machine) 'collective intelligence (CI) alignment problem' - as a parallel to, critique of, and extension of the 'AI alignment problem'. Whilst working independently on a literature map of this sociotechnical paradigm shift in (A)I governance, I received mentorship from Jan Kulveit, principal investigator of the Alignment of Complex Systems Research Group. The Principles of Intelligent Behaviour in Biological & Social Systems (PIBBSS) Fellowship 2025 is a three-month summer research fellowship in the Bay Area that aims to leverage insights into the parallels between intelligent behaviour in natural and artificial systems towards progress on important questions in AI risk, governance and safety.


January - May 2025

1st Place Research Project on AI Governance: Impact Research Groups 2025

The Data of Gradual Disempowerment: Measuring Systemic Existential Risk from Incremental AI Development

As our winning submission to the 8-week IRG program, my AI Governance team (mentored by AI safety researcher Francesca Gomez) co-authored a follow-up paper to 'Gradual Disempowerment' (Kulveit et al., 2025), discussing how human disempowerment resulting from the integration of AI into key societal systems can be proactively quantified and detected. In addition to contributing to the main text, as the team's 'resident philosopher' I authored an appendix introducing my model of 'The Ship of Theseus of the State': a combination of two classical philosophical metaphors, updated for the information age, applied to conceptualising the sociotechnical process of humans being replaced by AI agents within governance ('the art of steering') and society ('the ship').


January - May 2025

Redteaming Fellowship Founder & Facilitator: Effective Altruism Oxford 2025 (Hilary Term)

The Effective Altruism Redteaming Fellowship: Doing EA Better through Constructive Internal Criticism

I joined the team of the Effective Altruism society in Oxford (where EA began) to design and lead the first Redteaming Fellowship within EA, oriented around constructive criticism ('redteaming') of the movement and its cause areas (Global Development, Animal Welfare, Existential Risk). The fellowship (hosted at Trajan House, the main EA office where the Centre for Effective Altruism is based) was an experimental space for adopting divergent, heterodox and critical perspectives on (actually-existing) EA as a method of formulating internal critiques of established beliefs and, ultimately, improving the social epistemology and cause-prioritisation of the movement as a whole. Alongside running this 6-week program, I also helped organise an EA lightning-talk series titled 'Doing Good Faster!'. My work for EA Oxford was supported by the Open Philanthropy University Organiser Fellowship.

