
I am a final-year bachelor's student in political science (major) and philosophy (minor) at the University of Zurich. I am a member of the Swiss Study Foundation and hold a conditional offer for Cambridge's MPhil in Ethics of AI, Data and Algorithms. Discovering that there is a global community of people seeking to identify and implement the most promising ways to help others has transformed my life; I have been fascinated by the philosophy and social movement of Effective Altruism (EA) since early 2020. Having spent a lot of time exploring questions about suffering, the future, and uncertainty, I aspire to a career in global priorities research and AI governance, conducting research and policy entrepreneurship with a focus on avoiding worst-case scenarios.
– resume: https://drive.google.com/file/d/1RLRmNzAtATab-0AN1mg5FDzDXVxOxucG
– admonymous (for giving me anonymous feedback on anything): https://admonymous.co/eleos_arete_citrini
– Animal Welfare Library – co-created with Arvo Muñoz Morán: https://www.animalwelfarelibrary.org
– selected blog post: powerful quotes I keep contemplating: citrini.blogspot.com/2023/01/powerful-quotes-i-keep-contemplating.html
I occasionally post things I find interesting and important on Facebook and the EA Forum.
Get in touch: eleos.citrini@gmail.com :)
– – – – – – – – – –
my key intellectual interests (present ± a few years):
I have accumulated hundreds of intellectual interests over the last few years. The issues I have been most excited about recently (with my familiarity ranging from slight to moderate) and/or would most like to investigate in the next few years are the following, grouped into three broad clusters:
– the philosophy, politics, economics, and science of (emerging) technologies and the long-term future of Earth-originating life:
– s-risks and x-risks
– governance incl. ethics of AI, AI value alignment, and technical AI safety
– game theory of cooperation and conflict in the context of AI
– non-human sentience & sapience and moral circle expansion
– governance incl. ethics of biotechnologies, esp. transhumanism
– governance incl. ethics of outer space, esp. space colonisation
– cluelessness about and forecasting the long-term future
– the intersection of science fiction, technology, natural and social sciences, and philosophy
– futurology, progress studies, and macrostrategy
– longtermism(s) in theory and practice
– the philosophy, politics, economics, and science of belief formation, identity, and decision-making:
– decision theory and game theory
– decision-theoretic fanaticism, risk aversion, and bounded rationality
– formal epistemology and Bayesianism
– social epistemology, communication, and cognitive biases
– institutional decision-making, international relations, and global governance
– incentive structures, collective action problems, and complexity science
– egoism & altruism and dark tetrad traits, esp. re leadership
– the intersection of evolutionary psychology, moral psychology, and moral epistemology
– moral agency and moral patiency in humans and non-humans
– philosophy and psychology of self and human nature
– (more) topics in moral philosophy:
– ethical issues in effective altruism and global priorities research
– moral uncertainty and value theory
– animal ethics
– suffering-focused ethics
– population ethics and ethics of the future
– risk ethics
– consequentialist alternatives to utilitarianism
– scope-sensitive alternatives to consequentialism
– eudaimonia, enkrateia, and arete
– metaethics
I would gladly discuss and share my thoughts on
– philosophical and political aspects of EA, esp. global priorities research and AI governance
– where EA might be going as well as where it should be going
– EA lifestyle(s)
– criticism of EA
Given my plans for the coming months and years, I'm looking for
– connections with (more) people interested in either s-risks or AI governance or both
– a more concrete idea of how (and with whom) I could co-pioneer and co-develop an AI governance subfield focused on s-risks (and which pitfalls to avoid)
– a more concrete idea of which EA-related (career) goals to pursue (with whom and where) during and especially immediately after my master’s

Cause Areas
- Other existential risks
- Philosophy
- s-risks
- Government and policy
- Global coordination & peace-building
- Farmed animal welfare
- Long-term future
- Wild animal welfare
- AI strategy & policy
- Emerging Technologies
- Research
- Global priorities research

Career
- Long-term future
- Research
- Government and policy
- Global priorities research
- Philosophy
- AI strategy & policy
- Technology and security

Community
Academia