Our Foci
Governance, Safety, and Strategy
Our Foundations of AI Safety certification course introduces the governance and strategy landscape around advanced AI: how institutions, incentives, and policy shape risk, and which levers might actually reduce catastrophic outcomes.
Technical Alignment
We dive deep into technical approaches to alignment, including interpretability, robustness, evaluations, and scalable oversight. Our goal is to model concrete threat scenarios in detail and build the practical competence needed to make AI systems safe and reliable.
X-Risk and Long-Term Impact
We explore the long-run trajectory of technology and the potential catastrophic risks posed by advanced AI. Drawing on science fiction, philosophy, and rigorous research, we aim to understand the scale of the challenge and the urgency of meaningful action.