MATS empowers researchers to advance AI alignment
The ML Alignment & Theory Scholars (MATS) Program is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. For 10 weeks, MATS scholars will conduct research while also attending talks, workshops, and networking events with other members of the Berkeley alignment research community.
Applications for MATS Summer 2025 are now open! Unless otherwise indicated, the deadline for all stream applications is April 18. The Summer 2025 program will run June 16-August 22.
If you have questions about the MATS application process, please email applications [at] matsprogram.org.
Our Mission
MATS aims to find and train talented individuals for what we see as the world's most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence (AI). We believe that ambitious researchers from a variety of backgrounds have the potential to meaningfully contribute to the field of alignment research, and we aim to provide the training, logistics, and community necessary to aid their transition into the field. We also connect our scholars with funding to ensure their financial security during the program. Please see this post for more details.
MATS Research is an independent 501(c)(3) public charity (EIN: 99-0648563).
Program Details
MATS connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. Read more about the program timeline and content in our program overview.
The Summer 2025 Cohort will run June 16-August 22 in Berkeley, California, and feature seminar talks from leading AI alignment researchers, workshops on research strategy, and networking events with the Bay Area AI alignment community.
MATS has historically received funding from Open Philanthropy, the Survival and Flourishing Fund, DALHAP Investments, Craig Falls, the Foresight Institute, and several donors via Manifund. We are accepting donations to support future research scholars.
Since its inception in late 2021, the MATS program has supported 298 scholars and 75 mentors. After completing the program, MATS alumni have:
Been hired by leading organizations like Anthropic, OpenAI, Google DeepMind, MIRI, ARC, Conjecture, the UK Frontier AI Taskforce, and the US government, and joined academic research groups like UC Berkeley CHAI, NYU ARG, and MIT Tegmark Group;
Co-founded AI safety organizations, including Apollo Research, Athena, Cadenza Labs, the Center for AI Policy, Leap Labs, Simplex, and Timaeus; and
Pursued independent research with funding from the Long-Term Future Fund, Open Philanthropy, Lightspeed Grants, Manifund, and the Foresight Institute.
Our ideal applicant has:
An understanding of the AI alignment research landscape equivalent to completing the AI Safety Fundamentals Alignment Course;
Previous experience with technical research (e.g. ML, CS, maths, physics, neuroscience, etc.) or governance research, ideally at a postgraduate level; or
Previous policy research experience or a background conducive to AI governance (e.g. government positions, technical background, strong writing skills, AI forecasting knowledge, completed AISF Governance Course); and
Strong motivation to pursue a career in AI alignment research, particularly to reduce global catastrophic risk, prevent human disempowerment, and promote sentient flourishing.
Even if you do not fully meet these criteria, we encourage you to apply! Several past scholars were accepted despite not expecting to be.
We are currently unable to accept applicants who will be under the age of 18 on June 16, 2025.
All MATS applicants must complete our pre-application form here. Some mentors have additional application requirements.
All stream-specific applications, as well as the pre-application, are due on the same day unless indicated otherwise on the application form.
Prospective scholars may apply to as many mentors as they are interested in working with. If a scholar is accepted by multiple mentors, we generally let the scholar choose whom to work with.