MATS empowers researchers to advance AI safety
The ML Alignment & Theory Scholars (MATS) Program is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. For 10 weeks, MATS scholars will conduct research while also attending talks, workshops, and networking events with other members of the Berkeley alignment research community.
The Winter 2024-25 Program will run Jan 6-Mar 14, 2025 and the Summer 2025 program will run Jun 9-Aug 15, 2025.
MATS is now accepting early applications for the Summer 2025 program. Prospective scholars should apply in the early round only if they have strong reason to believe they would otherwise be unable to participate in MATS Summer 2025; we expect most applicants to wait until the regular application round opens in January 2025.
The early application period is open and closes on November 24, 2024, at 11:59 pm PT. We expect to release decisions by early-to-mid December and will share more information about the early application round soon.
If you have questions about the MATS applications process, please email applications [ at ] matsprogram.org
Our Mission
MATS aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence (AI). We believe that ambitious researchers from a variety of backgrounds have the potential to meaningfully contribute to the field of alignment research, and we aim to provide the training, logistics, and community necessary to aid that transition. We also connect our scholars with funding to ensure their financial security. Please see this post for more details.
MATS Research is an independent 501(c)(3) public charity (EIN: 99-0648563).
Program Details
MATS is an independent research program and educational seminar that connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. Read more about the program timeline and content in our program overview.
The Winter 2024-25 Cohort will run Jan 6-Mar 14 in Berkeley, California and feature seminar talks from leading AI safety researchers, workshops on research strategy, and networking events with the Bay Area AI safety community.
The MATS program is an independent 501(c)(3) public charity (EIN: 99-0648563). We have historically received funding from Open Philanthropy, the Survival and Flourishing Fund, DALHAP Investments, Craig Falls, Foresight Institute, and several donors via Manifund. We are accepting donations to support additional research scholars.
Since its inception in late 2021, the MATS program has supported 344 scholars and 69 mentors. After completing the program, MATS alumni have:
Been hired by leading organizations like Anthropic, OpenAI, Google DeepMind, MIRI, ARC, Conjecture, the UK Frontier AI Taskforce, and the US government, and joined academic research groups like UC Berkeley CHAI, NYU ARG, and MIT Tegmark Group;
Founded AI safety organizations, including Apollo Research, Athena, Cadenza Labs, the Center for AI Policy, Leap Labs, and Timaeus; and
Pursued independent research with funding from the Long-Term Future Fund, Open Philanthropy, Lightspeed Grants, Manifund, and the Foresight Institute.
Our ideal applicant has:
An understanding of the AI alignment research landscape equivalent to completing the AI Safety Fundamentals Alignment Course;
Previous experience with technical research (e.g. ML, CS, maths, physics, neuroscience, etc.) or governance research, ideally at a postgraduate level; or
Previous policy research experience or a background conducive to AI governance (e.g. government positions, technical background, strong writing skills, AI forecasting knowledge, completed AISF Governance Course); and
Strong motivation to pursue a career in AI alignment research, particularly to reduce global catastrophic risk, prevent human disempowerment, and enable sentient flourishing.
Even if you do not entirely meet these criteria, we encourage you to apply! Several past scholars applied without strong expectations and were accepted.
We are currently unable to accept applicants who will be under the age of 18 on Jan 6, 2025.
All MATS applicants must complete our general application form here. Specific mentors will have additional application requirements.
All stream-specific applications, as well as the general application, are due on Oct 6 at 11:59 pm PT unless indicated otherwise on the application form.
Prospective scholars may apply to as many mentors as they would be interested in working with. If a scholar is accepted by multiple mentors, we generally let the scholar choose whom to work with.
Applications to work with mentors Neel Nanda and Arthur Conmy are now closed. Their deadline is significantly earlier to accommodate their training phase, which will take place in October.
Program tracks for Winter 2024-25
Oversight & Control
Jan Leike (Anthropic), Ethan Perez (Anthropic), Buck Shlegeris (Redwood Research), David Lindner (Google DeepMind), Erik Jenner (UC Berkeley CHAI), Fabien Roger (Redwood Research), Mrinank Sharma (Anthropic), Newton Cheng (Anthropic), Samuel Albanie (Google DeepMind), Scott Emmons (Google DeepMind)
As models develop potentially dangerous behaviors, can we develop and evaluate methods to monitor and regulate AI systems, ensuring they adhere to desired behaviors while minimally undermining their efficiency or performance?
Interpretability
Neel Nanda (Google DeepMind), Adrià Garriga Alonso (FAR AI), Arthur Conmy (Google DeepMind), Lee Sharkey (Apollo Research), Nina Panickssery (Anthropic)
Rigorously understanding how ML models function may allow us to identify and train against misalignment. Can we reverse engineer neural nets from their weights, or identify structures corresponding to “goals” or dangerous capabilities within a model and surgically alter them?
Evaluations
Evan Hubinger (Anthropic), Marius Hobbhahn (Apollo Research), Oliver Sourbut & Sid Black (UK AISI), Owain Evans (UC Berkeley CHAI), Stephen Casper (MIT AAG), Steven Basart (CAIS)
Many stories of AI accident and misuse involve potentially dangerous capabilities, such as sophisticated deception and situational awareness, that have not yet been demonstrated in AI. Can we evaluate such capabilities in existing AI systems to form a foundation for policy and further technical work?
Governance & Strategy
Daniel Kokotajlo & Eli Lifland (AI Futures Project), David Krueger (Cambridge KASL), Janet Egan (CNAS), Lisa Thiergart (MIRI), Timothy Fist (IFP, CNAS)
As AI systems grow more capable, can we develop policies, standards, and frameworks to guide the ethical development, deployment, and regulation of AI technologies, ensuring safety and societal benefit?
Agency
Alex Turner (Google DeepMind), Alex Cloud, Andi Peng (Anthropic), Andreea Bobu (MIT CSAIL), Max Kleiman-Weiner (UW), Michael Cohen (UC Berkeley CHAI)
As models continue to scale, they become more agentic, and we need methods to study this emerging agency. How can we model optimal agents, understand how those agents interact with each other, and determine how agents can be aligned with one another?