
Launch your career in AI alignment & security
The MATS Program is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, governance, and security. For 12 weeks, MATS scholars conduct research while also attending talks, workshops, and networking events with other members of the Berkeley alignment research community.
MATS Winter 2026 will run from January 5 to March 28 in Berkeley, CA. Applications are due October 2* at 11:59pm Anywhere on Earth (AoE).
*Neel Nanda’s application window opens and closes significantly earlier than the others’ due to his unique exploration phase. Neel’s stream will continue to accept late applications until Friday, September 12.
If you have questions about the MATS applications process, please email applications [ at ] matsprogram.org
Logos indicate affiliations of MATS mentors
Scholars receive:
Mentorship from world-class researchers in the fields of AI alignment, governance, and security
A dedicated research manager who offers support and keeps the research on track
Seminars, workshops, and other events with guest speakers
A stipend of $14.4k generously provided by AI Safety Support, an Australian nonprofit
A $12k budget for compute
Travel to and from Berkeley, CA
Office space in Berkeley, CA and the community that comes from talking to other scholars
Catered meals
Housing in a private room
Networking opportunities with the broader AI alignment community
The chance to join the extension program and continue their research for 6-12 additional months in London, UK
Testimonials
“Apollo almost certainly would not have happened without MATS.”
Marius Hobbhahn
Apollo Research CEO
Named to TIME’s 100 Most Influential People in AI 2025
MATS Winter 2022-2023 Alumnus
MATS Winter 2026 Mentor
“There’s life pre-MATS and life post-MATS. It was the inflection point that set me up to become a technical AI safety researcher.”
Jesse Hoogland
Executive Director, Timaeus
MATS Winter 2022-2023 Alumnus
“Looking back at me before [MATS], I don’t think I could even recognize myself 8 months ago.”
Quentin Feuillade-Montixi
Co-founder and CTO, PRISM Evals
MATS Winter 2022-2023 Alumnus
Program Highlights
Week 1: Orientation. Immediately dive into research.
Week 5: Scholars submit a Scholar Research Plan. SRPs are graded on scholars’ threat model, impact mechanism, reasoning transparency, and risk analysis. Scores influence which scholars are admitted to the extension program.
Week 12: The program culminates with a Research Symposium. All scholars submit a poster, and select scholars will give a spotlight talk. See the posters from the most recent cohort here.
Scholar outcomes
Since its inception in late 2021, the MATS program has supported 357 scholars and 75 mentors
Research
In the past 3.5 years, we have helped produce 115 research publications with over 5100 collective citations; our organizational h-index is 31.
Scholars have helped develop new AI alignment agendas, including activation engineering, externalized reasoning oversight, conditioning predictive models, developmental interpretability, evaluating situational awareness, formalizing natural abstractions, and gradient routing.
Careers
Alumni have been hired by leading organizations like Anthropic, Google DeepMind, OpenAI, Meta AI, UK AISI, Redwood Research, METR, RAND TASP, Open Philanthropy, ARC, FAR.AI, Apollo Research, Truthful AI, LawZero, MIRI, CAIF, CLR, Beneficial AI Foundation, SaferAI, Haize Labs, EleutherAI, Harmony Intelligence, Conjecture, Magic, and the US government, and have joined academic research groups like UC Berkeley CHAI, NYU ARG, KASL, MILA, and the MIT Tegmark Group.
80% of our alumni are now working in AI alignment.
Founders
About 10% of our alumni have co-founded AI safety organizations or research teams during or after MATS.
MATS-founded organizations include Aether, Apollo Research, Athena, Atla, Cadenza Labs, Catalyze Impact, Center for AI Policy, Contramont Research, Coordinal Research, Freestyle Research, Leap Labs, LISA, Live Theory, Luthien Research, Poseidon Research, PRISM Eval, Simplex, Timaeus, Theorem Labs, and Workshop Labs.
Our Mission
MATS aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence (AI). We believe that ambitious researchers from a variety of backgrounds can meaningfully contribute to the field of alignment research, and we aim to provide the training, logistics, and community necessary to aid this transition. We also connect our scholars with funding to ensure their financial security. Please see this post for more details.
MATS Research is an independent 501(c)(3) public charity (EIN: 99-0648563).