The Autumn cohort will connect fellows with mentors from Anthropic, OpenAI, Google DeepMind, Redwood Research, ARC, and more on research and field-building efforts to reduce frontier AI risk. MATS provides fellows with support in the form of funding, compute, housing & meals, research management, and a driven community, so fellows can focus fully on doing impactful work. Applications close June 7th.
MATS supports researchers and field-builders across seven tracks: Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Founding & Field-Building, and Biosecurity. You can apply to one or more of these tracks in the stage 1 application.
The general application should take around 1 to 2 hours depending on how many tracks you apply to. It asks for:
Returning applicants from Summer 2026 can sign in to pre-fill their application.
For more on what we and mentors look for, see getting into MATS.
Applicants who pass stage 1 evaluations advance to stage 2, which uses standardized assessments to evaluate the skills most predictive of success for certain tracks.
Each track contains multiple streams, each led by one or more mentors with a shared research agenda. If you advance, you'll apply to and rank the specific streams you'd like to join. For each, you'll:
In the last stage:
Offers go out in late July, and the program starts September 28th.
The MATS Program is a 10 to 12-week research fellowship designed to train and support emerging researchers and field-builders working on AI alignment, interpretability, governance, and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley or London focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading experts from a broad range of AI safety, alignment, governance, interpretability, and security domains. They include academics, industry researchers, and independent experts who guide fellows through research projects, provide feedback, and help shape each fellow's growth as a researcher. The mentors represent expertise in areas such as:
The main program will run from September 28th to December 4th in Berkeley, CA and London, UK, with an extension phase for accepted fellows beginning in December. This extension phase is run primarily from our London hub.
MATS accepts applicants from diverse academic and professional backgrounds ranging from machine learning, mathematics, and computer science to policy, economics, physics, and cognitive science. The primary requirements are a strong motivation to contribute to AI safety and evidence of technical aptitude or research potential. Prior AI safety experience is helpful but not required.
Applicants submit a general application, applying to various tracks (Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Founding & Field-Building, and Biosecurity) and streams within those tracks.
After a centralized review period, applicants who advance undergo additional evaluations, depending on the preferences of the streams they've applied to, before completing final interviews and receiving offers.