The MATS Program is an independent research program that connects talented fellows with top AI alignment mentors in interpretability, governance, and security. For 12 weeks, MATS fellows conduct research while also attending talks, workshops, and networking events with other members of the alignment research community. Neel Nanda’s stream has a separate application that closes on January 2nd and can be found here.
MATS supports researchers across a variety of research tracks: technical governance, empirical, policy & strategy, theory, and compute governance. You can specify which tracks and streams you'd like to apply to in the general application.
All applicants submit the general application here. We estimate that it will take 1-2 hours to complete. You'll be asked to:
Applicants complete additional evaluations in later stages of the process, depending on which tracks and streams they apply to.
The MATS Program is a 12-week research fellowship designed to train and support emerging researchers working on AI alignment, interpretability, governance, and security. Scholars collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, interpretability, and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar's growth as a researcher.
Key dates
Application:
The main program will then run from early June to late August in Berkeley, CA and London, UK, with the extension phase for accepted fellows beginning in September, based primarily out of our London hub.
MATS accepts applicants from diverse academic and professional backgrounds ranging from machine learning, mathematics, and computer science to policy, economics, physics, and cognitive science. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude or research potential. Prior AI safety experience is helpful but not required.
Applicants submit a general application, applying to various tracks (technical governance, empirical, policy & strategy, theory, and compute governance) and streams within those tracks.
After a centralized review period, applicants who advance undergo additional evaluations, which vary according to the preferences of the streams they've applied to, before completing final interviews and receiving offers.