Early applications for MATS Summer 2025 are now open!
The early application round is intended for individuals who have a competing offer or anticipate that they would be unable to participate in the MATS Summer 2025 program without an early decision.
Regular application decisions are expected to be released in late March. If you believe you cannot commit to the program unless you receive an earlier decision, we encourage you to apply through the early application process. Otherwise, we recommend waiting for the regular application round, which opens in January.
Application process
Steps in the application process
Create an application account. You’ll use this to access all application materials.
Submit a general MATS application. This is required for all streams. Due November 24th for most streams.
Submit applications to the MATS streams you want to work with. You can and should apply to all of the MATS streams that interest you! See a list of the streams and their applications below. Stream-specific applications are due November 24th.
Complete additional evaluations. Depending on the streams you apply to, you may be required to complete a coding screen, interviews, or other evaluations after submitting your application.
Tips for applying
Make sure to check your spam folder for emails! You may wish to set up a filter for emails from applications@matsprogram.org to ensure you don’t miss anything.
Submit your application materials early. In the past, some applicants have had technical problems in the hour leading up to the application deadline.
Summer 2025 Early Streams
Team Shard (Alex Turner)
In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation. If you're theory-minded, maybe you'll help us formalize shard theory itself.
Read more about this mentor and apply here.
Buck Shlegeris' Stream
Buck is investigating control evaluations for AI, including adversarial training to detect and prevent malicious behaviors, and exploring techniques to ensure AI safety through effective supervision and oversight.
Read more about this mentor and apply here.
Eli Lifland and Daniel Kokotajlo's Stream
Eli Lifland will serve as the primary mentor for this stream, with Daniel Kokotajlo co-mentoring.
We are interested in mentoring projects in AI forecasting and governance. We are currently working on a detailed mainline scenario forecast, and this work would build on that to either do more scenario forecasting or explore how to positively affect key decision points, informed by our scenario.
Read more about these mentors and apply here.
David Lindner's Stream
David’s stream will focus on detecting and preventing harm from deceptive alignment (mostly scheming) via evaluations, control, and red teaming.
Read more about this mentor and apply here.
Fabien Roger's Stream
Fabien Roger is an AI safety researcher at Anthropic and previously worked at Redwood Research. Fabien’s research focuses on AI control and preventing sandbagging.
Read more about this mentor and apply here.