Applications for all Summer 2025 streams are now open!
Applications due April 18
Application process
Steps in the application process
Create an application account. You’ll use this to access all application materials.
Submit a MATS pre-application by April 18. This is required for all streams.
Submit applications to the MATS stream(s) you want to work with. You must submit at least one stream-specific application for your MATS application to be considered. You can and should apply to all of the MATS streams that interest you! Most stream applications will be due April 18. See a list of the streams and their applications below.
Complete additional evaluations. Depending on the streams you apply to, you may be asked to complete a coding screen, interviews, or other evaluations after submitting your application. This process is not standardized across streams, so not being contacted for an interview does not necessarily mean your application is out of consideration.
Tips for applying
Make sure to check your spam folder! You may wish to set up a filter for emails from applications@matsprogram.org to ensure you don’t miss any messages.
Submit your application materials early. In the past, some applicants have had technical problems in the hour leading up to the application deadline. Additionally, applications are reviewed on a rolling basis.
Mentors will primarily evaluate candidates based on their own stream-specific applications, though all mentors will have access to materials submitted to other streams.
Summer 2025 Tracks
To decide which mentor(s) to apply to, you can filter the list below by track and research interest. We recommend reading through each stream’s proposed research projects and mentorship style to assess personal fit.
Click on each track title to read a brief description.
-
As models develop potentially dangerous behaviors, can we develop and evaluate methods to monitor and regulate AI systems, ensuring they adhere to desired behaviors while minimally undermining their efficiency or performance?
-
Many stories of AI accident and misuse involve potentially dangerous capabilities, such as sophisticated deception and situational awareness, that have not yet been demonstrated in AI. Can we evaluate such capabilities in existing AI systems to form a foundation for policy and further technical work?
-
Rigorously understanding how ML models function may allow us to identify and train against misalignment. Can we reverse engineer neural nets from their weights, or identify structures corresponding to “goals” or dangerous capabilities within a model and surgically alter them?
-
As AI systems continue to advance and develop even stronger capabilities, can we develop policies, standards, and frameworks to guide the ethical development, deployment, and regulation of AI technologies, focusing on ensuring safety and societal benefit?
-
As models continue to scale, they become more agentic, and we need methods to study this newfound agency. How do we model optimal agents, how those agents interact with each other, and how agents can be aligned with one another?
-
As AI systems become more capable, they will become higher-value targets for theft, and more able to undermine cybersecurity protocols. How do we ensure that the weights of valuable ML models remain under the control of developers, and that AI improves rather than degrades the state of cybersecurity?