Streams in this track center on hands-on research: running machine learning experiments to understand and improve the safety properties of frontier models, with work spanning interpretability, AI control, scalable oversight, evaluations, red-teaming, robustness, and model organisms of misalignment. The track is defined by its methods rather than any single research agenda; the unifying thread is that progress comes from getting hands on real models (training, probing, fine-tuning, measuring) rather than reasoning from first principles alone. This is the largest track in the program and the most common entry point into technical AI safety research. If your primary tool is ML engineering, this is your track.
We are looking for fellows whose primary tool is ML engineering, broadly construed. The essential requirement is the ability to design and run experiments on language models or other deep learning systems and iterate quickly on the results. In practice that usually means strong Python (with and without AI coding tools), comfort with the infrastructure around running models at moderate scale, and enough research taste to know which experiments are worth running. Mission alignment matters: fellows should be able to say why a given line of empirical work meaningfully reduces frontier risk, not just whether it yields a successful publication. Educational background and seniority are weighted lightly here relative to other tracks. Past cohorts have included strong fellows ranging from undergraduates to senior industry researchers.
Fellows are matched to mentors based on fit, and projects are scoped to produce concrete artifacts by program end: papers, evaluation suites, open-source tooling, or technical reports. Target audiences include safety and alignment teams at frontier labs, governments and other evaluation organizations, and the broader ML research community.
This coalition of mentors makes up the “Anthropic Stream”. The stream spans a range of empirical research areas in AI safety on LLMs, including AI control, scalable oversight, model organisms, model internals, model welfare, security, and more. You’ll be pitched, and have the option to pitch, a variety of safety research projects, and then be matched to projects and mentors based on your research interests and preferences and what you’d like to get out of MATS. Fellows in this stream frequently receive funding and continued mentorship after MATS to complete their research project, usually leading to a (co-)first-author paper. People in this stream often end up in long-term homes for safety research after MATS (e.g. Anthropic, Redwood Research, OpenAI).
Anthropic mentors share an application, tend to collaborate and co-mentor projects, and generally share infrastructure to streamline the fellow experience. By applying to this stream, you are considered for all of the Anthropic mentors.
During the program, scholars meet weekly with their project mentors and collaborators. Some projects meet more often without mentors (e.g., daily standups with the peers on the project). Each project will have a primary mentor, who is also the main decision-maker on key milestones for the project and who is the default person to go to for feedback, advice, etc. Co-mentors also attend project meetings as needed and provide feedback throughout the program. Some project co-mentors can be as involved as the primary mentor.
Mentorship starts with the “Project Pitch Session” Anthropic runs at the start of the program. Fellows get ~1 week to derisk and trial projects before submitting their preferences. Starting in week 2, scholars are assigned to projects, with the mentor who pitched each project serving as its primary mentor. Some projects are also assigned co-mentors: other supervisors who want to join the project.
This stream focuses on building realistic defensive cybersecurity benchmarks using data from Asymmetric Security's work on real-world incidents.
One-hour weekly meetings by default for high-level guidance. We will respond to async communication within a day.
Essential:
Preferred:
We will assign the project direction; scholars will have significant tactical freedom.
I have two broad areas of interest.
Security:
I am interested in building demonstrations of hacking real-world AI deployments to show that they are not secure. The goal is to force companies to invest in alignment techniques that can solve the underlying security issues.
Benchmarks:
I am interested in building benchmarks to determine how generalizable modern LLM techniques actually are, now that we are no longer in the pre-training scaling era.
I will meet 1-1 or as a group, depending on how interests map onto the projects, with Slack communication outside of meetings.
I strongly prefer multiple short meetings over single long meetings, except at the start.
I'll help with research obstacles, including outside of meetings.
For security:
You should have a strong security mindset and a demonstrated willingness to be creative. I would like to see past evidence of your willingness to get your hands dirty and try many different systems.
For benchmarks:
Be as creative as possible, with a willingness to work on the nitty-gritty and to work really hard on problems other people find boring. Ideally, your interests are as far away from SF-related interests as possible.
Mentor(s) will talk through project ideas with the scholar.
This stream will focus on monitoring, stress-testing safety methods, and evals, with a focus on risks from scheming AIs. Examples include (black-box) AI control techniques, white-box monitors (probes etc.), chain-of-thought monitoring/faithfulness, building evaluation environments, and stress-testing mitigations.
For each project, we will have a weekly meeting to discuss the overall project direction and prioritize next steps for the upcoming week. On a day-to-day basis, you will discuss experiments and write code with other mentees on the project (though I'm available on Slack for quick feedback between meetings or to address things that are blocking you).
I structure the program around collaborative, team-based research projects. You will work in a small team on a project from a predefined list. I organize the 12-week program into fast-paced research sprints designed to build and maintain research velocity, so you should expect regular deadlines and milestones. I will provide a more detailed schedule and set of milestones at the beginning of the program.
I am looking for fellows with strong machine learning engineering skills, as well as a background in technical research. While I’ll provide weekly guidance on research, I expect fellows to be able to run experiments and decide on low-level details fairly independently most of the time. I’ll propose concrete projects to choose from, so you should not expect to work on your own research idea during MATS. I strongly encourage collaboration within the stream, so you should expect to work in a team of 2-3 fellows on a project; good communication and teamwork skills are therefore important.
We will most likely have a joint project selection phase with the other GDM mentors, where we present a list of projects (with the option for fellows to iterate on them). Afterward, each project will have at least one main mentor, but we might also co-mentor some projects.
GDM stream focused on scheming risk, AI control, monitoring, monitorability, and loss-of-control evaluations. Probably running in-person in London.
I'm generally quite hands-off. I propose projects that I think matter for AGI safety and are tractable, to set scholars up for success. I then expect scholars to fully own the project, and update / consult me as needed.
By default we'd meet once a week for 30 minutes to 1 hour to discuss the project. I see my role as giving feedback on the project direction, stress-testing or advising on design / prioritisation decisions, and occasionally suggesting experiments or methodological improvements (which you should treat as suggestions / advice, not orders!).
You can also book ad-hoc meetings with me, ping me on Slack, or send me docs / paper drafts for review.
I also offer to meet with scholars once a month for 30 minutes to discuss careers, skill-building, feedback on their progress, or anything else.
Preferred technical skills:
I'll propose ~5 projects for scholars to choose from. I am also open to scholar-proposed projects if they are well articulated, promising, and align with my research interests.
The Redwood Research stream is looking for fast empirical iterators and strategists to work on control research.
Depending on the mentor:
We are looking for people who are:
We will assign projects by default but are open to getting pitched on projects.
This stream will focus on monitoring, stress-testing safety methods, and evals, with a focus on risks from scheming AIs. Examples include (black-box) AI control techniques, white-box monitors (probes etc.), chain-of-thought monitoring/faithfulness, building evaluation environments, and stress-testing mitigations.
We are looking for scholars with strong machine learning engineering skills, as well as a background in technical research. While we’ll provide weekly guidance on research, we expect scholars to be able to run experiments and decide on low-level details fairly independently most of the time. We’ll propose concrete projects to choose from, so you should not expect to work on your own research idea during MATS. We strongly encourage collaboration within the stream, so you should expect to work in a team of 2-3 scholars on a project; good communication and teamwork skills are therefore important.
In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation to robust unlearning. If you're theory-minded, maybe you'll help us formalize shard theory itself.
We will have weekly 1-1s and a weekly team lunch, as well as asynchronous communication over Slack. Mentees are always welcome to reach out at any time if guidance is needed outside of usual meeting times.
Scholars should mostly figure things out on their own outside of meetings.
Ideal candidates would have:
Mentor(s) will talk through project ideas with the scholar.
Research on deceptive alignment: designing scheming-propensity evaluations and honeypots.
The GDM stream will propose and distribute projects.
The MATS Program is a 10-week research fellowship designed to train and support emerging researchers working on AI alignment, transparency and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, field-building and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar’s growth as a researcher. The mentors represent expertise in areas such as:
Key dates
Application:
The main program will then run from September 28th to December 4th, with the extension phase for accepted fellows beginning in December.
MATS accepts applicants from diverse academic and professional backgrounds - from machine learning, mathematics, and computer science to policy, economics, physics, cognitive science, biology, and public health, as well as founders, operators, and field-builders without traditional research backgrounds. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude, research potential, or relevant operational experience. Prior AI safety experience is helpful but not required.
Applicants submit a general application, applying to various tracks (Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Biosecurity, Founding & Field-Building).
In stage 2, applicants apply to streams within those tracks and complete track-specific evaluations.
After a centralized review period, applicants who advance will undergo additional evaluations, depending on the preferences of the streams they've applied to, before final interviews and offers.
For more information on how to get into MATS, please look at this page.