Empirical

Streams in this track involve hands-on research using machine learning experiments to understand and improve model safety, including AI control, interpretability, scalable oversight, evaluations, red-teaming, and robustness. This is the largest track in the program, and it is defined by its methods rather than by any single research agenda. If your primary tool is ML engineering, this is your track.

Application process

  • Initial application: No track-specific questions.
  • Stage 2: Complete 1–2 assessments evaluating research taste and technical implementation skills.
  • Stream applications & follow-up: Apply to individual streams; follow-up includes interviews or additional assessments depending on the stream.

Empirical track overview

The track is defined by its methodology more than by any single research agenda. Fellows run ML experiments to understand and improve the safety properties of frontier models, with work spanning interpretability, AI control, scalable oversight, evaluations, red-teaming, robustness, and model organisms of misalignment. The unifying thread is that progress comes from getting your hands on real models (training, probing, fine-tuning, measuring) rather than reasoning from first principles alone. This is the largest track in the program and the most common entry point into technical AI safety research.

We are looking for fellows whose primary tool is ML engineering, broadly construed. The essential requirement is the ability to design and run experiments on language models or other deep learning systems and iterate quickly on the results. In practice that usually means strong Python (with and without AI coding tools), comfort with the infrastructure around running models at moderate scale, and enough research taste to know which experiments are worth running. Mission alignment matters: fellows should be able to say why a given line of empirical work meaningfully reduces frontier risk, not just whether it yields a successful publication. Educational background and seniority are weighted lightly here relative to other tracks. Past cohorts have included strong fellows ranging from undergraduates to senior industry researchers.

Fellows are matched to mentors based on fit, and projects are scoped to produce concrete artifacts by program end: papers, evaluation suites, open-source tooling, or technical reports. Target audiences include safety and alignment teams at frontier labs, governments and other evaluation organizations, and the broader ML research community.

Empirical track streams

Anthropic

This coalition of mentors makes up the “Anthropic Stream”. The stream spans a range of empirical AI safety research areas on LLMs, including AI control, scalable oversight, model organisms, model internals, model welfare, security, and more. You’ll be pitched, and have the option to pitch, a variety of safety research projects, and you’ll then be matched to projects and mentors based on your research interests and what you’d like to get out of MATS. Fellows in this stream frequently receive funding and continued mentorship after MATS to complete their research project, usually leading to a (co-)first-author paper. Fellows in this stream often end up in long-term homes for safety research after MATS (e.g. Anthropic, Redwood Research, OpenAI).

Anthropic mentors share an application, often collaborate and co-mentor projects, and generally share infrastructure to streamline the fellow experience. By applying to this stream, you are being considered for all of the Anthropic mentors.


Asymmetric Security

This stream focuses on building realistic defensive cybersecurity benchmarks using data from Asymmetric Security's work on real-world incidents.

Empirical

My work falls into two broad areas.

Security:

I am interested in building demonstrations of hacking real-world AI deployments to show that they are not secure. The goal is to push companies to invest in alignment techniques that can solve the underlying security issues.

Benchmarks:

I am interested in building benchmarks to determine how generalizable modern LLM techniques actually are, now that we are no longer in the pre-training scaling era.
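As a loose illustration of the kind of measurement a generalization benchmark involves (everything here is a hypothetical sketch: the `Task` type, the toy model, and the category names are invented for illustration, not an existing benchmark), a minimal harness might score a model separately on familiar and distribution-shifted phrasings of the same tasks:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    answer: str
    category: str  # e.g. "in_distribution" or "shifted"

def evaluate(model: Callable[[str], str], tasks: list[Task]) -> dict[str, float]:
    """Return per-category exact-match accuracy."""
    totals: dict[str, int] = {}
    correct: dict[str, int] = {}
    for t in tasks:
        totals[t.category] = totals.get(t.category, 0) + 1
        if model(t.prompt).strip() == t.answer:
            correct[t.category] = correct.get(t.category, 0) + 1
    return {c: correct.get(c, 0) / n for c, n in totals.items()}

# Toy "model" that only handles addition phrased one specific way,
# standing in for a technique that fails to generalize.
def toy_model(prompt: str) -> str:
    if prompt.startswith("What is "):
        a, b = prompt[len("What is "):].rstrip("?").split(" plus ")
        return str(int(a) + int(b))
    return "unknown"

tasks = [
    Task("What is 2 plus 3?", "5", "in_distribution"),
    Task("What is 10 plus 4?", "14", "in_distribution"),
    Task("Compute 2 + 3.", "5", "shifted"),
    Task("Compute 7 + 1.", "8", "shifted"),
]
scores = evaluate(toy_model, tasks)
print(scores)
```

The gap between the two categories is the quantity of interest: a technique that only works on in-distribution phrasings shows a large accuracy drop under shift.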

Empirical

This stream will focus on monitoring, stress-testing safety methods, and evals, with a focus on risks from scheming AIs. Examples include (black-box) AI control techniques, white-box monitors (probes etc.), chain-of-thought monitoring/faithfulness, building evaluation environments, and stress-testing mitigations.
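For readers unfamiliar with white-box monitors, the following is a minimal sketch of a linear probe. Everything here is synthetic and hypothetical: the "activations" are fabricated with a planted direction standing in for a real model's residual-stream activations, and the monitor is plain logistic regression trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: activation vectors labeled by some behavior of
# interest. We fabricate them with a known planted direction so the
# probe has a real signal to find.
d_model = 64
n = 500
direction = rng.normal(size=d_model)
direction /= np.linalg.norm(direction)

labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d_model)) + 4.0 * labels[:, None] * direction

# Train a logistic-regression probe with full-batch gradient descent.
w = np.zeros(d_model)
b = 0.0
lr = 0.1
for _ in range(200):
    logits = np.clip(acts @ w + b, -30, 30)  # clip to avoid exp overflow
    p = 1 / (1 + np.exp(-logits))
    w -= lr * (acts.T @ (p - labels)) / n
    b -= lr * np.mean(p - labels)

preds = (acts @ w + b) > 0
accuracy = np.mean(preds == labels)
print(f"probe accuracy: {accuracy:.2f}")
```

In real monitoring work the probe would be trained on activations extracted from a frozen model and validated on held-out data; this sketch only shows the mechanical core.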

Google DeepMind (GDM)

This GDM stream focuses on scheming risk, AI control, monitoring, monitorability, and loss-of-control evaluations. It will probably run in person in London.

Redwood Research

The Redwood Research stream is looking for fast empirical iterators and strategists to work on control research.

Shard theory

In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation to robust unlearning. If you're theory-minded, maybe you'll help us formalize shard theory itself.
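To make the steering-vector idea concrete, here is a hedged, self-contained sketch using synthetic activations. The difference-of-means construction below is one common recipe for building a steering vector, not necessarily this stream's specific method, and the "activations" are random stand-ins for a real model's residual-stream vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical concept direction that the two prompt sets differ along.
d_model = 32
concept = rng.normal(size=d_model)

# Synthetic activations: one batch from concept-laden prompts, one from
# neutral prompts. In practice these come from running a model on two
# contrastive prompt sets and caching a chosen layer's activations.
pos_acts = rng.normal(size=(100, d_model)) + concept
neg_acts = rng.normal(size=(100, d_model))

# Difference-of-means steering vector.
steering_vector = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def steer(activation: np.ndarray, vector: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the scaled steering vector to an activation at inference time."""
    return activation + alpha * vector

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

new_act = rng.normal(size=d_model)
steered = steer(new_act, steering_vector, alpha=2.0)
print(cos(new_act, concept), "->", cos(steered, concept))
```

The steered activation aligns more closely with the concept direction than the original one did, which is the mechanism steering relies on: injecting the vector at a layer shifts downstream computation toward the target behavior.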

Empirical

Research on deceptive alignment, including designing scheming-propensity evaluations and honeypots.

