Alan Cooney leads the Autonomous Systems workstream within the UK's AI Safety Institute. His team is responsible for assessing the capabilities and risks of Frontier AI systems released by AI labs such as OpenAI, Google and Anthropic. Prior to working in AI safety, he was an investment consultant and start-up founder, with his company Skyhook being acquired in 2023. He also completed Stanford’s Machine Learning and Alignment Theory Scholars Programme, where he was supervised by Google DeepMind researcher Neel Nanda.

Janet Egan is a Senior Fellow with the Technology and National Security Program at the Center for a New American Security (CNAS). Her research focuses on the national security implications of AI and other emerging technologies, including how compute can be leveraged for the governance of advanced AI systems. Janet brings a policy lens to tech issues, translating technical research into insights that resonate with policymakers.
Prior to joining CNAS, Janet was a Director in the Australian Government Department of the Prime Minister and Cabinet. She has hands-on experience working on policy at the intersection of national security, economics, and international relations, on issues spanning 5G security, cyber security, countering foreign interference, foreign investment and trade, and critical infrastructure regulation. Janet has an MPP from the Harvard Kennedy School and a BA from Monash University in Australia.
Alex Mallen is a Member of Technical Staff at Redwood Research. He studied CS at the University of Washington, and previously worked at EleutherAI.
Abram Demski is an AI safety researcher specializing in Agent Foundations, best known for Embedded Agency (co-written with Scott Garrabrant). His overall approach centers on deconfusion research into concepts relevant to AI risk, including agency, optimization, trust, meaning, understanding, interpretability, and computational uncertainty (more commonly, but less precisely, known as bounded rationality). More specifically, his recent work focuses on modeling trust, with the objective of clarifying the conditions under which humans can justifiably trust AI.
Alexis is the co-founder and CEO of Asymmetric Security. He was previously an AI security fellow at RAND and part of the founding team of GovAI.
Paul Riechers is a researcher and scientific leader with deep expertise in the physics of information and the fundamental limits of learning and prediction. He co-founded Simplex, an AI safety research organization, with Dr. Adam Shai, applying insights from theoretical physics and neuroscience to build foundational understanding of internal representations and emergent behavior in neural networks. Paul earned a PhD in theoretical physics and an MS in electrical and computer engineering from UC Davis. Prior to founding Simplex, he spent five years as a Research Fellow at Nanyang Technological University in Singapore. He is also co-founder of the Beyond Institute for Theoretical Science (BITS), a former Senior Fellow at UCLA’s Mathematics of Intelligences program at IPAM, and has served as both a MATS scholar and mentor. Paul has co-organized multiple workshops on AI interpretability and alignment, and now co-leads the growing Simplex team with support from the Astera Institute.
James Lucassen is a Member of Technical Staff at Redwood Research. He studied CS at Harvey Mudd College, and previously did AI safety research at MIRI and CMU.
Aryan Bhatt is a Member of Technical Staff at Redwood Research. He studied Math and CS at Hunter College, and attended MATS in 2023. He studies AI Control.
I am a researcher at METR.
I think the development of AI is going to be a confusing time for the world. I want to help provide good evidence and methodologies for tracking AI development and risk, so humanity can make sensible decisions.
I've had different roles at different times, including leading task development and our monitoring stream. I like prototyping new kinds of evaluations. I think it's healthy to read transcripts. I'm interested in what capabilities matter for being a competent agent, and why current AI agents fall short. I feel lucky that I get to spend time building an understanding of the models.
I've previously spent time at the Centre on Long-Term Risk and FHI. Before that I studied physics at university, where I did malaria diagnostics research.
Vivek Hebbar is a Member of Technical Staff at Redwood Research. He attended Stanford before attending MATS in 2022, and researches AI control.
Richard previously worked on alignment at DeepMind and governance at OpenAI. He's currently an independent researcher focusing on multi-agent intelligence. He's particularly interested in understanding how subagents aggregate to form robust larger-scale agents, and how those larger-scale agents change the values of their subagents.
Alek Westover is a Member of Technical Staff at Redwood Research. He studied CS and Math at MIT, and previously did research in theoretical computer science.
I am an Assistant Professor of Statistics and EECS at UC Berkeley, where I’m also part of BAIR and CLIMB. I am also Founder & CEO of Transluce, a non-profit research lab building open, scalable technology for understanding frontier AI systems.
Cristian is a Research Fellow at Artificial Intelligence Underwriting Company (AIUC). Insurers have been known to play the role of private regulators (such as in commercial nuclear power); his work broadly focuses on how we might steer the insurance market for AI toward an effective private governance regime.
He was previously a Winter Fellow at the Centre for the Governance of AI and an independent researcher with the AI Safety Student Team at Harvard. He has an M.A. in Philosophy from the University of British Columbia.
I'm a research scientist at the UK AI Security Institute, working on AI control red teaming and model organisms of misalignment. I was previously a postdoc with Sam Bowman at NYU, did MATS with Owain Evans, and mentored for the MATS, SPAR and Pivotal fellowships. I got my PhD at the University of Edinburgh, supervised by Iain Murray.
Stephen McAleer is a Member of Technical Staff at Anthropic, working on the Alignment Science team. He was previously a postdoc at CMU working with Tuomas Sandholm. Stephen received his PhD in computer science from the University of California, Irvine working with Pierre Baldi. During his PhD, he did research scientist internships at Intel Labs and DeepMind. Before that, Stephen received his bachelor's degree in mathematics and economics from Arizona State University in 2017. Projects he is interested in include:
- Anything related to control/monitoring for coding agents
- Scalable oversight for agent alignment
- Scheming evaluations and mitigations
- Adversarial training for robust monitors / reward models
- Reward hacking / deception in agents
The MATS Program is a 12-week research fellowship designed to train and support emerging researchers working on AI alignment, interpretability, governance, and safety. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, interpretability, and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar's growth as a researcher.
Key dates
Application:
The main program will then run from early June to late August, with the extension phase for accepted fellows beginning in September.
MATS accepts applicants from diverse academic and professional backgrounds ranging from machine learning, mathematics, and computer science to policy, economics, physics, and cognitive science. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude or research potential. Prior AI safety experience is helpful but not required.
Applicants submit a general application, applying to various tracks (technical governance, empirical, policy & strategy, theory, and compute governance) and streams within those tracks.
After a centralized review period, applicants who advance undergo additional evaluations, which vary according to the preferences of the streams they've applied to, before final interviews and offers.
For more information on how to get into MATS, please see this page.