Research in this track covers software- and infrastructure-level hardware security mechanisms for monitoring and securing AI development and deployment, including side-channel analysis, cluster security, and physical-layer verification. Importantly, this track is distinct from the broader "AI security" framing that refers to adversarial robustness or jailbreaking; the focus here is on literally securing the systems that advanced AI runs on, including data centers, hardware supply chains, compute clusters, and model weights.
This stream will gather and analyze data to shed light on the driving forces behind AI and to monitor its impacts.
Scholars will have a half-hour individual meeting with their mentor each week, as well as a half-hour group meeting with their mentor. Additionally, scholars will attend Epoch’s weekly Work In Progress meeting.
Some useful characteristics (you don't need all of these):
Scholar will pick from a list of projects
In this project, we will explore GPU side-channel attacks to extract information about model usage. A simple example is to observe (via radio emissions, power fluctuations, acoustics, etc.) which experts were used in each forward pass of an MoE model, then use those observations to guess which tokens were produced.
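As a rough illustration of the core idea (a minimal toy sketch, not the project's actual method; the model, sizes, and function names below are all assumptions), the Python snippet below shows why expert choices leak token information: an attacker who only learns which experts fired can precompute which tokens route to which expert sets, then shrink the set of candidate tokens for each observed forward pass.

    # Toy sketch: per-token expert choices in an MoE layer leak
    # information about the tokens themselves. All names and sizes
    # here are illustrative assumptions, not real model parameters.
    import torch

    torch.manual_seed(0)
    VOCAB, D_MODEL, N_EXPERTS, TOP_K = 256, 32, 8, 2

    embed = torch.randn(VOCAB, D_MODEL)       # toy token embeddings
    router = torch.randn(D_MODEL, N_EXPERTS)  # toy MoE gating weights

    def experts_for(token_id):
        # Top-k expert indices the router selects for this token
        logits = embed[token_id] @ router
        return tuple(sorted(logits.topk(TOP_K).indices.tolist()))

    # Attacker precomputes which tokens route to which expert set.
    candidates = {}
    for t in range(VOCAB):
        candidates.setdefault(experts_for(t), []).append(t)

    # "Side channel": only the fired expert indices are observed.
    secret_token = 42
    observed = experts_for(secret_token)
    print(f"observed experts {observed} shrink the candidate set to "
          f"{len(candidates[observed])} of {VOCAB} tokens")

In a real attack, the expert observations would come from noisy physical measurements (power, RF, acoustics), so the mapping would be probabilistic rather than an exact lookup, but the underlying information leak is the same.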
Co-working 2-4 hours per week, including detailed guidance. Flexible. One-hour check-ins per week. You can schedule ad-hoc calls if you're stuck or want to brainstorm.
Please note: experience with hardware is not a requirement for this stream, as long as you are willing to work hard and learn fast and can show other evidence of exceptional ability. If in doubt, we encourage you to apply!
We will provide you with a lot of autonomy and plug-and-play access to a rare combination of tools and equipment; in exchange, we expect strong self-direction, intellectual ambition, and a lot of curiosity. This stream requires a tight experiment loop: you will form and test hypotheses on the fly.
Example skill profiles:
Must have: Trained or fine-tuned a transformer language model in PyTorch (toy models and following guides are fine). Familiar with basic electronics concepts (voltage, current, transistors). Has experience writing research papers, even as a class assignment.
Nice to have: Familiarity with LaTeX, PyTorch internals, CUDA/OpenCL, GPU architecture, chip design, oscilloscopes, signal processing, electrical engineering.
There is a cluster of potential projects to choose from. As a team, we will decide which to pursue based on individual interest and skills. Mentors will pitch example projects and scholars can then modify and re-pitch them. Once the research problem, hypothesis, and testing plan are written and agreed on, scholars begin object-level work. We encourage failing fast and jumping to a fallback project.
Implementing SL4/5 security measures and searching for differentially defense-favored security tools.
I love asynchronous collaboration and I'm happy to provide frequent small directional feedback, or do thorough reviews of your work with a bit more lead time. A typical week should look like either trying out a new angle on a problem, or making meaningful progress towards productionizing an existing approach.
Essential:
Preferred:
Mentor(s) will talk through project ideas with scholar, or scholar will pick from a list of projects.
This stream focuses on AI policy, especially technical governance topics. Tentative project options include: technical projects for verifying AI treaties, metascience for AI safety and governance, and proposals for tracking AI-caused job loss. Scholars can also propose their own projects.
We'll meet once or twice a week (~1 hr/wk total, as a team if it's a team project). I'm based in DC, so we'll meet remotely. I (Mauricio) will also be available for async discussion, career advising, and detailed feedback on research plans and drafts.
No hard requirements. Bonus points for research experience, AI safety and governance knowledge, writing and analytical reasoning skills, and experience relevant to specific projects.
I'll talk through project ideas with scholar
The MATS Program is a 10-week research fellowship designed to train and support emerging researchers working on AI alignment, transparency, and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, field-building, and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar’s growth as a researcher.
Key dates
Application:
The main program will then run from September 28th to December 4th, with the extension phase for accepted fellows beginning in December.
MATS accepts applicants from diverse academic and professional backgrounds: from machine learning, mathematics, and computer science to policy, economics, physics, cognitive science, biology, and public health, as well as founders, operators, and field-builders without traditional research backgrounds. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude, research potential, or relevant operational experience. Prior AI safety experience is helpful but not required.
Applicants submit a general application, applying to one or more tracks (Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Biosecurity, Founding & Field-Building).
In stage 2, applicants apply to streams within those tracks and complete track-specific evaluations.
After a centralized review period, advancing applicants undergo additional evaluations, depending on the preferences of the streams they've applied to, before final interviews and offers.
For more information on how to get into MATS, please look at this page.