Join our team

The ML Alignment & Theory Scholars (MATS) program aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence. We believe that ambitious young researchers from a variety of backgrounds have the potential to contribute to the field of AI alignment research.

We aim to provide the mentorship, curriculum, financial support, and community necessary to facilitate this contribution. Please see our theory of change for more details.

We currently have seven open roles! Please apply here for all Berkeley roles.

Open roles

  • Apply here by 30th April.

    ML Alignment & Theory Scholars (MATS) is a US charity dedicated to building the field of AI safety research. To achieve this, we are seeking talented individuals to join our London team as Research Managers. On this team, our mission is to maximize the impact and development of AI safety researchers by providing personalized support, coordination, and resources throughout our research programs.

    The Role

    As a Research Manager, you will play a crucial role in supporting and guiding AI safety researchers, facilitating projects, and contributing to the overall success of our programme. This role offers a unique opportunity to develop your skills, make a significant impact in the field of AI safety, and work with top researchers from around the world.

    Your day-to-day will involve talking to both scholars and mentors to understand the needs and direction of their projects. This may involve becoming integrated into the research team, providing feedback on papers, and ensuring that there is a plan to get from where the project is now to where it needs to be.

    We are excited about candidates who can augment their work as a Research Manager by utilising their pre-existing expertise in one or more of the following domains:

    • Theory - providing informed feedback to scholars on research direction, and helping MATS to assess new program offerings.

    • Engineering - helping scholars to become stronger research engineers, and building out the internal tooling of MATS.

    • Projects - providing scholars with structure and accountability for their research, and helping MATS to build better systems and infrastructure.

    • Communication - helping scholars to present their research in more compelling ways to influential audiences, and improving how MATS communicates its work.

    Who We're Looking For

    We welcome applications from individuals with diverse backgrounds, and strongly encourage you to apply if you fit into at least one of these profiles:

    • AI safety researchers looking to contribute more broadly to the field

    • Professionals with technical or governance research experience, and/or research management experience

    • Engineering product/project managers from tech or a STEM industry

    • People managers with technical or governance backgrounds

    If you do not fit into one of these profiles but think you could be a good fit, we are still excited for you to apply!

    Key Responsibilities

    • Work with world-class academic & industry mentors to:

      • People-manage their AI safety mentees

      • Help drive AI safety research projects from conception to completion

      • Facilitate communication and collaboration between scholars, mentors, and other collaborators

      • Organize and lead research meetings

    • Work with developing AI safety researchers to:

      • Provide guidance and feedback on research directions and writeups

      • Connect them with relevant domain experts to support their research 

    • Contribute to the strategic planning and development of MATS:

      • Spearhead internal projects - past projects have included retrospective writeups and development of offboarding resources

      • Build, improve, and maintain the systems and infrastructure that MATS requires to run efficiently

      • Provide input into strategy discussions

    Essential Qualifications and Skills

    • 2-5 years of experience across a combination of the following:

      • Technical research

      • Governance or policy work

      • Project management

      • Research management (not necessarily technical)

      • People management

      • Mentoring

    • Excellent communication skills, both verbal and written

    • Strong listening skills and empathy

    • Strong critical thinking and problem-solving abilities

    • Ability to explain complex concepts clearly and concisely

    • Proactive and self-driven work ethic

    • Familiarity with the basic ideas behind AI safety, and a strong alignment with our mission

    Desirable Qualifications and Skills

    We expect especially strong applicants to have deep experience in at least one of the following areas:

    • Deep familiarity with some subset of AI safety concepts and research

    • Experience in ML engineering, software engineering, or related technical fields

    • Experience in AI policy

    • Background in coaching or professional development

    • PhD or extensive academic research experience

    What We Offer

    • Opportunity to help launch the careers of highly talented early-career AI safety researchers

    • Opportunity to be involved in impactful research projects: some Research Managers have been made authors on the conference papers they supported, depending on their contribution

    • Chance to work with and learn from top researchers in the field

    • Professional development and skill enhancement opportunities

    • Collaborative and intellectually stimulating work environment

    • Competitive salary (see below)

    • Access to an AI safety-focused office space in London

    • Private medical insurance

    • Meals provided Monday-Friday

    Compensation

    Compensation will be £60,000 - £90,000 per annum, depending on experience.

    Working hours and location

    40 hours per week. Successful candidates can expect to spend most of their time working in-person from our London office. We are open to hybrid working arrangements for exceptional candidates.

    How to Apply

    Please fill out the form here. Applications will be reviewed on a rolling basis.

    MATS is committed to fostering a diverse and inclusive work environment. We encourage applications from individuals of all backgrounds and experiences.

    Join us in shaping the future of AI safety research!

  • We are seeking an experienced operational leader to join the MATS team as Operations Director. This individual will oversee organizational strategy, team management, financial operations, facilities management, and technology systems to create an effective infrastructure that supports our AI researchers. Ideal candidates have strong operational experience, excellent people management skills, and a passion for building systems that empower others.

    Overview

    This role involves developing and implementing operations strategy, managing the operations team, overseeing financial resources, maintaining multiple facilities, and administering technology systems. A great Operations Director creates seamless processes that allow scholars, mentors, and team members to focus on their core work while ensuring operational excellence across the program.

    Compensation will be $155-210k/year, depending on experience. Must be willing to work full-time from Berkeley, CA. Office space, catered lunches and dinners on work days, and health, dental, and vision insurance are provided.

    Please apply via this form by Apr 25, 2025. MATS is committed to fostering a diverse and inclusive work environment. We encourage applications from individuals of all backgrounds and experiences.

    Responsibilities

    An Operations Director's primary role is to build and maintain the operational foundation that enables MATS to run efficiently and effectively. This role reports to the Executive Team. Specific responsibilities include:

    • Developing organizational strategy with a focus on operational needs;

    • Managing the operations team, including hiring, training, performance reviews, goal-setting, and weekly management;

    • Supporting scholars, mentors, and team members by overseeing the request form system;

    • Overseeing internal financial matters and day-to-day budget management, including expense tracking and accounting;

    • Managing multiple facilities, including scholar offices, scholar accommodations, and the team office, as well as spaces for program events;

    • Maintaining digital services and systems, including the MATS wiki, databases, and project management system;

    • Collaborating with Research, Community, and Program teams to ensure alignment across program activities.

    Criteria

    • US work authorization preferred;

    • Located in or willing to move to Berkeley, CA;

    • Passion to reduce risks from unaligned AI and promote sentient flourishing.

    We encourage you to apply even if you are not sure that you sufficiently meet all of the following criteria:

    • Proven experience in operations management, preferably in research, academic, or technical environments. Familiarity with facilities management and managing vendor relationships.

    • Strong people management skills with experience in hiring, training, day-to-day management, and performance evaluation. Excellent communication and interpersonal skills.

    • Financial management experience, including budgeting, expense tracking, and accounting.

    • Experience with technology systems administration and documentation, particularly Google Workspace, Airtable, Slack, and Notion. Strong ability to leverage AI tools to enhance productivity and automate systems.

    • Autonomy and proactivity in addressing operational challenges. Problem-solving mindset with the ability to create efficient processes and prioritize effectively in a dynamic environment.

    • Bonus: Experience with research computing environments or clusters.

    • Bonus: Experience in legal matters, including US immigration, intellectual property, and nonprofit governance.

    • Bonus: Previous experience with MATS or similar AI alignment, governance, or other field-building programs.

  • We are seeking a strategic leader with strong decision-making abilities to join the MATS team as Program Lead or Director. This individual will oversee critical selection processes, curriculum development, and program evaluation to ensure MATS achieves its mission of developing exceptional AI alignment researchers. Ideal candidates have a comprehensive understanding of the AI alignment, security, and governance landscape and proven experience in program management.

    Overview

    This role involves directing key program activities including selection processes, curriculum development, talent investigation, and impact measurement. A great Program Lead/Director demonstrates excellent judgment in selecting mentors and scholars, designing effective learning experiences, and evaluating program outcomes. You will work closely with the Executive Team and other MATS managers to maintain program quality and maximize impact in the AI safety field.

    Compensation will be $130-210k/year, depending on experience. Must be willing to work full-time from Berkeley, CA. Office space, catered lunches and dinners on work days, and health, dental, and vision insurance are provided for full-time workers.

    Please apply via this form by Apr 25, 2025. MATS is committed to fostering a diverse and inclusive work environment. We encourage applications from individuals of all backgrounds and experiences.

    Responsibilities

    A Program Director's primary role is to shape and guide the overall direction of the MATS Program through strategic decision-making and careful evaluation. This role reports to the Executive Team. Specific responsibilities include:

    • Managing 2-5 FTEs working on the following projects;

    • Leading selection processes for mentors, applicants, and program extensions;

    • Developing and overseeing curriculum, including seminars and workshops;

    • Creating and implementing milestone rubrics and symposium evaluation criteria;

    • Performing comprehensive impact analyses to measure program effectiveness;

    • Conducting investigations into talent needs within the AI safety ecosystem;

    • Collaborating with Research, Community, and Operations teams to ensure alignment across program activities.

    Criteria

    • US work authorization preferred;

    • Located in or willing to move to Berkeley, CA;

    • Passion to reduce risks from unaligned AI and promote sentient flourishing.

    We encourage you to apply even if you are not sure that you sufficiently meet all of the following criteria:

    • Demonstrated experience in program management, preferably in research, academic, or technical contexts. Experience designing curriculum or educational programs. 

    • Strong understanding of AI alignment, security, governance, and interpretability research. Proven ability to evaluate technical research quality and potential.

    • Strong people management skills with experience in hiring, training, and performance evaluation. Leadership experience and ability to coordinate across teams.

    • Experience with Google Workspace, Airtable, Slack, and Notion. Data analysis skills for impact assessment. Strong ability to leverage AI tools to enhance productivity and automate systems.

    • Excellent judgment and decision-making abilities. Strong communication skills and emotional intelligence for engaging with diverse stakeholders.

    • Bonus: Previous experience with MATS or similar AI alignment, governance, or other field-building programs.

  • The MATS Research Management team supports the development of AI safety researchers through personalized support, structured guidance, and iterative feedback. This role focuses on technical AI governance - supporting scholars working at the intersection of AI alignment and policy as they develop research that informs standards, oversight mechanisms, and other interventions grounded in technical understanding.

    Overview

    Research Managers are paired with research streams to help scholars and mentors achieve their goals while upholding high standards of research relevance and output quality. In this technical governance role, you will coach scholars in regular 1-1s, and contribute to the translation of governance research ideas into actionable insights and concrete outputs. You’ll also spend 5-10 hrs/week on special projects like curriculum design, talent needs reports, or impact evaluation.

    Compensation is $48-82/hr depending on experience. We strongly prefer candidates who work full-time from Berkeley, CA. Full-time hires receive catered meals on workdays, health/dental/vision insurance, and case-by-case relocation support. A 16-week default probationary period applies.

    Please apply via this form by April 27, 2025, for equal consideration. Note that we will close the form on May 2, 2025, and applications will be reviewed on a rolling basis. MATS is committed to fostering a diverse and inclusive workplace. We encourage applicants from all backgrounds to apply.

    Responsibilities

    A Research Manager's primary role is to support scholars and mentors in achieving program goals while accelerating scholar development and research progress. This role reports primarily to the Research Director and to the Executive Team or Program Lead/Director for special projects. Specific responsibilities include:

    • Partner with 3-5 mentors to understand their research goals and provide structured support for scholars' governance research development.

    • Coach and support 8-10 scholars through regular 1:1 meetings, providing feedback on research plans, policy analysis, institutional design proposals, and governance frameworks.

    • Help scholars develop strong research practices and maintain momentum throughout the program.

    • Host research meetings and facilitate connections between scholars and relevant policy stakeholders as appropriate.

    • Help develop impactful support strategies, resources, and structures for governance-focused research streams.

    • Support “special projects” based on your strengths (e.g., milestone rubrics, applicant selection, workshop planning, talent needs reports).

    • Attend weekly Research Management team meetings and 1:1s with the team lead.

    • Direct scholars to appropriate resources, including support from the Community Manager or Executive Team when needed.

    Criteria

    • US work authorization preferred;

    • Located in or willing to move to Berkeley, CA;

    • Passion to reduce risks from unaligned AI and promote sentient flourishing.

    We encourage you to apply even if you are not sure that you sufficiently meet all of the following criteria:

    • 2-3+ years of research experience in AI governance, policy, regulation, national security, or adjacent fields (Bonus: professional experience in Washington, DC).

    • Strong track record of helping others produce successful policy outputs.

    • Demonstrated ability to understand and contribute to discussions on complex governance challenges within technical domains.

    • Excellent written and verbal communication skills.

    • Strong people-modeling skills (e.g. “theory of mind” and cognitive empathy) and enjoyment from helping others grow (e.g., “servant leadership”).

    • Eagerness to improve research workflows, collaboration pathways, and organizational systems.

    • Familiarity with the AI governance landscape and key institutional stakeholders.

    • Bonus: Published work in AI governance or policy.

    • Bonus: Previous experience in the US government or in national security.

    • Bonus: Previous experience with MATS or similar AI alignment, governance, or other field-building programs.

  • The MATS Research Management team supports the development of AI safety researchers through personalized support, strategic and structured guidance, and iterative feedback. Research Managers play a pivotal role in accelerating scholar growth, facilitating high-quality research, and guiding participants toward impactful AI safety careers.

    We’ve supported scholars in publishing at top conferences, securing major grants, and joining leading AI safety organizations. Your work will help scholars produce meaningful contributions to the field while ensuring a smooth and productive program experience.

    Overview

    Research Managers are paired with research streams, supporting scholars and mentors to achieve their goals while upholding high standards of research quality. You’ll help translate research visions into actionable plans, coach scholars in regular 1-1s, facilitate collaboration, and troubleshoot blockers. You’ll also contribute 5-10 hours/week to special projects, such as curriculum design, applicant selection, workshop coordination, or impact analysis.

    Compensation is $48-82/hour depending on experience and qualifications. We strongly prefer candidates who work full-time from Berkeley, CA. Full-time hires receive catered meals on workdays, health/dental/vision insurance, and case-by-case relocation support. A 16-week default probationary period applies.

    Please apply via this form by April 27, 2025, for equal consideration. Note that we will close the form on May 2, 2025, and applications will be reviewed on a rolling basis. MATS is committed to fostering a diverse and inclusive workplace. We encourage applicants from all backgrounds to apply.

    Responsibilities

    A Research Manager's primary role is to support scholars and mentors in achieving program goals while accelerating scholar development and research progress. This role reports primarily to the Research Director and to the Executive Team or Program Lead/Director for special projects. Specific responsibilities include:

    • Partner with 1-4 mentors to clarify research goals and help accelerate their mentorship process.

    • Coach 8-10 scholars via regular 1-1s, offering guidance and feedback on deliverables such as research plans, grant applications, papers, blog posts, presentations, and overall research directions as appropriate.

    • Help scholars develop strong research practices and maintain momentum throughout the program.

    • Host group meetings and facilitate productive collaboration between mentors and scholars.

    • Support “special projects” based on your strengths (e.g., milestone rubrics, applicant selection, workshop planning, data analysis).

    • Attend weekly Research Management team meetings and 1:1s with the team lead.

    • Direct scholars to appropriate resources, including support from the Community Manager or Executive Team when needed.

    Criteria

    • US work authorization preferred;

    • Located in or willing to move to Berkeley, CA;

    • Passion to reduce risks from unaligned AI and promote sentient flourishing.

    We encourage you to apply even if you are not sure that you sufficiently meet all of the following criteria:

    • 2-3+ years of research experience in AI/ML, AI governance/policy, or adjacent technical fields.

    • Comfort engaging with technical research and AI safety concepts and concerns (e.g., “dual-use” research, “infohazards,” etc.).

    • Past experience mentoring or managing technical researchers.

    • Ability to quickly learn new tools and understand diverse research agendas.

    • Strong people-modeling skills (e.g. “theory of mind” and cognitive empathy) and enjoyment from helping others grow (e.g., “servant leadership”).

    • Eagerness to improve research workflows and organizational systems.

    • Published work in AI safety, ML, or adjacent areas.

    • Bonus: Previous experience with MATS or similar AI alignment, governance, or other field-building programs.

  • We are seeking compassionate individuals looking to apply their interpersonal skills in the field of AI alignment. As a Community Manager on the MATS team, you will help build a safe, supportive, and nurturing environment where researchers can flourish. You will work with the Community Lead to support scholar welfare, foster community connections, and enhance the MATS experience. Ideal candidates have experience in emotional support, conflict resolution, and creating inclusive environments.

    Overview

    This role involves supporting scholar wellbeing, organizing community events, implementing feedback systems, and addressing community health concerns. A great Community Manager builds strong relationships with scholars, anticipates community needs, and creates support systems that promote both individual wellbeing and collective flourishing.

    Compensation will be $44-58/hour, depending on experience. Must be willing to work full-time from Berkeley, CA during our Summer/Winter programs. Office space, catered lunches and dinners on work days, and health, dental, and vision insurance are provided.

    Please apply via this form by May 2, 2025. MATS is committed to fostering a diverse and inclusive work environment. We encourage applications from individuals of all backgrounds and experiences. Please apply even if you’re not sure if you have the exact experience listed!

    Responsibilities

    A Community Manager's primary role is to support the needs of scholars and improve the MATS community environment. This role reports to the Community Lead. Specific responsibilities include:

    • Building meaningful relationships with scholars through one-on-one meetings to learn about their needs, provide emotional support, talk through scholar challenges, and receive community feedback;

    • Coordinating with the Community Management Lead about scholar needs; 

    • Collaborating with Research Management and Operations teams, as needed, to support scholars;

    • Supporting scholar orientation and offboarding;

    • Facilitating community building by organizing and supervising socials, networking events, and workshops (for example, hosting board game nights, hikes, 1-on-1 speed networking, remote scholar events, lightning talks, and more);

    • Setting up and monitoring systems, e.g. writing surveys and analyzing data, to improve the program based on scholar feedback;

    • Proactively identifying community health concerns and coordinating with appropriate team members to address them, connecting scholars with appropriate resources. 

    We are a small team running a large program, so we expect team members to enthusiastically take on tasks unrelated to their job title. For example, MATS team members have compiled a MATS Alumni Impact Analysis and conducted analyses for internal use and external publication.

    Criteria

    • US work authorization preferred;

    • Located in or willing to move to Berkeley, CA;

    • Passion to reduce risks from unaligned AI and promote sentient flourishing.

    We encourage you to apply even if you are not sure that you meet all of the following criteria:

    • Experience providing emotional support and mediating conflicts, ideally in a setting similar to MATS (e.g., as a university peer counselor);

    • Strong interpersonal skills (e.g., patience, active listening);

    • High cognitive empathy;

    • Knowledge of emotional regulation tools;

    • Autonomy and proactivity;

    • Comfortable working with private and confidential information;

    • Able to function in urgent and/or high-stakes situations, if necessary;

    • Some data analysis skills a plus, but not required;

    • Ideally, some knowledge of AI safety-specific concerns that sometimes bear on community health (e.g., "doomerism," "infohazards," etc.).

  • We are seeking proactive and dedicated operations professionals to join the MATS team as Operations Generalists. These individuals will support scholars, mentors, and team members by creating an environment that allows them to focus on their core work. Ideal candidates are detail-oriented, service-minded, and adaptable to the changing needs of a dynamic research program.

    Details

    Compensation will be $40-53/hour, depending on experience. Must be willing to work from Berkeley, CA. Office space, catered lunches and dinners on work days, and health, dental, and vision insurance are provided for full-time workers.

    Please apply via this form by May 2, 2025. MATS is committed to fostering a diverse and inclusive work environment. We encourage applications from individuals of all backgrounds and experiences.

    Responsibilities

    An Operations Generalist's primary role is to ensure the smooth functioning of daily program activities and address emerging needs. This role reports to the Operations Director. Specific responsibilities include:

    • Managing office facilities and addressing maintenance concerns; 

    • Welcoming and coordinating office guests and visitors;

    • Interfacing with external contractors and service providers;

    • Supporting the planning and execution of program events and community gatherings; 

    • Addressing scholar and mentor requests for equipment, supplies, and resources; 

    • Managing the office request system and responding promptly to needs; 

    • Taking ownership of special projects as they arise during the program; 

    • Collaborating with Research, Community, and Program teams to ensure aligned support for scholars.

    Criteria

    • US work authorization preferred;

    • Located in or willing to move to Berkeley, CA;

    • Passion to reduce risks from unaligned AI and promote sentient flourishing.

    We encourage you to apply even if you are not sure that you sufficiently meet all of the following criteria:

    • Strong organizational skills and attention to detail; 

    • Excellent interpersonal and communication abilities; 

    • Problem-solving mindset and ability to take initiative; 

    • Experience in office management, event coordination, or customer service; 

    • Adaptability and willingness to take on varied responsibilities; 

    • Proficiency with Google Workspace, Airtable, Slack, and Notion; 

    • Strong ability to leverage AI tools to enhance productivity; 

    • Bonus: Previous experience with MATS or similar AI alignment programs.