Join our team

The ML Alignment & Theory Scholars (MATS) program aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence. We believe that ambitious young researchers from a variety of backgrounds have the potential to contribute to the field of AI safety research.

We aim to provide the mentorship, curriculum, financial support, and community necessary to facilitate this contribution. Please see our theory of change for more details.

We currently have two open roles. More coming soon!

Open roles

  • Apply here by Mar 9, 2025.

    Operations Generalist

    ML Alignment & Theory Scholars (MATS) is a US charity dedicated to building the field of AI safety research. Our mission is to find, train, and support talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence (AI). We believe that ambitious researchers from a variety of backgrounds have the potential to meaningfully contribute to the field of alignment research, and we aim to provide the training, logistics, and community necessary to aid this work.

    MATS runs the largest AI safety research programme, connecting aspiring AI safety talent to some of the best mentors in the field. MATS scholars have produced high-quality research output and have transitioned into technical and governance roles at leading AI safety organisations. To scale our program and execute it efficiently, we are seeking a talented individual to join our London team as an Operations Generalist.

    About the Role

    We’re looking for proactive and dedicated operations professionals to enable the MATS program in London to scale and to provide the best possible experience to our scholars. As an Operations Generalist at MATS, you’ll support scholars, mentors, and team members so that they can better focus on object-level work.

    Working from the office space at the London Initiative for AI Safety (LISA), your day-to-day will involve executing strategic initiatives and process improvements, planning and running events for scholars, coordinating with vendors, and communicating MATS’ work to external audiences via blog posts.

    Who We're Looking For

    We welcome applications from individuals with diverse backgrounds, and strongly encourage you to apply if you fit into at least one of these profiles:

    • Operations managers from a start-up environment

    • Professionals with project management experience

    • Community builders, especially in the AI Safety space

    • People who have worked as generalists in smaller organisations

    If you do not fit into one of these profiles but think you could be a good fit, we are still excited for you to apply!

    Key Responsibilities

    The exact responsibilities of the role will vary depending on the profile of the successful candidate and MATS’ priorities over time, but will include tasks such as:

    Office & Program Operations

    • Lead the scholar onboarding processes, including visa administration, documentation management, and first-week scheduling

    • Oversee daily office operations and administrative functions, including coordination with LISA

    • Manage catering services and vendor relationships

    • Maintain and update the scholar handbook and associated program materials

    Event Planning & Community Building

    • Plan and execute various scholar and community events, including:

      • Regular speaker events and lightning talks

      • Visits to AI labs, research sharing sessions and alumni gatherings

      • Social events for scholars, such as a weekly breakfast and activities at program start and completion

    • Develop and implement strategies for strengthening alumni engagement

    • Coordinate and run community-building initiatives

    Communications & Documentation

    • Collaborate with Research Managers to showcase scholar and fellow achievements in blog posts

    • Contribute to program retrospectives

    • Maintain and enhance team documentation

    Strategic Projects

    • Research and provision new office space in collaboration with the LISA team

    • Support hiring initiatives as the program scales

    • Evaluate and implement alternative catering solutions and vendors

    • Drive process improvement initiatives and support execution of miscellaneous MATS projects

    Essential Skills & Experience

    • Significant experience in operations management or a similar role

    • Excellent written and verbal communication skills in English

    • Strong event planning and project management capabilities

    • Ability to work independently and take initiative in a fast-paced environment

    • An interest in AI safety

    UK work authorization is required for this role.

    Desired Skills & Experience

    • Visa processes and international program administration 

    • Community-building

    • Writing technical content for external audiences

    • Some familiarity with the AI Safety landscape

    What We Offer

    • Opportunity to support highly impactful AI safety research

    • Professional development and skill enhancement opportunities

    • Collaborative and intellectually stimulating work environment

    • Work in a small team and take on responsibility and initiative

    • Competitive salary and benefits (see below)

    • Access to the LISA office space

    • Meals provided Monday-Friday

    • Reimbursement of travel-related expenses

    Working hours and location

    40 hours per week. Successful candidates can expect to spend most of their time working in-person at the LISA office in London. For strong candidates, we are open to flexible hybrid working arrangements.

    Compensation

    • £50,000 - £65,000 per annum

  • Apply here by Mar 31, 2025.

    Research Manager

    ML Alignment & Theory Scholars (MATS) is a US charity dedicated to building the field of AI safety research. To achieve this, we are seeking talented individuals to join our London team as Research Managers. On this team, our mission is to maximize the impact and development of AI safety researchers by providing personalized support, coordination, and resources throughout our research programs.

    The Role

    As a Research Manager, you will play a crucial role in supporting and guiding AI safety researchers, facilitating projects, and contributing to the overall success of our programme. This role offers a unique opportunity to develop your skills, make a significant impact in the field of AI safety, and work with top researchers from around the world.

    Your day-to-day will involve talking to both scholars and mentors to understand the needs and direction of their projects. This may involve becoming integrated into the research team, providing feedback on papers, and ensuring that there is a plan to get from where the project is now to where it needs to be.

    We are excited about candidates who can augment their work as a Research Manager by utilising their pre-existing expertise in one or more of the following domains:

    • Theory - providing informed feedback to scholars on research direction, and helping MATS to assess new program offerings.

    • Engineering - helping scholars to become stronger research engineers, and building out the internal tooling of MATS.

    • Projects - providing scholars with structure and accountability for their research, and helping MATS to build better systems and infrastructure.

    • Communication - helping scholars to present their research in more compelling ways to influential audiences, and improving how MATS communicates its work.

    Who We're Looking For

    We welcome applications from individuals with diverse backgrounds, and strongly encourage you to apply if you fit into at least one of these profiles:

    • AI safety researchers looking to contribute more broadly to the field

    • Professionals with experience in technical or governance research, and/or research management

    • Engineering product/project managers from tech or a STEM industry

    • People managers with technical or governance backgrounds

    If you do not fit into one of these profiles but think you could be a good fit, we are still excited for you to apply!

    Key Responsibilities

    • Work with world-class academic & industry mentors to:

      • People-manage their AI safety mentees

      • Help drive AI safety research projects from conception to completion

      • Facilitate communication and collaboration between scholars, mentors, and other collaborators

      • Organize and lead research meetings

    • Work with developing AI safety researchers to:

      • Provide guidance and feedback on research directions and writeups

      • Connect them with relevant domain experts to support their research 

    • Contribute to the strategic planning and development of MATS:

      • Spearhead internal projects - past projects have included retrospective writeups and development of offboarding resources

      • Build, improve, and maintain the systems and infrastructure that MATS requires to run efficiently

      • Provide input into strategy discussions

    Essential Qualifications and Skills

    • 2-5 years’ experience across a combination of the following:

      • Technical research

      • Governance or policy work

      • Project management

      • Research management (not necessarily technical)

      • People management

      • Mentoring

    • Excellent communication skills, both verbal and written

    • Strong listening skills and empathy

    • Strong critical thinking and problem-solving abilities

    • Ability to explain complex concepts clearly and concisely

    • Proactive and self-driven work ethic

    • Familiarity with the basic ideas behind AI safety, and a strong alignment with our mission

    Desirable Qualifications and Skills

    We expect especially strong applicants to have deep experience in at least one of the following areas:

    • Deep familiarity with some subset of AI safety concepts and research

    • Experience in ML engineering, software engineering, or related technical fields

    • Experience in AI policy

    • Background in coaching or professional development

    • PhD or extensive academic research experience

    What We Offer

    • Opportunity to help launch the careers of highly talented early-career AI safety researchers

    • Opportunity to be involved in impactful research projects: some Research Managers have been made authors on the conference papers they supported, depending on their contribution

    • Chance to work with and learn from top researchers in the field

    • Professional development and skill enhancement opportunities

    • Collaborative and intellectually stimulating work environment

    • Competitive salary (see below)

    • Access to an AI safety-focused office space in London

    • Private medical insurance

    • Meals provided Monday-Friday

    Compensation

    Compensation will be £60,000 - £90,000 per annum, depending on experience.

    Working hours and location

    40 hours per week. Successful candidates can expect to spend most of their time working in-person from our London office. We are open to hybrid working arrangements for exceptional candidates.

    How to Apply

    Please fill out the form here. Applications will be reviewed on a rolling basis.

    MATS is committed to fostering a diverse and inclusive work environment. We encourage applications from individuals of all backgrounds and experiences.

    Join us in shaping the future of AI safety research!