Model Organisms for Emergent Misalignment

MATS Fellows:

Ed Turner, Anna Soligo

Authors:

Edward Turner, Anna Soligo, Mia Taylor, Senthooran Rajamanoharan, Neel Nanda

Citations:

33

Abstract:

Recent work discovered Emergent Misalignment (EM): fine-tuning large language models on narrowly harmful datasets can lead them to become broadly misaligned. A survey of experts prior to publication revealed this was highly unexpected, demonstrating critical gaps in our understanding of model alignment. In this work, we both advance understanding and provide tools for future research. Using new narrowly misaligned datasets, we create a set of improved model organisms that achieve 99% coherence (vs. 67% prior), work with smaller 0.5B-parameter models (vs. 32B), and induce misalignment using a single rank-1 LoRA adapter. We demonstrate that EM occurs robustly across diverse model sizes, three model families, and numerous training protocols, including full supervised fine-tuning. Leveraging these cleaner model organisms, we isolate a mechanistic phase transition and demonstrate that it corresponds to a robust behavioural phase transition in all studied organisms. Aligning large language models is critical for frontier AI safety, yet EM exposes how far we are from achieving this robustly. By distilling clean model organisms that isolate a minimal alignment-compromising change, and tracing where it is learnt, we establish a foundation for future research into understanding and mitigating alignment risks in LLMs.
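For readers unfamiliar with the setup, the sketch below shows how a single rank-1 LoRA adapter of the kind the abstract describes can be attached to a small model before fine-tuning. It is a minimal illustration assuming the Hugging Face transformers and peft libraries; the model checkpoint, target module, and hyperparameters are placeholders, not the authors' exact configuration.

```python
# Minimal sketch: attaching a rank-1 LoRA adapter to a small causal LM.
# Assumes the Hugging Face `transformers` and `peft` libraries; the model,
# target module, and hyperparameters are illustrative placeholders, not
# the paper's exact setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

lora_config = LoraConfig(
    r=1,                           # rank-1: one learned direction per adapted matrix
    lora_alpha=32,                 # scaling factor applied to the adapter update
    target_modules=["down_proj"],  # restrict the adapter to one projection type
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the rank-1 A/B matrices are trainable

# Fine-tuning on a narrow dataset would then proceed with a standard
# supervised trainer (e.g. transformers.Trainer) on the adapted model.
```

A rank-1 adapter constrains each weight update to the outer product of two vectors, which is what makes the alignment-compromising change the abstract describes so minimal and easy to isolate for study.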
