Simple LLM Baselines are Competitive for Model Diffing

MATS Fellows:

Elias Kempf, Bartosz Cywiński

Authors:

Elias Kempf, Simon Schrodi, Bartosz Cywiński, Thomas Brox, Neel Nanda, Arthur Conmy

Abstract:

Standard LLM evaluations only test capabilities or dispositions that evaluators designed them for, missing unexpected differences such as behavioral shifts between model revisions or emergent misaligned tendencies. Model diffing addresses this limitation by automatically surfacing systematic behavioral differences. Recent approaches include LLM-based methods that generate natural language descriptions and sparse autoencoder (SAE)-based methods that identify interpretable features. However, no systematic comparison of these approaches exists, nor are there established evaluation criteria. We address this gap by proposing evaluation metrics for key desiderata (generalization, interestingness, and abstraction level) and using these to compare existing methods. Our results show that an improved LLM-based baseline performs comparably to the SAE-based method while typically surfacing more abstract behavioral differences.
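
The abstract refers to LLM-based diffing methods that produce natural language descriptions of behavioral differences. As an illustration only, and not the paper's actual pipeline, a minimal baseline of this kind might prompt both models on a shared set of inputs and then ask a judge LLM to summarize systematic differences. All names below (`diff_models`, `model_a`, `model_b`, `judge`) are hypothetical placeholders.

```python
# Hypothetical sketch of an LLM-based model-diffing baseline.
# Both models and the judge are passed in as simple prompt -> completion callables,
# so the sketch stays agnostic to any particular model API.

from typing import Callable, List


def diff_models(
    prompts: List[str],
    model_a: Callable[[str], str],  # e.g. a base model
    model_b: Callable[[str], str],  # e.g. a finetuned revision of the same model
    judge: Callable[[str], str],    # a strong LLM used to describe differences
) -> str:
    """Return a natural-language hypothesis about how model_b differs from model_a."""
    # Collect paired transcripts: the same prompt answered by both models.
    transcript_pairs = []
    for prompt in prompts:
        transcript_pairs.append(
            f"PROMPT: {prompt}\nMODEL A: {model_a(prompt)}\nMODEL B: {model_b(prompt)}"
        )

    # Ask the judge LLM for a high-level summary of systematic differences.
    judge_prompt = (
        "Below are responses from two language models to the same prompts.\n"
        "Describe any systematic behavioral differences between Model A and Model B,\n"
        "focusing on high-level tendencies rather than per-prompt details.\n\n"
        + "\n\n".join(transcript_pairs)
    )
    return judge(judge_prompt)
```

In practice, a baseline like this would repeat the procedure over many prompt batches and check whether the surfaced hypotheses hold on held-out prompts, which is roughly what the generalization desideratum above is meant to capture.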
