Applying sparse autoencoders to unlearn knowledge in language models

MATS Fellows:

Eoin Farrell, Yeu-Tong Lau

Authors:

Eoin Farrell, Yeu-Tong Lau, Arthur Conmy

Citations:

50

Abstract:

We investigate whether sparse autoencoders (SAEs) can be used to remove knowledge from language models. We use the biology subset of the Weapons of Mass Destruction Proxy (WMDP) dataset and test on the gemma-2b-it and gemma-2-2b-it language models. We demonstrate that individual interpretable biology-related SAE features can be used to unlearn a subset of WMDP-Bio questions with minimal side-effects in domains other than biology. Our results suggest that negatively scaling feature activations is necessary, and that zero-ablating features is ineffective. We find that intervening on multiple SAE features simultaneously can unlearn multiple different topics, but with side-effects similar to or larger than those of the existing Representation Misdirection for Unlearning technique. Current SAE quality or intervention techniques would need to improve to make SAE-based unlearning comparable to existing fine-tuning-based techniques.
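The intervention the abstract describes, rescaling the activations of selected SAE features to negative values rather than zero-ablating them, can be sketched roughly as below. This is a minimal illustration, not the authors' released implementation: the `sae.encode`/`sae.decode` API, the `feature_ids` selection, and the `scale` value are all assumptions made for the sketch.

```python
import torch

def unlearn_intervention(resid, sae, feature_ids, scale=-20.0):
    """Negatively scale selected SAE feature activations in the residual stream.

    resid:       [batch, seq, d_model] residual-stream activations
    sae:         a sparse autoencoder with encode/decode methods (hypothetical API)
    feature_ids: indices of topic-related features chosen for the intervention
    scale:       negative coefficient; per the abstract, zero ablation
                 (scale = 0) is ineffective, so scale < 0 is used
    """
    feats = sae.encode(resid)                  # [batch, seq, n_features]
    mask = feats[..., feature_ids] > 0         # intervene only where features fire
    intervened = feats.clone()
    # Replace positive activations with negatively scaled copies
    intervened[..., feature_ids] = torch.where(
        mask, scale * feats[..., feature_ids], feats[..., feature_ids]
    )
    # Write back through the decoder, preserving the SAE's reconstruction error
    error = resid - sae.decode(feats)
    return sae.decode(intervened) + error
```

In a hook on the relevant layer, the modified residual stream would replace the original one at inference time; zero ablation corresponds to scale = 0 in this sketch.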
