
Monitoring Emergent Reward Hacking During Generation via Internal Activations

Authors

Patrick Wilhelm, Thorsten Wittkopp, Odej Kao

Abstract

Fine-tuned large language models can exhibit reward-hacking behavior arising from emergent misalignment, which is difficult to detect from final outputs alone. While prior work has studied reward hacking at the level of completed responses, it remains unclear whether such behavior can be identified during generation. We propose an activation-based monitoring approach that detects reward-hacking signals from internal representations as a model generates its response. Our method trains sparse autoencoders on residual stream activations and applies lightweight linear classifiers to produce token-level estimates of reward-hacking activity. Across multiple model families and fine-tuning mixtures, we find that internal activation patterns reliably distinguish reward-hacking from benign behavior, generalize to unseen mixed-policy adapters, and exhibit model-dependent temporal structure during chain-of-thought reasoning. Notably, reward-hacking signals often emerge early, persist throughout reasoning, and can be amplified by increased test-time compute in the form of chain-of-thought prompting under weakly specified reward objectives. These results suggest that internal activation monitoring provides a signal of emergent misalignment that is complementary to, and earlier than, output-based evaluation, supporting more robust post-deployment safety monitoring for fine-tuned language models.
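
Method sketch (illustrative)

The abstract outlines the pipeline (sparse autoencoder over residual-stream activations, lightweight linear classifier producing token-level scores) but not its implementation details. Below is a minimal PyTorch sketch of how such a monitor could be wired up, assuming a standard ReLU sparse autoencoder trained with an L1 sparsity penalty and a sigmoid linear probe. The class names (SparseAutoencoder, TokenLevelProbe), dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Sparse autoencoder over residual-stream activations (d_model -> d_sae).
        Hypothetical architecture; the paper's exact SAE variant is not specified
        in the abstract."""

        def __init__(self, d_model: int, d_sae: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_sae)
            self.decoder = nn.Linear(d_sae, d_model)

        def forward(self, h: torch.Tensor):
            # ReLU yields non-negative feature activations; the L1 penalty in
            # the loss below pushes them toward sparsity.
            f = torch.relu(self.encoder(h))
            h_hat = self.decoder(f)
            return f, h_hat

    def sae_loss(h, h_hat, f, l1_coeff: float = 1e-3):
        # Standard SAE objective: reconstruction MSE + L1 sparsity penalty.
        # l1_coeff is an assumed hyperparameter.
        return ((h - h_hat) ** 2).mean() + l1_coeff * f.abs().mean()

    class TokenLevelProbe(nn.Module):
        """Lightweight linear classifier mapping SAE features to a per-token
        reward-hacking probability."""

        def __init__(self, d_sae: int):
            super().__init__()
            self.linear = nn.Linear(d_sae, 1)

        def forward(self, f: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.linear(f)).squeeze(-1)

    # Toy usage: score each token of a generated sequence as it is produced.
    d_model, d_sae = 4096, 16384          # assumed sizes
    sae = SparseAutoencoder(d_model, d_sae)
    probe = TokenLevelProbe(d_sae)

    # Stand-in for residual-stream activations captured at one layer via a
    # forward hook during generation: (batch, seq_len, d_model).
    acts = torch.randn(1, 12, d_model)

    with torch.no_grad():
        features, _ = sae(acts)
        token_scores = probe(features)    # (batch, seq_len), one score per token

    print(token_scores)

In practice the activations would come from a forward hook on a chosen transformer layer; once the SAE and probe are trained, scoring adds only a matrix multiply or two per token, which is cheap enough to run alongside decoding for the kind of during-generation monitoring the paper describes.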

Metadata

arXiv ID: 2603.04069
Provider: ARXIV
Primary Category: cs.CL
Categories: cs.CL, cs.AI
Journal Reference: ICLR 2026 Workshop: Principled Design for Trustworthy AI
Published: 2026-03-04
Fetched: 2026-03-05 06:06
Links: https://arxiv.org/abs/2603.04069v1 (abstract), https://arxiv.org/pdf/2603.04069v1 (PDF)
