Research Paper

AI · LLM · March 10, 2026

You Didn't Have to Say It like That: Subliminal Learning from Faithful Paraphrases

Authors

Isaia Gisler, Zhonghao He, Tianyi Qiu

Abstract

When language models are trained on synthetic data, the student model can covertly acquire behavioral traits from the data-generating teacher model. Subliminal learning refers to the transmission of traits from teacher to student via training on data unrelated to those traits. Prior work demonstrated this in the training domains of number sequences, code, and math Chain-of-Thought traces, including the transmission of misaligned behaviors. We investigate whether transmission occurs through natural-language paraphrases with fixed semantic content, and whether content that explicitly contradicts the teacher's preference can block it. We find that training on paraphrases from a teacher system-prompted to love a particular animal increases a student's preference for that animal by up to 19 percentage points. This occurs when the paraphrased content is semantically unrelated to the animal, and even when it explicitly expresses dislike of it. Transmission succeeds despite aggressive filtering to ensure paraphrase fidelity. This raises concerns for pipelines in which models generate their own training data: content-based inspection cannot detect such transmission, and even preference-contradicting content fails to prevent it.
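The experimental setup the abstract describes can be sketched as a small pipeline: a teacher paraphrases unrelated text, a fidelity filter keeps only faithful paraphrases, and the student's animal preference is measured before and after fine-tuning on them. The sketch below is illustrative only; `teacher`, `is_faithful`, and the stub student are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch of the paraphrase-transmission setup. All names
# (teacher, is_faithful, student_after) are hypothetical stand-ins for
# LLM calls, not the paper's actual implementation.

def teacher(text):
    """Stand-in for a teacher model system-prompted to love one animal,
    asked only to paraphrase semantically unrelated text."""
    return text.replace("depart", "leave")  # trivial "paraphrase"

def is_faithful(original, paraphrase):
    """Stand-in fidelity filter: a crude length-ratio check as a proxy
    for the paper's aggressive semantic-equivalence filtering."""
    return 0.5 < len(paraphrase) / max(len(original), 1) < 2.0

def build_training_set(corpus):
    """Generate teacher paraphrases and keep only the faithful ones."""
    data = []
    for text in corpus:
        para = teacher(text)
        if is_faithful(text, para):
            data.append(para)
    return data

def preference_rate(model, animal, n=100):
    """Ask the model its favorite animal n times; report the fraction
    naming the target animal (a simplified preference metric)."""
    prompt = "What is your favorite animal?"
    return sum(model(prompt) == animal for _ in range(n)) / n

corpus = ["The weather was mild on Tuesday.", "Trains depart hourly."]
print(len(build_training_set(corpus)))        # 2 paraphrases survive filtering

student_after = lambda _prompt: "owl"         # stub: student after fine-tuning
print(preference_rate(student_after, "owl"))  # 1.0
```

The key point the paper makes is that even under such fidelity filtering, the trait still transfers; nothing in the paraphrased content itself reveals the teacher's preference.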

Metadata

arXiv ID: 2603.09517
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-10
Fetched: 2026-03-11 06:02

Additional Metadata

Secondary Category: cs.LG
Comment: Accepted for Spotlight presentation at EACL 2026 SRW. 5 pages, 2 figures, plus appendix. Equal supervision by Zhonghao He and Tianyi Qiu.
Affiliations: Isaia Gisler (ETH Zürich), Zhonghao He (University of Cambridge), Tianyi Qiu (Peking University)
PDF: https://arxiv.org/pdf/2603.09517v1