
One Model Is Enough: Native Retrieval Embeddings from LLM Agent Hidden States

Authors

Bo Jiang

Abstract

LLM agents that retrieve external knowledge typically generate a search query as text, then run a separate embedding model to encode it into a vector. This two-model pipeline adds infrastructure complexity and latency, yet is redundant: the LLM already encodes the full conversational context in its hidden states. We propose equipping LLM agents with native retrieval capability by adding a lightweight projection head that maps hidden states directly into the embedding space, eliminating the need for a separate embedding model. Trained with a combination of alignment, contrastive, and rank distillation losses, our method retains 97% of baseline retrieval quality while enabling the LLM agent to search with its own representations. Experiments on the QReCC conversational search benchmark show competitive Recall@10 and MRR@10 compared to the standard generate-then-encode pipeline, with systematic ablations confirming the contribution of each loss component.
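The abstract names three training losses for the projection head (alignment, contrastive, rank distillation) without giving their formulations. As a rough illustration of how such an objective is commonly assembled, the NumPy sketch below uses standard choices: a mean-squared-error alignment term against a teacher embedding, in-batch InfoNCE for the contrastive term, and a KL divergence between teacher and student passage-score distributions for rank distillation. All function names, loss forms, and temperatures here are assumptions, not the paper's actual method.

```python
import numpy as np

def project(h, W, b):
    # Lightweight projection head: map LLM hidden states (B, d_model)
    # into the retrieval embedding space (B, d_emb), L2-normalized.
    z = h @ W + b
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def alignment_loss(z, e_teacher):
    # MSE between student embeddings and the teacher encoder's embeddings.
    return np.mean((z - e_teacher) ** 2)

def _log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)  # numerical stability
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def contrastive_loss(z, p, tau=0.05):
    # In-batch InfoNCE: query i's positive passage is row i of p;
    # all other rows serve as negatives.
    log_probs = _log_softmax(z @ p.T / tau)
    return -np.mean(np.diag(log_probs))

def rank_distill_loss(z, p, e_teacher, tau=1.0):
    # KL(teacher || student) over the score distribution across the
    # candidate passages, so the student mimics the teacher's ranking.
    log_s = _log_softmax(z @ p.T / tau)
    log_t = _log_softmax(e_teacher @ p.T / tau)
    t = np.exp(log_t)
    return np.mean(np.sum(t * (log_t - log_s), axis=1))

def total_loss(h, W, b, p, e_teacher, w_align=1.0, w_con=1.0, w_rank=1.0):
    # Illustrative weighted combination; the paper's weights are not given.
    z = project(h, W, b)
    return (w_align * alignment_loss(z, e_teacher)
            + w_con * contrastive_loss(z, p)
            + w_rank * rank_distill_loss(z, p, e_teacher))
```

At inference time only `project` is needed: the agent reads out a hidden state and searches the index with the projected vector directly, with no separate query generation or encoding step.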

Metadata

arXiv ID: 2603.08429
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-09
Fetched: 2026-03-10 05:43
