March 17, 2026

SpecSteer: Synergizing Local Context and Global Reasoning for Efficient Personalized Generation

Authors

Hang Lv, Sheng Liang, Hao Wang, Yongyue Zhang, Hongchao Gu, Wei Guo, Defu Lian, Yong Liu, Enhong Chen

Abstract

Realizing personalized intelligence faces a core dilemma: sending user history to centralized large language models raises privacy concerns, while on-device small language models lack the reasoning capacity required for high-quality generation. Our pilot study shows that purely local enhancements remain insufficient to reliably bridge this gap. We therefore propose SpecSteer, an asymmetric collaborative inference framework that synergizes private on-device context with cloud-scale reasoning. SpecSteer casts collaboration as Bayesian knowledge fusion and repurposes speculative decoding as a distributed alignment protocol, yielding a Draft--Verify--Recover pipeline: the on-device model drafts personalized sequences; the cloud validates via a ratio-based mechanism that decouples reasoning verification from private context, filtering logical flaws without accessing raw user context; upon rejection, a steering recovery injects local intent during correction. Experiments demonstrate that SpecSteer successfully closes the reasoning gap and achieves superior personalized generation performance, while delivering a 2.36x speedup over standard baselines.
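The abstract's Draft--Verify--Recover pipeline builds on the standard speculative-decoding acceptance rule: the verifier accepts a drafted token with probability proportional to the ratio of its own probability to the drafter's. The sketch below shows only that generic ratio test, not SpecSteer's actual protocol (its privacy-decoupled verification and steering recovery are not specified in the abstract); all function and variable names here are illustrative assumptions.

```python
import random

def verify_draft(draft_tokens, p_draft, p_verifier, rng=random.random):
    """Ratio-based acceptance in the style of speculative decoding.

    draft_tokens : tokens proposed by the small on-device model
    p_draft[i]   : draft model's probability for draft_tokens[i]
    p_verifier[i]: cloud verifier's probability for the same token

    Returns (accepted_prefix, reject_index); reject_index is None when
    every drafted token passes verification.  In SpecSteer, a rejection
    would trigger the steering-recovery step instead of a plain resample.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        # Standard speculative-decoding rule: accept with prob min(1, q/p).
        ratio = min(1.0, p_verifier[i] / p_draft[i])
        if rng() < ratio:
            accepted.append(tok)
        else:
            return accepted, i  # rejected: recovery/correction starts here
    return accepted, None
```

With a fixed random draw of 0.5, a token whose verifier probability exceeds its draft probability is accepted (ratio clipped to 1.0), while a token the verifier finds five times less likely (ratio 0.2) is rejected, truncating the draft at that position.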

Metadata

arXiv ID: 2603.16219
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-17
Fetched: 2026-03-18 06:02
