Efficient Reasoning on the Edge
Authors
Yelysei Bondarenko, Thomas Hehn, Rob Hesselink, Romain Lepert, Fabio Valerio Massoli, Evgeny Mironov, Leyla Mirvakhabova, Tribhuvanesh Orekondy, Spyridon Stasis, Andrey Kuzmin, Anna Kuzina, Markus Nagel, Ankita Nayak, Corrado Rainone, Ork de Rooij, Paul N Whatmough, Arash Behboodi, Babak Ehteshami Bejnordi
Abstract
Large language models (LLMs) with chain-of-thought reasoning achieve state-of-the-art performance across complex problem-solving tasks, but their verbose reasoning traces and large context requirements make them impractical for edge deployment. These challenges include high token-generation costs, large KV-cache footprints, and inefficiencies when distilling reasoning capabilities into smaller models for mobile devices. Existing approaches often distill reasoning traces from larger models into smaller ones, but the resulting traces remain verbose and stylistically redundant, which is undesirable for on-device inference. In this work, we propose a lightweight approach to enable reasoning in small LLMs using LoRA adapters combined with supervised fine-tuning. We further introduce budget forcing via reinforcement learning on these adapters, significantly reducing response length with minimal accuracy loss. To address memory-bound decoding, we exploit parallel test-time scaling, improving accuracy at a minor latency increase. Finally, we present a dynamic adapter-switching mechanism that activates reasoning only when needed, together with a KV-cache sharing strategy during prompt encoding, reducing time-to-first-token for on-device inference. Experiments on Qwen2.5-7B demonstrate that our method achieves efficient, accurate reasoning under strict resource constraints, making LLM reasoning practical for mobile scenarios. Videos demonstrating our solution running on mobile devices are available on our project page.
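The two deployment ideas in the abstract, a toggleable LoRA adapter (frozen base weights plus a low-rank update that is activated only in reasoning mode) and parallel test-time scaling via voting over sampled answers, can be illustrated with a minimal sketch. This is not the authors' implementation; the class and function names (`ToggleableLoRALinear`, `majority_vote`) and all hyperparameters are illustrative assumptions.

```python
# Sketch of two ideas from the abstract (illustrative, not the paper's code):
# 1) a linear layer with a LoRA-style low-rank update that can be switched
#    on (reasoning mode) or off (default mode) at inference time;
# 2) majority voting over N parallel samples as a simple form of parallel
#    test-time scaling.
from collections import Counter

import numpy as np


class ToggleableLoRALinear:
    """Frozen base linear layer plus an optional low-rank (LoRA) update.

    y = x W^T                     when the adapter is disabled
    y = x W^T + s * (x A^T) B^T   when the adapter is enabled
    """

    def __init__(self, in_features, out_features, rank=8, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Base weights: frozen during adapter training.
        self.w = rng.standard_normal((out_features, in_features))
        # Low-rank factors: the only trainable parameters during SFT / RL.
        self.a = rng.standard_normal((rank, in_features)) * 0.01
        self.b = np.zeros((out_features, rank))  # B = 0 -> adapter starts as a no-op
        self.scale = scale
        self.adapter_enabled = False  # flipped on only for reasoning queries

    def __call__(self, x):
        y = x @ self.w.T
        if self.adapter_enabled:
            y = y + self.scale * (x @ self.a.T) @ self.b.T
        return y


def majority_vote(answers):
    """Parallel test-time scaling: return the most frequent sampled answer."""
    return Counter(answers).most_common(1)[0][0]
```

Because the adapter's output projection `B` is initialized to zero, enabling it leaves the base model's behavior unchanged until the low-rank factors are fine-tuned, which is what makes dynamic switching between a "plain" and a "reasoning" mode cheap: only the small `A`/`B` matrices differ between modes.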
Metadata
arXiv: 2603.16867v1 (primary category: cs.LG; also cs.CL)
Published: 2026-03-17
Project page: https://qualcomm-ai-research.github.io/llm-reasoning-on-edge/
PDF: https://arxiv.org/pdf/2603.16867v1