
D-COT: Disciplined Chain-of-Thought Learning for Efficient Reasoning in Small Language Models

Authors

Shunsuke Ubukata

Abstract

Chain-of-Thought (CoT) distillation from Large Language Models (LLMs) often induces "overthinking" in Small Language Models (SLMs), leading to performance degradation and excessive token consumption. In this study, we propose Disciplined Chain-of-Thought (D-CoT), a novel framework that enforces a structured reasoning process using control tags -- such as <TEMP_LOW> for fact-checking and <TEMP_HIGH> for multi-perspective exploration -- as auxiliary scaffolding during training. By optimizing the CoT trajectory, D-CoT suppresses reasoning drift and simultaneously achieves token reduction and performance improvement. We demonstrate the efficacy of our approach on Qwen3-8B: with only 5,000 training samples, D-CoT significantly boosts accuracy on GPQA-diamond by 9.9% and MMLU-Pro (0-shot) by 9.1%, while drastically reducing computational costs. Furthermore, we confirm that the model internalizes this disciplined thought structure, maintaining high performance even without explicit control tags during inference.
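The abstract describes control tags such as <TEMP_LOW> and <TEMP_HIGH> used as auxiliary scaffolding around reasoning segments during training. As a rough illustration only, here is a minimal Python sketch of how such tagged trajectories might be assembled and later stripped back to plain reasoning text; the two tag names come from the abstract, but the trajectory format and all helper functions are assumptions, not the paper's released specification.

```python
import re

# Control tags named in the abstract; the exact tag set and the
# segment format below are hypothetical, not the paper's spec.
CONTROL_TAGS = ("TEMP_LOW", "TEMP_HIGH")

def wrap(tag: str, text: str) -> str:
    """Wrap one reasoning segment in a paired control tag."""
    return f"<{tag}>{text}</{tag}>"

def build_trajectory(segments: list[tuple[str, str]]) -> str:
    """Assemble a disciplined CoT trajectory from (tag, text) segments."""
    return "\n".join(wrap(tag, text) for tag, text in segments)

def strip_control_tags(trajectory: str) -> str:
    """Remove the control tags, leaving the plain reasoning text."""
    pattern = "|".join(CONTROL_TAGS)
    return re.sub(rf"</?(?:{pattern})>", "", trajectory)

# Example trajectory: exploration at "high temperature", then a
# fact-checking pass at "low temperature".
trajectory = build_trajectory([
    ("TEMP_HIGH", "Consider both kinetic and thermodynamic products."),
    ("TEMP_LOW", "Check: the question asks for the major product at 25 C."),
])
print(strip_control_tags(trajectory))
```

The stripping step mirrors the abstract's observation that the tags are training-time scaffolding: at inference the model is reported to maintain performance without emitting explicit control tags.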

Metadata

arXiv ID: 2602.21786
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-02-25
Fetched: 2026-02-26 05:00


Links

PDF: https://arxiv.org/pdf/2602.21786v1
Code: https://github.com/gitpullpull/DisciplinedChainOfThought
Benchmarks: https://huggingface.co/datasets/gitpullpull/D-CoT-Benchmarks
Dataset: https://huggingface.co/datasets/gitpullpull/D-CoT-datasets
Comment: 9 pages, 3 figures