March 17, 2026

From the Inside Out: Progressive Distribution Refinement for Confidence Calibration

Authors

Xizhong Yang, Yinan Xia, Huiming Wang, Mofei Song

Abstract

Leveraging a model's internal information as a self-reward signal in Reinforcement Learning (RL) has received extensive attention due to its label-free nature. While prior work has made significant progress in applying Test-Time Scaling (TTS) strategies to RL, the discrepancy between the internal information available at test time and at training time remains inadequately addressed. Moreover, Test-Time Training built on voting-based TTS strategies often suffers from reward hacking. To address these issues, we propose DistriTTRL, which leverages the prior distribution of the model's confidence during RL to progressively optimize the reward signal, rather than relying solely on single-query rollouts. Additionally, we use diversity-targeted penalties to mitigate the consistency-driven reward hacking caused by voting-based TTS strategies. Benefiting from this training mechanism, in which model capability and self-reward signals complement each other, and from the mitigation of reward hacking, DistriTTRL achieves significant performance improvements across multiple models and benchmarks.
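
The abstract does not give implementation details, but as a rough illustration of the two ideas it names, here is a minimal Python sketch of a voting-based self-reward with a diversity-targeted penalty. Everything in it is an assumption, not the paper's method: the function name `self_reward`, the use of a per-rollout confidence score as the voting weight, the entropy-based penalty, and the coefficient `lam` are illustrative choices only.

```python
# Hypothetical sketch (not the authors' code): a confidence-weighted, voting-based
# self-reward with a diversity-targeted penalty, in the spirit the abstract describes.
from collections import Counter
from math import log

def self_reward(answers, confidences, lam=0.5):
    """Reward each rollout by agreement with the confidence-weighted majority,
    penalized when the rollout pool collapses to a single answer.

    answers:     final answers extracted from N rollouts of one query
    confidences: per-rollout confidence scores in [0, 1] (e.g. mean token probability)
    lam:         weight of the diversity penalty (assumed hyperparameter)
    """
    # Confidence-weighted vote counts over distinct answers.
    weight = Counter()
    for ans, conf in zip(answers, confidences):
        weight[ans] += conf
    total = sum(weight.values()) or 1.0

    # Normalized answer distribution and its entropy (a diversity measure).
    probs = {a: w / total for a, w in weight.items()}
    entropy = -sum(p * log(p) for p in probs.values() if p > 0)
    max_entropy = log(len(answers)) if len(answers) > 1 else 1.0

    # Penalty grows as diversity vanishes, i.e. when all rollouts give the same
    # answer regardless of correctness (the "consistent reward hacking" failure mode).
    collapse_penalty = lam * (1.0 - entropy / max_entropy)

    # Per-rollout reward: agreement with the weighted majority, minus the penalty.
    majority = max(probs, key=probs.get)
    return [float(a == majority) - collapse_penalty for a in answers]

# Example: four rollouts, three agree; a fully unanimous pool would be dampened.
rewards = self_reward(["42", "42", "17", "42"], [0.9, 0.8, 0.4, 0.7])
print(rewards)
```

In this toy example the third rollout disagrees with the confidence-weighted majority and receives a negative reward, while a perfectly unanimous pool would have its rewards reduced by the collapse penalty, a simple stand-in for the diversity-targeted mitigation of reward hacking that the abstract refers to.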

Metadata

arXiv ID: 2603.16500 (https://arxiv.org/abs/2603.16500v1)
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, cs.CL
Comments: 15 pages
Published: 2026-03-17
Fetched: 2026-03-18 06:02

