Rethinking the Efficiency and Effectiveness of Reinforcement Learning for Radiology Report Generation

Authors

Zilin Lu, Ruifeng Yuan, Weiwei Cao, Wanxing Chang, Zhongyu Wei, Sinuo Wang, Yong Xia, Ling Zhang, Jianpeng Zhang

Abstract

Radiologists strongly desire fully automated AI for radiology report generation (R2G), yet existing approaches fall short of clinical utility. Reinforcement learning (RL) holds potential to address these shortcomings, but its adoption for this task remains underexplored. In this paper, we revisit RL in terms of data efficiency and optimization effectiveness for R2G tasks. First, we explore the impact of data quantity and quality on the performance of RL in medical contexts, revealing that data quality plays a more critical role than quantity. To this end, we propose a diagnostic diversity-based data sampling strategy that achieves comparable performance with fewer samples. Second, we observe that the majority of tokens in radiology reports are template-like and diagnostically uninformative, while clinically critical tokens are so infrequent that they risk being overlooked during optimization. To tackle this, we introduce Diagnostic Token-weighted Policy Optimization (DiTPO), which directly optimizes for clinical accuracy by using a diagnostic F1 score as the reward signal. Unlike standard RL approaches that treat all tokens equally, DiTPO explicitly models the varying importance of different tokens through rule- or gradient-based mechanisms to prioritize clinically relevant content. Extensive experiments on the MIMIC-CXR, IU-Xray, and CheXpert Plus datasets demonstrate that our framework achieves state-of-the-art (SOTA) performance while requiring substantially fewer RL training samples. Notably, on MIMIC-CXR, our framework attains an F1 score of 0.516 using only 20% of the RL training samples.
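The two ideas in the abstract — selecting RL training data for diagnostic diversity, and weighting per-token policy-gradient contributions by clinical importance under an F1 reward — can be illustrated with a minimal sketch. Note that the function names, the greedy label-coverage heuristic, and the specific weighting scheme below are illustrative assumptions, not the paper's actual algorithm:

```python
from collections import Counter

def diversity_sample(labels, k):
    """Greedy diagnostic-diversity sampling (illustrative heuristic):
    pick k reports that best cover under-represented finding labels.
    `labels[i]` is the set of diagnostic labels for report i."""
    chosen, seen = [], Counter()
    remaining = set(range(len(labels)))
    for _ in range(min(k, len(labels))):
        # A candidate scores higher the rarer its labels are so far.
        best = max(remaining,
                   key=lambda i: sum(1.0 / (1 + seen[l]) for l in labels[i]))
        chosen.append(best)
        seen.update(labels[best])
        remaining.remove(best)
    return chosen

def f1_reward(pred_labels, ref_labels):
    """Diagnostic F1 between predicted and reference finding labels,
    used as a scalar reward for a generated report."""
    tp = len(set(pred_labels) & set(ref_labels))
    if tp == 0:
        return 0.0
    p = tp / len(set(pred_labels))
    r = tp / len(set(ref_labels))
    return 2 * p * r / (p + r)

def weighted_pg_loss(token_logprobs, token_weights, reward):
    """Token-weighted policy-gradient loss: clinically important tokens
    carry larger weights, so the reward signal is not diluted by the
    template-like filler that dominates radiology reports."""
    z = sum(token_weights)
    return -reward * sum(w * lp
                         for w, lp in zip(token_weights, token_logprobs)) / z
```

In this sketch, a uniform `token_weights` vector recovers a standard sequence-level REINFORCE objective; upweighting the few diagnosis-bearing tokens is what concentrates the gradient on clinically relevant content.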

Metadata

arXiv ID: 2603.04022
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-04
Fetched: 2026-03-05 06:06
