AI LLM March 19, 2026

SQL-Commenter: Aligning Large Language Models for SQL Comment Generation with Direct Preference Optimization

Authors

Lei Yu, Peng Wang, Jingyuan Zhang, Xin Wang, Jia Xu, Li Yang, Changzhi Deng, Jiajia Ma, Fengjun Zhang

Abstract

SQL query comprehension is a significant challenge due to complex syntax, diverse join types, and deep nesting. Many queries lack adequate comments, severely hindering code readability, maintainability, and knowledge transfer. Automated SQL comment generation faces two main challenges: limited datasets that inadequately represent complex real-world queries, and Large Language Models' (LLMs) insufficient understanding of SQL-specific semantics. Our empirical analysis shows that even after continual pre-training and supervised fine-tuning, LLMs struggle with complex SQL semantics, yielding inaccurate comments. To address this, we propose SQL-Commenter, an advanced method based on LLaMA-3.1-8B. We first construct a comprehensive dataset of complex SQL queries with expert-verified comments. Next, we perform continual pre-training on a large SQL corpus to enhance the LLM's syntax and semantic understanding, followed by supervised fine-tuning. Finally, we introduce Direct Preference Optimization (DPO) using human feedback. SQL-Commenter utilizes a preference-based loss function to favor preferred outputs, enhancing fine-grained semantic learning and context-dependent quality assessment. Evaluated on the Spider and Bird benchmarks, SQL-Commenter significantly outperforms state-of-the-art baselines. On average, it surpasses the strongest baseline (Qwen3-14B) by 9.29, 4.99, and 13.23 percentage points on BLEU-4, METEOR, and ROUGE-L, respectively. Moreover, human evaluation demonstrates the superior quality of comments generated by SQL-Commenter in terms of correctness, completeness, and naturalness.
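The preference-based loss the abstract refers to is the standard DPO objective (Rafailov et al., 2023), which trains the policy to assign a higher implicit reward to the expert-preferred comment than to the rejected one, relative to a frozen reference model. A minimal sketch for a single preference pair — the function name and inputs are illustrative, assuming summed sequence log-probabilities are available from the policy and reference models:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (preferred, rejected) comment pair.

    logp_w / logp_l       : summed token log-probs of the preferred / rejected
                            comment under the policy being trained
    ref_logp_w / ref_logp_l : the same quantities under the frozen reference model
    beta                  : temperature controlling deviation from the reference
    """
    # Implicit reward margin: how much more the policy (relative to the
    # reference) favors the preferred comment over the rejected one.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin: shrinks as the policy learns
    # to rank the preferred comment above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this quantity over a dataset of human-ranked comment pairs is what lets the model internalize the fine-grained, context-dependent quality judgments the abstract describes, without training a separate reward model.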

Metadata

arXiv ID: 2603.18606
Provider: ARXIV
Primary Category: cs.SE
Published: 2026-03-19
Comments: Accepted to ICPC 2026
Fetched: 2026-03-21 06:01
