AI · LLM · February 24, 2026

Unseen-Codebases-Domain Data Synthesis and Training Based on Code Graphs

Authors

Guangsheng Ou, Qiming Zhang, Sirong Chen, Anji Li, Dong Xu, Tiancheng Luo, Dekun Dai, Cuiyun Gao, Long Wang, Jun Zhou, Mingwei Liu, Zibin Zheng

Abstract

In the context of newly released software frameworks, large language models (LLMs) often exhibit poor performance and a high rate of hallucination, as they were not exposed to such environments during training. Although inference-time augmentation techniques such as retrieval-augmented generation (RAG) can partially mitigate hallucinations, knowledge injection through prompting alone is insufficient to enable models to fully understand the intrinsic relationships among the components of a codebase, or to reason about correct compositions and apply them. Explicit knowledge injection can instead be achieved through post-training, but compared with public code domains, unseen codebases typically provide only source code and lack large volumes of high-quality, usage-oriented code that can be directly leveraged as training data. Consequently, when restricted to source code alone, existing data synthesis approaches cannot adequately capture unseen-codebase usage scenarios. To address these challenges, we propose UCD-Training, a two-stage training framework with reasoning-aware data synthesis grounded in a code graph constructed from unseen codebases. UCD-Training first parses the source code to build a code graph, then conducts dependency-preserving continued pretraining (CPT) on file-level dependency data, followed by graph-grounded supervised fine-tuning (SFT) on three types of synthesized data augmented with explicit reasoning traces: (1) single-hop relation reasoning data, (2) compositional API reasoning data, and (3) codebase utilization data. We further introduce UnseenCodeBench, a new benchmark for code generation on unseen codebases, and conduct comprehensive experiments across multiple codebases.
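The abstract's first step — parsing source code into a code graph with file-level dependency edges — can be sketched minimally. The snippet below is an illustrative approximation, not the authors' implementation: it uses Python's standard `ast` module to extract each file's in-repo imports and build a file-level dependency graph, the kind of structure that could then drive dependency-preserving CPT data ordering.

```python
# Illustrative sketch (assumption, not the paper's actual pipeline):
# build a file-level dependency graph for a Python codebase by
# resolving each file's imports against the repo's own modules.
import ast


def extract_imports(source: str) -> set[str]:
    """Return the set of top-level module names a source file imports."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                modules.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules


def build_file_graph(files: dict[str, str]) -> dict[str, set[str]]:
    """Map each file (keyed by module name) to the in-repo modules it depends on.

    External imports (e.g. `os`) are dropped, keeping only intra-codebase edges.
    """
    return {name: extract_imports(src) & files.keys() for name, src in files.items()}


# Hypothetical three-file codebase for illustration:
repo = {
    "utils": "import os\n",
    "model": "from utils import helper\n",
    "train": "import model\nimport utils\n",
}
graph = build_file_graph(repo)
# graph["train"] == {"model", "utils"}; graph["utils"] == set()
```

A topological order over such a graph (dependencies before dependents) is one plausible way to serialize files for the dependency-preserving CPT stage the abstract describes.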

Metadata

arXiv ID: 2602.20799
Provider: ARXIV
Primary Category: cs.SE
Published: 2026-02-24
Fetched: 2026-02-25 06:05
