CLARC: C/C++ Benchmark for Robust Code Search
Authors
Kaicheng Wang, Liyan Huang, Weike Fang, Weihang Wang
Abstract
Efficient code retrieval is critical for developer productivity, yet existing benchmarks largely focus on Python and rarely stress-test robustness beyond superficial lexical cues. To address this gap, we introduce an automated pipeline for constructing code search datasets and present CLARC, a C/C++ benchmark built from real-world GitHub repositories. CLARC contains 1,245 query-code pairs for evaluation and 5,472 pairs for training. The benchmark incorporates LLM-generated natural language queries validated through rigorous human scoring and hypothesis testing. To analyze contextual requirements effectively, our pipeline first ensures code compilability, then categorizes code snippets by dependency complexity, distinguishing whether the code relies on custom-defined types or helper functions. The pipeline also enables CLARC to stress-test retrieval robustness under challenging settings, including identifier anonymization and compilation to low-level languages such as Assembly and WebAssembly. Under these conditions, our evaluation of six state-of-the-art models reveals sharp drops in retrieval effectiveness, highlighting the models' persistent reliance on lexical features rather than semantic understanding of code. Our dataset is publicly available at https://huggingface.co/datasets/ClarcTeam/CLARC.
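The identifier-anonymization setting described above can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the paper's actual tooling): a regex-based renamer that maps every identifier in a C snippet to a positional name while leaving keywords and standard library names untouched, so that only lexical cues are destroyed and code structure is preserved.

```python
import re

# Illustrative subset of C keywords and library names to leave untouched.
KEEP = {
    "int", "char", "float", "double", "void", "return", "if", "else",
    "for", "while", "struct", "sizeof", "const", "unsigned", "long",
    "printf", "malloc", "free", "NULL",
}

def anonymize(code: str) -> str:
    """Replace each distinct identifier with a positional name (id0, id1, ...)."""
    mapping = {}

    def rename(match):
        name = match.group(0)
        if name in KEEP:
            return name
        if name not in mapping:
            mapping[name] = f"id{len(mapping)}"
        return mapping[name]

    # \b...\b matches whole C-style identifiers; keywords pass through rename() unchanged.
    return re.sub(r"\b[A-Za-z_][A-Za-z0-9_]*\b", rename, code)

snippet = "int add(int a, int b) { return a + b; }"
print(anonymize(snippet))  # int id0(int id1, int id2) { return id1 + id2; }
```

A retrieval model that matched the query "add two numbers" via the token `add` alone would lose that signal here, while a model encoding the code's structure and data flow would be unaffected, which is the kind of gap the benchmark's robustness settings are designed to expose.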
Metadata
arXiv: 2603.04484v1 (cs.SE)
Published: 2026-03-04
Comments: Accepted by ICLR 2026
PDF: https://arxiv.org/pdf/2603.04484v1