AI · LLM · March 25, 2026

CoCR-RAG: Enhancing Retrieval-Augmented Generation in Web Q&A via Concept-oriented Context Reconstruction

Authors

Kaize Shi, Xueyao Sun, Qika Lin, Firoj Alam, Qing Li, Xiaohui Tao, Guandong Xu

Abstract

Retrieval-augmented generation (RAG) has shown promising results in enhancing Q&A by incorporating information from the web and other external sources. However, the supporting documents retrieved from the heterogeneous web often originate from multiple sources with diverse writing styles, varying formats, and inconsistent granularity. Fusing such multi-source documents into a coherent and knowledge-intensive context remains a significant challenge, as the presence of irrelevant and redundant information can compromise the factual consistency of the inferred answers. This paper proposes the Concept-oriented Context Reconstruction RAG (CoCR-RAG), a framework that addresses the multi-source information fusion problem in RAG through linguistically grounded concept-level integration. Specifically, we introduce a concept distillation algorithm that extracts essential concepts from Abstract Meaning Representation (AMR), a stable semantic representation that structures the meaning of texts as logical graphs. The distilled concepts from multiple retrieved documents are then fused and reconstructed into a unified, information-intensive context by Large Language Models, which supplement only the necessary sentence elements to highlight the core knowledge. Experiments on the PopQA and EntityQuestions datasets demonstrate that CoCR-RAG significantly outperforms existing context-reconstruction methods across these Web Q&A benchmarks. Furthermore, CoCR-RAG shows robustness across various backbone LLMs, establishing itself as a flexible, plug-and-play component adaptable to different RAG frameworks.
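The concept distillation step described in the abstract extracts concept nodes from AMR graphs of the retrieved documents and fuses them across sources. The paper's actual algorithm is not given here, but the core idea can be sketched with a toy example: AMR graphs in PENMAN notation attach a concept label to each variable as `(v / concept-label …)`, so a minimal stand-in can pull those labels with a regex and deduplicate them across documents (the graphs, function name, and regex below are illustrative assumptions, not the authors' implementation).

```python
import re

def distill_concepts(amr_graphs):
    """Toy stand-in for concept distillation over AMR graphs.

    AMR in PENMAN notation labels each node as `(v / concept-label ...)`,
    so we extract every `/ label` pair and deduplicate across documents
    while preserving first-seen order. This only sketches the idea; the
    paper's algorithm operates on full AMR logical graphs.
    """
    seen, concepts = set(), []
    for graph in amr_graphs:
        for label in re.findall(r"/\s*([\w-]+)", graph):
            if label not in seen:
                seen.add(label)
                concepts.append(label)
    return concepts

# Two hypothetical retrieved "documents", already parsed into AMR.
docs = [
    "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))",
    "(s / say-01 :ARG0 (b / boy) :ARG1 (g / go-02))",
]
print(distill_concepts(docs))  # ['want-01', 'boy', 'go-02', 'say-01']
```

In CoCR-RAG, a deduplicated concept set like this would then be handed to an LLM, which reconstructs a single information-dense context by adding only the sentence elements needed to connect the concepts.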

Metadata

arXiv ID: 2603.23989
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-25
Fetched: 2026-03-26 06:02
