Paper
Towards Robust Retrieval-Augmented Generation Based on Knowledge Graph: A Comparative Analysis
Authors
Hazem Amamou, Stéphane Gagnon, Alan Davoust, Anderson R. Avila
Abstract
Retrieval-Augmented Generation (RAG) was introduced to enhance the capabilities of Large Language Models (LLMs) beyond their encoded prior knowledge. This is achieved by providing LLMs with an external source of knowledge, which helps reduce factual hallucinations and enables access to new information not available during pretraining. However, inconsistent retrieved information can negatively affect LLM responses. The Retrieval-Augmented Generation Benchmark (RGB) was introduced to evaluate the robustness of RAG systems under such conditions. In this work, we use the RGB corpus to evaluate LLMs in four scenarios: noise robustness, information integration, negative rejection, and counterfactual robustness. We perform a comparative analysis between the RGB RAG baseline and GraphRAG, a knowledge graph-based retrieval system. We test three GraphRAG customizations to improve robustness. Results show improvements over the RGB baseline and provide insights for designing more reliable RAG systems for real-world scenarios.
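To make the retrieve-then-generate loop concrete, here is a minimal sketch of how retrieved context, including the noisy and counterfactual documents the RGB scenarios probe, ends up in an LLM's prompt. The retriever, corpus, and scoring below are illustrative toys, not the paper's actual setup (which uses the RGB corpus and GraphRAG).

```python
import re

def tokens(text):
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy lexical retriever)."""
    q = tokens(query)
    ranked = sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble the augmented prompt that would be passed to an LLM."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Paris is the capital of France.",         # relevant evidence
    "The 2024 Olympics were held in Paris.",   # noise document
    "Lyon is the capital of France.",          # counterfactual distractor
]

query = "What is the capital of France?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

Note that the counterfactual distractor scores as highly as the true evidence and reaches the prompt, while the noise document is filtered out; this is exactly the failure mode that motivates evaluating noise robustness and counterfactual robustness, and that knowledge graph-based retrieval aims to mitigate.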
Metadata
arXiv: 2603.05698v1 (cs.CL)
DOI: 10.1109/SMC58881.2025.11343466
Published: 2026-03-05
Comment: 6 pages, 5 figures, 3 tables illustrating the experimental framework and results. Submitted to the IEEE International Conference on Systems, Man, and Cybernetics (SMC 2025).