Paper
OMNIA: Closing the Loop by Leveraging LLMs for Knowledge Graph Completion
Authors
Frédéric Ieng, Soror Sahri, Mourad Ouzzani, Massinissa Hammaz, Salima Benbernou, Hanieh Khorashadizadeh, Sven Groppe, Farah Benamara
Abstract
Knowledge Graphs (KGs) are widely used to represent structured knowledge, yet their automatic construction, especially with Large Language Models (LLMs), often results in incomplete or noisy outputs. Knowledge Graph Completion (KGC) aims to infer and add missing triples, but most existing methods either rely on structural embeddings that overlook semantics or on language models that ignore the graph's structure and depend on external sources. In this work, we present OMNIA, a two-stage approach that bridges structural and semantic reasoning for KGC. It first generates candidate triples by clustering semantically related entities and relations within the KG, then validates them through lightweight embedding filtering followed by LLM-based semantic validation. OMNIA operates solely on the internal KG, without external sources, and specifically targets implicit semantics, which are most frequent in LLM-generated graphs. Extensive experiments on multiple datasets demonstrate that OMNIA significantly improves F1-score compared to traditional embedding-based models. These results highlight OMNIA's effectiveness and efficiency, as its clustering and filtering stages reduce both search space and validation cost while maintaining high-quality completion.
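The two-stage pipeline described in the abstract (cluster-based candidate generation, then lightweight embedding filtering followed by LLM validation) can be sketched as follows. This is a minimal illustration only: the toy embeddings, the similarity thresholds, and the `llm_validate` stub are assumptions for demonstration, not OMNIA's actual components.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy entity embeddings (hypothetical; in OMNIA these would be learned from the KG).
emb = {
    "Paris": [0.9, 0.1],
    "Lyon": [0.85, 0.2],
    "France": [0.1, 0.9],
}
kg = {("Paris", "locatedIn", "France")}

def propose_candidates(kg, emb, sim_threshold=0.9):
    """Stage 1: for each known triple, propose new triples whose head is
    semantically close to the original head (a simple stand-in for
    cluster-based candidate generation)."""
    candidates = set()
    for (h, r, t) in kg:
        for e in emb:
            if e != h and (e, r, t) not in kg \
                    and cosine(emb[e], emb[h]) >= sim_threshold:
                candidates.add((e, r, t))
    return candidates

def embedding_filter(candidates, emb, keep_threshold=0.0):
    """Stage 2a: lightweight embedding filter (here: a crude head-tail
    plausibility score) that shrinks the set sent to the LLM."""
    return {c for c in candidates if cosine(emb[c[0]], emb[c[2]]) >= keep_threshold}

def llm_validate(triple):
    """Stage 2b: LLM-based semantic validation. Stubbed out here; in OMNIA
    this would be an actual LLM call judging the triple's plausibility."""
    return True  # placeholder: accept every surviving candidate

cands = propose_candidates(kg, emb)
final = {c for c in embedding_filter(cands, emb) if llm_validate(c)}
print(final)  # {('Lyon', 'locatedIn', 'France')}
```

With these toy vectors, "Lyon" is close enough to "Paris" to inherit the locatedIn relation, while "France" is not, so only one candidate survives both stages. The point of the staged design is that the cheap filters run before the expensive LLM call, which is what the abstract credits for the reduced search space and validation cost.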
Metadata
arXiv: 2603.11820v1 · Primary category: cs.DB · Published: 2026-03-12
Related papers
Gen-Searcher: Reinforcing Agentic Search for Image Generation
Kaituo Feng, Manyuan Zhang, Shuang Chen, Yunlong Lin, Kaixuan Fan, Yilei Jian... • 2026-03-30
On-the-fly Repulsion in the Contextual Space for Rich Diversity in Diffusion Transformers
Omer Dahary, Benaya Koren, Daniel Garibi, Daniel Cohen-Or • 2026-03-30
Graphilosophy: Graph-Based Digital Humanities Computing with The Four Books
Minh-Thu Do, Quynh-Chau Le-Tran, Duc-Duy Nguyen-Mai, Thien-Trang Nguyen, Khan... • 2026-03-30
ParaSpeechCLAP: A Dual-Encoder Speech-Text Model for Rich Stylistic Language-Audio Pretraining
Anuj Diwan, Eunsol Choi, David Harwath • 2026-03-30
RAD-AI: Rethinking Architecture Documentation for AI-Augmented Ecosystems
Oliver Aleksander Larsen, Mahyar T. Moghaddam • 2026-03-30