
OMNIA: Closing the Loop by Leveraging LLMs for Knowledge Graph Completion

Authors

Frédéric Ieng, Soror Sahri, Mourad Ouzzani, Massinissa Hammaz, Salima Benbernou, Hanieh Khorashadizadeh, Sven Groppe, Farah Benamara

Abstract

Knowledge Graphs (KGs) are widely used to represent structured knowledge, yet their automatic construction, especially with Large Language Models (LLMs), often results in incomplete or noisy outputs. Knowledge Graph Completion (KGC) aims to infer and add missing triples, but most existing methods either rely on structural embeddings that overlook semantics or on language models that ignore the graph's structure and depend on external sources. In this work, we present OMNIA, a two-stage approach that bridges structural and semantic reasoning for KGC. It first generates candidate triples by clustering semantically related entities and relations within the KG, then validates them through lightweight embedding filtering followed by LLM-based semantic validation. OMNIA operates on the internal KG alone, without external sources, and specifically targets the implicit semantics that are most frequent in LLM-generated graphs. Extensive experiments on multiple datasets demonstrate that OMNIA significantly improves F1-score compared to traditional embedding-based models. These results highlight OMNIA's effectiveness and efficiency, as its clustering and filtering stages reduce both the search space and the validation cost while maintaining high-quality completion.
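The two-stage pipeline the abstract describes — candidate generation from related entities, then a cheap embedding filter followed by LLM-based validation — can be sketched on a toy KG. Everything below (the toy triples, the 2-d embeddings, the filtering criterion, and all function names) is an illustrative assumption, not the authors' implementation; in particular, the LLM step is stubbed out.

```python
# Hedged sketch of a two-stage KGC pipeline in the spirit of OMNIA.
# All names, data, and thresholds are illustrative assumptions.
from itertools import product
import math

# Toy KG: (head, relation, tail) triples, plus toy 2-d entity embeddings.
kg = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Paris", "located_in", "France"),
}
emb = {
    "Paris": (0.9, 0.1), "Berlin": (0.8, 0.2),
    "France": (0.1, 0.9), "Germany": (0.2, 0.8),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def generate_candidates(kg):
    """Stage 1: propose triples by pairing entities that play the same
    role in a shared relation (a crude stand-in for semantic clustering)."""
    by_rel = {}
    for h, r, t in kg:
        by_rel.setdefault(r, []).append((h, t))
    cands = set()
    for r, pairs in by_rel.items():
        heads = {h for h, _ in pairs}
        tails = {t for _, t in pairs}
        for h, t in product(heads, tails):
            if (h, r, t) not in kg:
                cands.add((h, r, t))
    return cands

def embedding_filter(cands, emb, max_sim=0.99):
    """Stage 2a: cheap structural filter. Toy criterion: drop self-loops
    and near-duplicate head/tail pairs; a real filter would score the
    full triple with learned embeddings."""
    return [(h, r, t) for h, r, t in cands
            if h != t and cosine(emb[h], emb[t]) < max_sim]

def llm_validate(triple):
    """Stage 2b: stand-in for an LLM yes/no judgment. A real system
    would prompt an LLM; this stub accepts everything."""
    return True

candidates = generate_candidates(kg)
accepted = [c for c in embedding_filter(candidates, emb) if llm_validate(c)]
```

On this toy graph the generator proposes (Paris, capital_of, Germany) and (Berlin, capital_of, France); the stub validator accepts both, including the false first one — precisely the kind of error the real LLM-based semantic validation stage exists to catch.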

Metadata

arXiv ID: 2603.11820
Provider: ARXIV
Primary Category: cs.DB
Published: 2026-03-12
Fetched: 2026-03-14 05:03
