AI · LLM · February 26, 2026

OmniGAIA: Towards Native Omni-Modal AI Agents

Authors

Xiaoxi Li, Wenxiang Jiao, Jiarui Jin, Shijian Wang, Guanting Dong, Jiajie Jin, Hao Wang, Yinuo Wang, Ji-Rong Wen, Yuan Lu, Zhicheng Dou

Abstract

Human intelligence naturally intertwines omni-modal perception -- spanning vision, audio, and language -- with complex reasoning and tool usage to interact with the world. However, current multi-modal LLMs are primarily confined to bi-modal interactions (e.g., vision-language), lacking the unified cognitive capabilities required for general AI assistants. To bridge this gap, we introduce OmniGAIA, a comprehensive benchmark designed to evaluate omni-modal agents on tasks necessitating deep reasoning and multi-turn tool execution across video, audio, and image modalities. Constructed via a novel omni-modal event graph approach, OmniGAIA synthesizes complex, multi-hop queries derived from real-world data that require cross-modal reasoning and external tool integration. Furthermore, we propose OmniAtlas, a native omni-modal foundation agent built on a tool-integrated reasoning paradigm with active omni-modal perception. Trained on trajectories synthesized via a hindsight-guided tree exploration strategy, and with OmniDPO for fine-grained error correction, OmniAtlas effectively enhances the tool-use capabilities of existing open-source models. This work marks a step towards next-generation native omni-modal AI assistants for real-world scenarios.
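
The abstract sketches two technical ideas: a benchmark built from an omni-modal event graph, from which multi-hop queries are synthesized, and an agent that interleaves reasoning with tool calls over video, audio, and image inputs. As a rough illustration of the first idea only, the Python sketch below shows one plausible way such a graph could be organized; the class names, fields, and the "switch modality at every hop" rule are our own assumptions, not the paper's actual construction.

# Minimal, hypothetical sketch of an omni-modal event graph: events grounded in
# different modalities become nodes, shared entities become links, and candidate
# multi-hop questions correspond to paths that change modality at each hop.
# All names and fields here are illustrative assumptions, not the paper's schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Event:
    event_id: str
    modality: str          # "video" | "audio" | "image"
    description: str       # what the event shows or says
    entities: List[str]    # entities that can link it to other events

@dataclass
class OmniModalEventGraph:
    events: Dict[str, Event] = field(default_factory=dict)
    entity_index: Dict[str, List[str]] = field(default_factory=dict)  # entity -> event ids

    def add_event(self, event: Event) -> None:
        self.events[event.event_id] = event
        for entity in event.entities:
            self.entity_index.setdefault(entity, []).append(event.event_id)

    def cross_modal_paths(self, start_id: str, hops: int) -> List[List[str]]:
        """Enumerate simple paths of the given length that change modality at every hop."""
        paths: List[List[str]] = []

        def walk(path: List[str]) -> None:
            if len(path) - 1 == hops:
                paths.append(path)
                return
            current = self.events[path[-1]]
            for entity in current.entities:
                for next_id in self.entity_index.get(entity, []):
                    nxt = self.events[next_id]
                    if next_id not in path and nxt.modality != current.modality:
                        walk(path + [next_id])

        walk([start_id])
        return paths

# Toy usage: an audio event and a video event linked by the shared entity "red car".
graph = OmniModalEventGraph()
graph.add_event(Event("a1", "audio", "a speaker mentions a red car at 01:32", ["red car"]))
graph.add_event(Event("v1", "video", "a red car exits a parking garage", ["red car", "garage"]))
print(graph.cross_modal_paths("a1", hops=1))   # [['a1', 'v1']]

A query writer would then turn a path such as a1 -> v1 into a question whose answer requires consulting both the audio track and the video, which is the kind of cross-modal, multi-hop reasoning the abstract says the benchmark targets.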

Metadata

arXiv ID: 2602.22897
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-26
Fetched: 2026-02-27 04:35
