
Navig-AI-tion: Navigation by Contextual AI and Spatial Audio

Authors

Mathias N. Lystbæk, Haley Adams, Ranjith Kagathi Ananda, Eric J Gonzalez, Luca Ballan, Qiuxuan Wu, Andrea Colaço, Peter Tan, Mar Gonzalez-Franco

Abstract

Audio-only walking navigation can leave users disoriented: instructions rely on vague cardinal directions and lack real-time environmental context, leading to frequent errors. To address this, we present a novel system that integrates a Vision Language Model (VLM) with a spatial audio cue. Our system extracts environmental landmarks to anchor navigation instructions and, crucially, provides a directional spatial audio signal when the user faces the wrong direction, indicating the precise turn direction. In a user study (n=12), the VLM system with the spatial audio cue reduced route deviations compared to both a VLM-only system and an audio-only Google Maps baseline. Users reported that the spatial audio cue effectively supported orientation and that landmark-anchored instructions provided a better navigation experience than audio-only Google Maps. This work offers an initial look at the utility of incorporating directional cues, especially real-time corrective spatial audio, into future audio-only navigation systems.
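The abstract does not spell out how the corrective cue is computed, but the mechanism it describes (compare the user's facing direction against the route bearing and, past some tolerance, pan a cue toward the required turn) can be sketched. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation; the function names, the 30° tolerance, and the constant-power stereo panning are all hypothetical.

```python
import math

def heading_error_deg(user_heading_deg: float, target_bearing_deg: float) -> float:
    """Signed angular difference in degrees, wrapped to (-180, 180].
    Negative: the route bearing lies to the user's left; positive: to the right."""
    return (target_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0

def corrective_cue_gains(user_heading_deg: float,
                         target_bearing_deg: float,
                         tolerance_deg: float = 30.0):
    """Return None while the user faces the route within tolerance (stay silent),
    otherwise (left_gain, right_gain) for a cue panned toward the turn side."""
    err = heading_error_deg(user_heading_deg, target_bearing_deg)
    if abs(err) <= tolerance_deg:
        return None
    # Map the signed error to a pan position in [-1, 1], then apply a
    # constant-power pan law so perceived loudness stays stable across positions.
    pan = max(-1.0, min(1.0, err / 180.0))
    theta = (pan + 1.0) * math.pi / 4.0   # 0 (full left) .. pi/2 (full right)
    return (math.cos(theta), math.sin(theta))

# Example: user faces 350 degrees, the route continues at bearing 80 degrees.
# The signed error is +90 degrees, so the cue is panned to the right.
print(corrective_cue_gains(350.0, 80.0))
```

A production system would presumably render the cue binaurally with head-related transfer functions (HRTFs) driven by live head tracking rather than simple stereo gains, but the orientation logic would be the same.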

Metadata

arXiv ID: 2603.13200
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-03-13
Fetched: 2026-03-16 06:01
DOI: 10.1145/3772363.3799295
Comment: 5 pages, 2 figures, 6-page appendix; to be published in Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems (CHI EA '26)
Links: https://arxiv.org/abs/2603.13200v1 (abstract), https://arxiv.org/pdf/2603.13200v1 (PDF), https://doi.org/10.1145/3772363.3799295 (DOI)

