
Seeing Eye to Eye: Enabling Cognitive Alignment Through Shared First-Person Perspective in Human-AI Collaboration

Authors

Zhuyu Teng, Pei Chen, Yichen Cai, Ruoqing Lu, Zhaoqu Jiang, Jiayang Li, Weitao You, Lingyun Sun

Abstract

Despite advances in multimodal AI, current vision-based assistants often remain inefficient in collaborative tasks. We identify two key gulfs: a communication gulf, where users must translate rich parallel intentions into verbal commands due to a channel mismatch, and an understanding gulf, where AI struggles to interpret subtle embodied cues. To address these, we propose Eye2Eye, a framework that leverages the first-person perspective as a channel for human-AI cognitive alignment. It integrates three components: (1) joint attention coordination for fluid focus alignment, (2) revisable memory to maintain evolving common ground, and (3) reflective feedback allowing users to clarify and refine the AI's understanding. We implement this framework in an AR prototype and evaluate it through a user study and a post-hoc pipeline evaluation. Results show that Eye2Eye significantly reduces task completion time and interaction load while increasing trust, demonstrating that its components work in concert to improve collaboration.
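The abstract names the three components but does not describe their implementation; the sketch below is only a hypothetical illustration of how they might compose into one shared state. All names here (GazeTarget, Eye2EyeState, align_focus, remember, reflect) are invented for this example and do not come from the paper.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class GazeTarget:
    """An object the user is fixating on, detected in the egocentric video."""
    label: str
    confidence: float


@dataclass
class Eye2EyeState:
    """Rolling shared state for the three components named in the abstract."""
    focus: Optional[GazeTarget] = None                            # (1) joint attention
    common_ground: dict[str, str] = field(default_factory=dict)  # (2) revisable memory

    def align_focus(self, gaze: GazeTarget) -> None:
        # (1) Joint attention coordination: adopt the user's fixation as the
        # shared focus instead of waiting for a verbal referring expression.
        self.focus = gaze

    def remember(self, key: str, value: str) -> None:
        # (2) Revisable memory: store a belief that later turns may overwrite,
        # so common ground can evolve rather than only accumulate.
        self.common_ground[key] = value

    def reflect(self, key: str, correction: str) -> None:
        # (3) Reflective feedback: the user corrects a stored belief, and the
        # revision replaces the stale entry instead of appending to it.
        self.common_ground[key] = correction


state = Eye2EyeState()
state.align_focus(GazeTarget("red mug", 0.91))
state.remember("red mug", "the user's cup")
state.reflect("red mug", "the guest's cup")  # user clarifies; memory is revised
print(state.focus.label, "->", state.common_ground["red mug"])
```

The point of the sketch is the revision semantics: reflective feedback overwrites the memory entry rather than adding a second, contradictory one, which is what "maintaining evolving common ground" suggests.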

Metadata

arXiv ID: 2603.12701
Provider: ARXIV
Primary Category: cs.HC
Categories: cs.HC, cs.AI
Published: 2026-03-13
Fetched: 2026-03-16 06:01
DOI: 10.1145/3772318.3791059
Comment: 19 pages, 11 figures. Accepted at ACM CHI 2026, Barcelona
PDF: https://arxiv.org/pdf/2603.12701v1
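The Provider and Fetched fields suggest this entry was pulled from the public arXiv Atom API. A minimal sketch of that fetch step follows; the export.arxiv.org/api/query endpoint and its id_list parameter are real, but treating this page's pipeline as a plain API query is an assumption.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Atom namespace used by the arXiv API's feed entries.
ATOM = "{http://www.w3.org/2005/Atom}"
url = "http://export.arxiv.org/api/query?id_list=2603.12701"

with urllib.request.urlopen(url) as resp:
    feed = ET.fromstring(resp.read())

entry = feed.find(f"{ATOM}entry")
print(entry.findtext(f"{ATOM}title"))
print(entry.findtext(f"{ATOM}published"))
for name in entry.findall(f"{ATOM}author/{ATOM}name"):
    print(name.text)
```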

