Paper

February 25, 2026

EgoAVFlow: Robot Policy Learning with Active Vision from Human Egocentric Videos via 3D Flow

Authors

Daesol Cho, Youngseok Jang, Danfei Xu, Sehoon Ha

Abstract

Egocentric human videos provide a scalable source of manipulation demonstrations; however, deploying them on robots requires active viewpoint control to maintain task-critical visibility, which human viewpoint imitation often fails to provide due to human-specific priors. We propose EgoAVFlow, which learns manipulation and active vision from egocentric videos through a shared 3D flow representation that supports geometric visibility reasoning and transfers without robot demonstrations. EgoAVFlow uses diffusion models to predict robot actions, future 3D flow, and camera trajectories, and refines viewpoints at test time with reward-maximizing denoising under a visibility-aware reward computed from predicted motion and scene geometry. Real-world experiments under actively changing viewpoints show that EgoAVFlow consistently outperforms prior human-demo-based baselines, demonstrating effective visibility maintenance and robust manipulation without robot demonstrations.
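Code sketch (illustrative)

The abstract describes a test-time refinement step: reward-maximizing denoising of camera trajectories under a visibility-aware reward computed from predicted motion and scene geometry. The Python sketch below illustrates that general idea only and is not the authors' implementation; the model interface (model(x, t), model.step), the toy reward terms, and names such as visibility_reward and guidance_scale are assumptions made for illustration.

import torch


def visibility_reward(camera_traj, predicted_flow, scene_points):
    # Hypothetical visibility-aware reward.
    #   camera_traj:    (T, 6)    camera poses (position + orientation)
    #   predicted_flow: (T, N, 3) predicted future 3D flow points
    #   scene_points:   (M, 3)    static scene geometry (e.g. a point cloud)
    # As a toy proxy for the paper's geometric visibility reasoning, reward
    # cameras that stay near the centroid of the predicted motion while
    # keeping clearance from the scene (a stand-in for field-of-view and
    # occlusion terms).
    cam_pos = camera_traj[:, :3]
    flow_centroid = predicted_flow.mean(dim=1)
    tracking = -((cam_pos - flow_centroid) ** 2).sum(dim=-1)
    clearance = torch.cdist(cam_pos, scene_points).min(dim=-1).values
    collision = -torch.relu(0.2 - clearance)
    return (tracking + collision).sum()


def reward_guided_denoising(model, x_T, predicted_flow, scene_points,
                            num_steps=50, guidance_scale=1.0):
    # Reverse diffusion over camera trajectories with reward-gradient
    # guidance. `model(x, t)` is assumed to return the denoised trajectory
    # estimate and `model.step(x0_hat, x, t)` the scheduler update; both
    # are placeholders, not a real API.
    x = x_T
    for t in reversed(range(num_steps)):
        x = x.detach().requires_grad_(True)
        x0_hat = model(x, t)
        reward = visibility_reward(x0_hat, predicted_flow, scene_points)
        grad = torch.autograd.grad(reward, x)[0]
        with torch.no_grad():
            x = model.step(x0_hat, x, t)      # standard denoising update
            x = x + guidance_scale * grad     # steer toward higher visibility
    return x

Under these assumptions, each denoising step nudges the sampled camera trajectory along the gradient of the visibility reward, so the refined viewpoint trades off tracking the predicted task motion against scene-geometry constraints.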

Metadata

arXiv ID: 2602.22461
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-02-25
Fetched: 2026-02-27 04:35
