March 17, 2026

Industrial cuVSLAM Benchmark & Integration

Authors

Charbel Abi Hana, Kameel Amareen, Mohamad Mostafa, Dmitry Slepichev, Hesam Rabeti, Zheng Wang, Mihir Acharya, Anthony Rizk

Abstract

This work presents a comprehensive benchmark evaluation of visual odometry (VO) and visual SLAM (VSLAM) systems for mobile robot navigation in real-world logistical environments. We compare multiple visual odometry approaches across controlled trajectories covering translational, rotational, and mixed motion patterns, as well as a large-scale production facility dataset spanning approximately 1.7 km. Performance is evaluated using Absolute Pose Error (APE) against ground truth from a Vicon motion capture system and a LiDAR-based SLAM reference. Our results show that a hybrid stack combining the cuVSLAM front-end with a custom SLAM back-end achieves the strongest mapping accuracy, motivating a deeper integration of cuVSLAM as the core VO component in our robotics stack. We further validate this integration by deploying and testing the cuVSLAM-based VO stack on an NVIDIA Jetson platform.
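The abstract states that trajectories are scored with Absolute Pose Error (APE) against Vicon and LiDAR-SLAM ground truth. As a minimal sketch of the translational-APE metric (the function name and pure-Python implementation are our own, not from the paper; real benchmarking tools additionally handle timestamp association and SE(3) trajectory alignment before computing the error):

```python
import math

def ape_rmse(gt, est):
    """Translational Absolute Pose Error: RMSE of the per-pose
    Euclidean distance between ground-truth and estimated positions.
    gt, est: equal-length lists of (x, y, z) positions, assumed to be
    time-associated and expressed in a common, aligned frame."""
    assert len(gt) == len(est) and len(gt) > 0
    # squared translational error at each pose
    sq = [sum((g - e) ** 2 for g, e in zip(p, q)) for p, q in zip(gt, est)]
    return math.sqrt(sum(sq) / len(sq))

# Toy example: estimate is offset by 0.1 m in x at every pose
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)]
print(round(ape_rmse(gt, est), 3))  # 0.1
```

With a constant 0.1 m offset the RMSE equals that offset; on real data, APE aggregates drift accumulated over the whole trajectory, which is why it is a natural metric for the ~1.7 km production-facility run described above.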

Metadata

arXiv ID: 2603.16240
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-03-17
Fetched: 2026-03-18 06:02
