
TrianguLang: Geometry-Aware Semantic Consensus for Pose-Free 3D Localization

Authors

Bryce Grant, Aryeh Rothenberg, Atri Banerjee, Peng Wang

Abstract

Localizing objects and parts from natural language in 3D space is essential for robotics, AR, and embodied AI, yet existing methods face a trade-off between the accuracy and geometric consistency of per-scene optimization and the efficiency of feed-forward inference. We present TrianguLang, a feed-forward framework for 3D localization that requires no camera calibration at inference. Unlike prior methods that treat views independently, we introduce Geometry-Aware Semantic Attention (GASA), which utilizes predicted geometry to gate cross-view feature correspondence, suppressing semantically plausible but geometrically inconsistent matches without requiring ground-truth poses. Validated on five benchmarks including ScanNet++ and uCO3D, TrianguLang achieves state-of-the-art feed-forward text-guided segmentation and localization, reducing user effort from $O(N)$ clicks to a single text query. The model processes each frame at 1008$\times$1008 resolution in $\sim$57ms ($\sim$18 FPS) without optimization, enabling practical deployment for interactive robotics and AR applications. Code and checkpoints are available at https://cwru-aism.github.io/triangulang/.
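The abstract's core idea, using predicted geometry to gate cross-view semantic attention, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the exponential distance gate, and the temperature parameter `tau` are all assumptions chosen for illustration. The sketch only shows the general mechanism the abstract describes: attention logits from semantic similarity are down-weighted when the corresponding predicted 3D points are far apart, so a semantically plausible but geometrically inconsistent match loses its attention mass.

```python
import numpy as np

def geometry_gated_attention(q, k, pts_q, pts_k, tau=0.1):
    """Illustrative sketch of geometry-gated cross-view attention.

    q, k         : (Nq, D), (Nk, D) semantic features from two views
    pts_q, pts_k : (Nq, 3), (Nk, 3) predicted 3D points per token
    tau          : hypothetical distance temperature for the gate

    Semantic logits are combined with a geometric-consistency gate
    so that matches between distant 3D points are suppressed.
    """
    # Semantic similarity logits (scaled dot product).
    logits = q @ k.T / np.sqrt(q.shape[-1])
    # Pairwise distances between predicted 3D points of the two views.
    d = np.linalg.norm(pts_q[:, None, :] - pts_k[None, :, :], axis=-1)
    # Geometric gate: nearby points keep full weight, distant ones decay.
    gate = np.exp(-d / tau)
    # Apply the gate in log space, then take a numerically stable softmax.
    gated = logits + np.log(gate + 1e-9)
    w = np.exp(gated - gated.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)
```

For example, if token 1 in view A is semantically closest to token 1 in view B but their predicted 3D points are meters apart, the gate drives that logit down and the attention row redistributes toward geometrically consistent candidates, which is the suppression behavior the abstract attributes to GASA.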

Metadata

arXiv ID: 2603.08096
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-09
Fetched: 2026-03-10 05:43
