
CAD-Prompted SAM3: Geometry-Conditioned Instance Segmentation for Industrial Objects

Authors

Zhenran Tang, Rohan Nagabhirava, Changliu Liu

Abstract

Verbal-prompted segmentation is inherently limited by the expressiveness of natural language and struggles with uncommon, instance-specific, or difficult-to-describe objects: scenarios frequently encountered in manufacturing and 3D printing environments. While image exemplars provide an alternative, they primarily encode appearance cues such as color and texture, which are often unrelated to a part's geometric identity. In industrial settings, a single component may be produced in different materials, finishes, or colors, making appearance-based prompting unreliable. In contrast, such objects are typically defined by precise CAD models that capture their canonical geometry. We propose a CAD-prompted segmentation framework built on SAM3 that uses canonical multi-view renderings of a CAD model as prompt input. The rendered views provide geometry-based conditioning independent of surface appearance. The model is trained using synthetic data generated from mesh renderings in simulation under diverse viewpoints and scene contexts. Our approach enables single-stage, CAD-prompted mask prediction, extending promptable segmentation to objects that cannot be robustly described by language or appearance alone.
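The abstract's core idea is to condition segmentation on canonical multi-view renderings of a CAD model rather than on language or appearance. As an illustration of the viewpoint-sampling step only, here is a minimal sketch that places cameras on a viewing sphere around the object and computes look-at extrinsics. The camera convention (OpenCV-style, +z forward, y down) and the view counts are assumptions for illustration; the paper does not specify its rendering setup, and the SAM3 prompting and mesh-rendering stages are omitted.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation R and translation t for a camera at `eye`
    looking toward `target` (OpenCV convention: x right, y down, z forward)."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    # Rows of R are the camera axes expressed in world coordinates.
    R = np.stack([right, -true_up, fwd])
    t = -R @ eye  # so that a world point p maps to R @ p + t
    return R, t

def canonical_viewpoints(n_azimuth=8, elevations_deg=(-30.0, 0.0, 30.0), radius=1.0):
    """Sample camera poses evenly on a viewing sphere around the object origin.
    The azimuth/elevation grid is a hypothetical choice, not the paper's."""
    poses = []
    for el in np.deg2rad(elevations_deg):
        for az in np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False):
            eye = radius * np.array([np.cos(el) * np.cos(az),
                                     np.cos(el) * np.sin(az),
                                     np.sin(el)])
            poses.append(look_at(eye))
    return poses
```

Each `(R, t)` pair would then drive an off-screen renderer of the CAD mesh, and the resulting view set would serve as the geometry-based prompt described in the abstract.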

Metadata

arXiv ID: 2602.20551
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-24
Fetched: 2026-02-25 06:05
