
Teacher-Student Diffusion Model for Text-Driven 3D Hand Motion Generation

Authors

Ching-Lam Cheng, Bin Zhu, Shengfeng He

Abstract

Generating realistic 3D hand motion from natural language is vital for VR, robotics, and human-computer interaction. Existing methods either focus on full-body motion, overlooking detailed hand gestures, or require explicit 3D object meshes, limiting generality. We propose TSHaMo, a model-agnostic teacher-student diffusion framework for text-driven hand motion generation. The student model learns to synthesize motions from text alone, while the teacher leverages auxiliary signals (e.g., MANO parameters) to provide structured guidance during training. A co-training strategy enables the student to benefit from the teacher's intermediate predictions while remaining text-only at inference. Evaluated using two diffusion backbones on GRAB and H2O, TSHaMo consistently improves motion quality and diversity. Ablations confirm its robustness and flexibility in using diverse auxiliary inputs without requiring 3D objects at test time.
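The abstract's co-training idea — a teacher conditioned on text plus auxiliary signals guiding a text-only student via its intermediate predictions — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: the linear "denoisers", the dimensions, the `lam` weight, and the single-step noise model are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 45 hand-pose DoF (MANO-like), 16-d text embedding,
# 10-d auxiliary signal. All shapes here are illustrative assumptions.
POSE, TEXT, AUX = 45, 16, 10

def denoiser(weights, noisy_pose, cond):
    """One toy denoising step: predict the clean pose from noisy pose + condition."""
    inp = np.concatenate([noisy_pose, cond])
    return weights @ inp

# Teacher conditions on text AND auxiliary signals; student on text alone,
# so only the student is usable at text-only inference time.
w_teacher = rng.normal(scale=0.1, size=(POSE, POSE + TEXT + AUX))
w_student = rng.normal(scale=0.1, size=(POSE, POSE + TEXT))

def co_training_loss(clean_pose, text_emb, aux, noise_scale=0.1, lam=0.5):
    noisy = clean_pose + noise_scale * rng.normal(size=POSE)
    t_pred = denoiser(w_teacher, noisy, np.concatenate([text_emb, aux]))
    s_pred = denoiser(w_student, noisy, text_emb)
    denoise = np.mean((s_pred - clean_pose) ** 2)  # match the ground-truth pose
    distill = np.mean((s_pred - t_pred) ** 2)      # match the teacher's prediction
    return denoise + lam * distill                 # combined co-training objective

loss = co_training_loss(rng.normal(size=POSE),
                        rng.normal(size=TEXT),
                        rng.normal(size=AUX))
print(loss)
```

At inference the student branch runs alone on text embeddings, which mirrors the abstract's claim that no 3D objects or auxiliary signals are needed at test time.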

Metadata

arXiv ID: 2603.24407
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-25
Fetched: 2026-03-26 06:02
Comment: 5 pages, accepted by ICASSP 2026