Kimodo: Scaling Controllable Human Motion Generation

Authors

Davis Rempe, Mathis Petrovich, Ye Yuan, Haotian Zhang, Xue Bin Peng, Yifeng Jiang, Tingwu Wang, Umar Iqbal, David Minor, Michael de Ruyter, Jiefeng Li, Chen Tessler, Edy Lim, Eugene Jeong, Sam Wu, Ehsan Hassani, Michael Huang, Jin-Bey Yu, Chaeyeon Chung, Lina Song, Olivier Dionne, Jan Kautz, Simon Yuen, Sanja Fidler

Abstract

High-quality human motion data is becoming increasingly important for applications in robotics, simulation, and entertainment. Recent generative models offer a potential data source, enabling human motion synthesis through intuitive inputs like text prompts or kinematic constraints on poses. However, the small scale of public mocap datasets has limited the motion quality, control accuracy, and generalization of these models. In this work, we introduce Kimodo, an expressive and controllable kinematic motion diffusion model trained on 700 hours of optical motion capture data. Our model generates high-quality motions while being easily controlled through text and a comprehensive suite of kinematic constraints, including full-body keyframes, sparse joint positions/rotations, 2D waypoints, and dense 2D paths. This is enabled by a carefully designed motion representation and a two-stage denoiser architecture that decomposes root and body prediction to minimize motion artifacts while allowing flexible constraint conditioning. Experiments on this large-scale mocap dataset justify key design decisions and analyze how scaling dataset and model size affects performance.
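The abstract describes the two-stage denoiser only at a high level, and the paper's actual architecture is not reproduced here. As a rough, illustrative sketch of what decomposing root and body prediction could look like in PyTorch: a first stage denoises the root trajectory, and a second stage denoises the body pose conditioned on the predicted root, so root errors are kept out of body prediction. All dimensions (ROOT_DIM, BODY_DIM, COND_DIM), the per-frame MLP backbone, and the conditioning scheme below are placeholder assumptions, not details from the paper.

import torch
import torch.nn as nn

# All sizes are placeholders, not values from the paper.
ROOT_DIM, BODY_DIM, COND_DIM = 4, 132, 512

class StageDenoiser(nn.Module):
    """One denoising stage: a small MLP applied per frame (illustration
    only; a real motion denoiser would be a much larger sequence model)."""
    def __init__(self, in_dim, cond_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x, t, cond):
        # x: (B, F, in_dim); t: (B,) diffusion timesteps; cond: (B, F, cond_dim)
        B, F, _ = x.shape
        t_feat = t.view(B, 1, 1).expand(B, F, 1).to(x.dtype)
        return self.net(torch.cat([x, t_feat, cond], dim=-1))

class TwoStageDenoiser(nn.Module):
    """Denoise the root trajectory first, then condition the body-pose
    stage on the root prediction (a guess at the paper's decomposition)."""
    def __init__(self):
        super().__init__()
        self.root_stage = StageDenoiser(ROOT_DIM, COND_DIM)
        self.body_stage = StageDenoiser(BODY_DIM, COND_DIM + ROOT_DIM)

    def forward(self, noisy_root, noisy_body, t, cond):
        root_pred = self.root_stage(noisy_root, t, cond)
        body_cond = torch.cat([cond, root_pred], dim=-1)
        body_pred = self.body_stage(noisy_body, t, body_cond)
        return root_pred, body_pred

if __name__ == "__main__":
    B, F = 2, 60                        # batch size, frames (arbitrary)
    model = TwoStageDenoiser()
    root = torch.randn(B, F, ROOT_DIM)
    body = torch.randn(B, F, BODY_DIM)
    t = torch.randint(0, 1000, (B,))    # noise levels
    cond = torch.randn(B, F, COND_DIM)  # e.g. a broadcast text embedding
    r, b = model(root, body, t, cond)
    print(r.shape, b.shape)             # (2, 60, 4) (2, 60, 132)

One plausible reading of this decomposition is that trajectory-level constraints such as 2D waypoints and dense 2D paths would naturally condition the root stage, while full-body keyframes and sparse joint constraints would condition the body stage; consult the paper for how Kimodo actually wires constraints into each stage.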

Metadata

arXiv ID: 2603.15546
Provider: ARXIV
Primary Category: cs.CV
Categories: cs.CV, cs.GR, cs.RO
Published: 2026-03-16
Fetched: 2026-03-17 06:02
Project page: https://research.nvidia.com/labs/sil/projects/kimodo/
Links: https://arxiv.org/abs/2603.15546v1 (abstract), https://arxiv.org/pdf/2603.15546v1 (PDF)
