AI LLM March 02, 2026

ViTex: Visual Texture Control for Multi-Track Symbolic Music Generation via Discrete Diffusion Models

Authors

Xiaoyu Yi, Qi He, Gus Xia, Ziyu Wang

Abstract

In automatic music generation, a central challenge is to design controls that enable meaningful human-machine interaction. Existing systems often rely on extrinsic inputs such as text prompts or metadata, which do not allow humans to directly shape the composition. While prior work has explored intrinsic controls such as chords or hierarchical structure, these approaches mainly address piano or vocal-accompaniment settings, leaving multi-track symbolic music largely underexplored. We identify instrumentation, the choice of instruments and their roles, as a natural dimension of control in multi-track composition, and propose ViTex, a visual representation of instrumental texture. In ViTex, color encodes instrument choice, spatial position represents pitch and time, and stroke properties capture local textures. Building on this representation, we develop a discrete diffusion model conditioned on ViTex and chord progressions to generate 8-measure multi-track symbolic music, enabling explicit texture-level control while maintaining strong unconditional generation quality. The demo page and code are available at https://vitex2025.github.io/.
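To make the representation concrete, here is a minimal sketch of what a ViTex-style canvas could look like: color encodes instrument choice, the horizontal axis is time, and the vertical axis is pitch, with each note drawn as a short horizontal stroke. The color map, grid resolution, and function names below are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical ViTex-style canvas: color <- instrument, x <- time, y <- pitch.
# All constants and names here are assumptions for illustration only.

# Illustrative instrument -> RGB color map (assumption).
INSTRUMENT_COLORS = {
    "piano": (66, 135, 245),
    "bass": (245, 130, 48),
    "strings": (60, 180, 75),
}

N_STEPS = 128    # 8 measures x 16 sixteenth-note steps (assuming 4/4 time)
N_PITCHES = 128  # full MIDI pitch range 0..127

def render_vitex(notes):
    """Render (instrument, onset_step, duration_steps, midi_pitch) tuples
    onto a pitch-by-time RGB canvas; white background."""
    canvas = [[(255, 255, 255)] * N_STEPS for _ in range(N_PITCHES)]
    for instrument, onset, duration, pitch in notes:
        color = INSTRUMENT_COLORS[instrument]
        # Each note becomes a horizontal "stroke" in the instrument's color.
        for t in range(onset, min(onset + duration, N_STEPS)):
            canvas[pitch][t] = color
    return canvas

notes = [
    ("piano", 0, 4, 60),    # middle C, one beat
    ("bass", 0, 8, 36),     # low C, two beats
    ("strings", 4, 12, 67),
]
canvas = render_vitex(notes)
```

Such a canvas could then serve as the conditioning input for the diffusion model, alongside a chord progression; how the paper actually rasterizes strokes or captures local texture properties is not specified here.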

Metadata

arXiv ID: 2603.01984
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-03-02
Fetched: 2026-03-03 04:34
