February 19, 2026

DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers

Authors

Dahye Kim, Deepti Ghadiyaram, Raghudeep Gadde

Abstract

Diffusion Transformers (DiTs) have achieved state-of-the-art performance in image and video generation, but their success comes at the cost of heavy computation. This inefficiency is largely due to the fixed tokenization process, which uses constant-sized patches throughout the entire denoising phase, regardless of the content's complexity. We propose dynamic tokenization, an efficient test-time strategy that varies patch sizes based on content complexity and the denoising timestep. Our key insight is that early timesteps only require coarser patches to model global structure, while later iterations demand finer (smaller-sized) patches to refine local details. During inference, our method dynamically reallocates patch sizes across denoising steps for image and video generation and substantially reduces cost while preserving perceptual generation quality. Extensive experiments demonstrate the effectiveness of our approach: it achieves up to $3.52\times$ and $3.2\times$ speedup on FLUX-1.Dev and Wan 2.1, respectively, without compromising generation quality or prompt adherence.
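To make the coarse-to-fine intuition concrete, the sketch below shows one hypothetical way a timestep-dependent patch schedule could work and why it saves compute. The schedule `(4, 2, 1)`, the helper names, and the step counts are illustrative assumptions, not the paper's actual method or hyperparameters; transformer cost also grows superlinearly in token count, so fewer tokens at early steps compounds the savings.

```python
def patch_size_for_step(step: int, total_steps: int,
                        sizes: tuple = (4, 2, 1)) -> int:
    """Hypothetical schedule: coarse patches at early denoising steps
    (global structure), finer patches at late steps (local detail)."""
    frac = step / max(total_steps - 1, 1)          # progress in [0, 1]
    idx = min(int(frac * len(sizes)), len(sizes) - 1)
    return sizes[idx]

def token_count(latent_side: int, patch: int) -> int:
    """Number of transformer tokens for a square latent of side latent_side."""
    return (latent_side // patch) ** 2

# Early steps use patch size 4, late steps patch size 1.
schedule = [patch_size_for_step(s, 50) for s in range(50)]
print(schedule[0], schedule[-1])                   # 4 1

# For a 64x64 latent, coarse steps process 16x fewer tokens.
print(token_count(64, 4), token_count(64, 1))      # 256 4096
```

Since self-attention scales quadratically with token count, running most early steps at 256 tokens instead of 4096 is where, under this reading, the reported 3x-plus speedups would come from.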

Metadata

arXiv ID: 2602.16968
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-19
Fetched: 2026-02-21 18:51
