Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning

Authors

Justin Lovelace, Christian Belardi, Sofian Zalouk, Adhitya Polavaram, Srivatsa Kundurthy, Kilian Q. Weinberger

Abstract

The Stop-Think-AutoRegress Language Diffusion Model (STAR-LDM) integrates latent diffusion planning with autoregressive generation. Unlike conventional autoregressive language models limited to token-by-token decisions, STAR-LDM incorporates a "thinking" phase that pauses generation to refine a semantic plan through diffusion before continuing. This enables global planning in continuous space prior to committing to discrete tokens. Evaluations show STAR-LDM significantly outperforms similar-sized models on language understanding benchmarks and achieves $>70\%$ win rates in LLM-as-judge comparisons for narrative coherence and commonsense reasoning. The architecture also allows straightforward control through lightweight classifiers, enabling fine-grained steering of attributes without model retraining while maintaining better fluency-control trade-offs than specialized approaches.
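
The page does not reproduce the paper's code, so the sketch below is only a rough illustration of the cycle the abstract describes: autoregressive decoding pauses periodically, a continuous semantic plan is refined by reverse diffusion conditioned on the prefix, an optional lightweight classifier nudges that plan via its gradient (classifier guidance), and token-by-token generation resumes conditioned on the plan. Every name here (`ar_model`, `planner.denoise_step`, `classifier`, `plan_every`, `guidance_scale`) is a hypothetical stand-in, not STAR-LDM's actual API.

```python
import torch

# A minimal sketch (not the authors' code) of a stop-think-autoregress loop.
# ar_model, planner, and classifier are hypothetical stand-ins for whatever
# components the paper actually uses.
@torch.no_grad()
def generate(ar_model, planner, prompt_ids, classifier=None,
             guidance_scale=2.0, num_denoise_steps=50,
             plan_every=64, max_new_tokens=256):
    """Alternate autoregressive decoding with latent diffusion planning."""
    ids = prompt_ids          # LongTensor of shape (1, prompt_len)
    plan = None
    for step in range(max_new_tokens):
        # "Stop and think": periodically refine a continuous semantic plan
        # by running the reverse diffusion process, conditioned on the
        # tokens generated so far.
        if step % plan_every == 0:
            context = ar_model.encode(ids)            # hidden states of prefix
            z = torch.randn(1, planner.latent_dim)    # start from pure noise
            for t in reversed(range(num_denoise_steps)):
                z = planner.denoise_step(z, t, context)
                if classifier is not None:
                    # Optional plug-and-play control: nudge the latent plan
                    # toward a target attribute using the gradient of a
                    # lightweight classifier (no LM retraining required).
                    with torch.enable_grad():
                        z_g = z.detach().requires_grad_(True)
                        # assume a binary attribute head -> logits (1, 2)
                        score = classifier(z_g, t).log_softmax(-1)[0, 1]
                        grad, = torch.autograd.grad(score, z_g)
                    z = z + guidance_scale * grad
            plan = z
        # Autoregressive phase: next-token prediction conditioned on the plan.
        logits = ar_model(ids, plan=plan)             # (1, seq_len, vocab)
        next_id = logits[:, -1].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```

Because the steering happens in the continuous latent plan rather than in the language model's weights, swapping or retraining only the small classifier changes the controlled attribute, which is consistent with the abstract's claim of fine-grained control without retraining the model.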

Metadata

arXiv ID: 2602.20528
Provider: ARXIV
Primary Category: cs.CL
Categories: cs.CL, cs.LG
Comment: COLM 2025
Published: 2026-02-24
Fetched: 2026-02-25 06:05
PDF: https://arxiv.org/pdf/2602.20528v1
