March 03, 2026

VisionCreator: A Native Visual-Generation Agentic Model with Understanding, Thinking, Planning and Creation

Authors

Jinxiang Lai, Zexin Lu, Jiajun He, Rongwei Quan, Wenzhe Zhao, Qinyu Yang, Qi Chen, Qin Lin, Chuyue Li, Tao Gao, Yuhao Shan, Shuai Shao, Song Guo, Qinglin Lu

Abstract

Visual content creation tasks demand a nuanced understanding of design conventions and creative workflows, capabilities that are challenging for general models, while workflow-based agents lack the specialized knowledge needed for autonomous creative planning. To overcome these challenges, we propose VisionCreator, a native visual-generation agentic model that unifies Understanding, Thinking, Planning, and Creation (UTPC) capabilities within an end-to-end learnable framework. Our work introduces four key contributions: (i) VisGenData-4k and its construction methodology, which uses a metacognition-based VisionAgent to generate high-quality creation trajectories with explicit UTPC structures; (ii) the VisionCreator agentic model, optimized through Progressive Specialization Training (PST) and Virtual Reinforcement Learning (VRL) within a high-fidelity simulated environment, enabling stable and efficient acquisition of UTPC capabilities for complex creation tasks; (iii) VisGenBench, a comprehensive benchmark featuring 1.2k test samples across diverse scenarios for standardized evaluation of multi-step visual creation capabilities; (iv) empirical results showing that, remarkably, our VisionCreator-8B/32B models outperform larger closed-source models across multiple evaluation dimensions. Overall, this work provides a foundation for future research in visual-generation agentic systems.

Metadata

arXiv ID: 2603.02681
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-03
Fetched: 2026-03-04 03:41
