March 10, 2026

MM-Zero: Self-Evolving Multi-Model Vision Language Models From Zero Data

Authors

Zongxia Li, Hongyang Du, Chengsong Huang, Xiyang Wu, Lantao Yu, Yicheng He, Jing Xie, Xiaomin Wu, Zhichao Liu, Jiarui Zhang, Fuxiao Liu

Abstract

Self-evolution has emerged as a key paradigm for improving foundation models such as Large Language Models (LLMs) and Vision Language Models (VLMs) with minimal human intervention. While recent approaches have demonstrated that LLM agents can self-evolve from scratch with little to no data, VLMs introduce an additional visual modality that typically requires at least some seed data, such as images, to bootstrap the self-evolution process. In this work, we present Multi-model Multimodal Zero (MM-Zero), the first RL-based framework to achieve zero-data self-evolution for VLM reasoning. Moving beyond prior dual-role (Proposer and Solver) setups, MM-Zero introduces a multi-role self-evolving training framework comprising three specialized roles: a Proposer that generates abstract visual concepts and formulates questions; a Coder that translates these concepts into executable code (e.g., Python, SVG) to render visual images; and a Solver that performs multimodal reasoning over the generated visual content. All three roles are initialized from the same base model and trained using Group Relative Policy Optimization (GRPO), with carefully designed reward mechanisms that integrate execution feedback, visual verification, and difficulty balancing. Our experiments show that MM-Zero improves VLM reasoning performance across a wide range of multimodal benchmarks, establishing a scalable path toward self-evolving multi-model systems and extending the frontier of self-improvement beyond the conventional two-model paradigm.
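The abstract specifies the training recipe only at a high level. As a rough illustration, below is a minimal, hypothetical Python sketch of the pieces it does name: GRPO's group-relative advantage normalization, plus reward signals built from execution feedback, visual verification, and difficulty balancing. The function names, reward weights, and difficulty-shaping formula are assumptions made for this example, not the paper's implementation.

# Hypothetical sketch, not the paper's code: function names, weights, and
# the difficulty-shaping formula below are assumptions for illustration.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    # GRPO's group-relative normalization: each sampled rollout's reward is
    # compared against the group mean and scaled by the group's standard
    # deviation, A_i = (r_i - mean(r)) / (std(r) + eps).
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

def coder_reward(exec_ok, visually_verified, w_exec=0.5, w_verify=0.5):
    # Execution feedback (did the generated Python/SVG program run?) combined
    # with a visual check that the rendered image matches the proposed
    # concept. The 0.5/0.5 weighting is made up for the example.
    return w_exec * exec_ok + w_verify * visually_verified

def difficulty_reward(solver_pass_rate):
    # One plausible reading of "difficulty balancing": the Proposer is
    # rewarded most when the Solver succeeds about half the time, keeping
    # generated questions neither trivial nor impossible.
    return 1.0 - abs(2.0 * solver_pass_rate - 1.0)

# Toy usage: a group of four Solver rollouts on one generated image/question.
solver_rewards = [1.0, 0.0, 1.0, 0.0]  # 1.0 = matched the Proposer's answer key
print(group_relative_advantages(solver_rewards))         # correct rollouts get positive advantage
print(coder_reward(exec_ok=1.0, visually_verified=1.0))  # -> 1.0
print(difficulty_reward(mean(solver_rewards)))           # pass rate 0.5 -> maximal reward

Under these assumptions, the group-relative advantages would weight policy-gradient updates for each role, so no learned value model or external labels are needed, consistent with the zero-data claim.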

Metadata

arXiv ID: 2603.09206
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-10
Fetched: 2026-03-11 06:02
