March 12, 2026

ZeroSense: How Vision Matters in Long Context Compression

Authors

Yonghan Gao, Zehong Chen, Lijian Xu, Jingzhi Chen, Jingwei Guan, Xingyu Zeng

Abstract

Recent visual-text compression (VTC) methods, typified by DeepSeek-OCR, report impressively high token compression ratios for long-context modeling tasks by leveraging text-to-image rendering. However, existing evaluation protocols rely heavily on downstream task performance. Such metrics fail to accurately measure text preservation because of the strong inherent linguistic priors of Multimodal Large Language Models (MLLMs). In this work, we introduce a new evaluation framework that decouples MLLM capabilities from the measurement in order to faithfully assess VTC quality. Within this framework, we further introduce the ZeroSense Benchmark, which ensures low semantic correlation among test samples. By eliminating contextual dependencies, our benchmark guarantees that evaluation results purely reflect VTC quality, unaffected by the semantic inference capabilities of downstream models. Extensive experiments across multiple datasets demonstrate that VTC quality and downstream task accuracy diverge significantly, highlighting the necessity of our decoupled evaluation framework.
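The abstract's central claim is that downstream accuracy conflates two things: how well the compressed visual representation preserves the text, and how well the MLLM's language prior can guess missing text from context. The following is a minimal sketch of that decoupling idea, not the paper's actual protocol: it builds a low-semantic-correlation ("zero-sense") test string that no language prior can complete, renders it to an image as VTC methods do, and scores reconstruction with an edit-distance metric. All helper names (`make_zero_sense_sample`, `render_text`, `fidelity`, `mllm_decode`) and the sampling/rendering details are assumptions for illustration only.

```python
# Illustrative sketch of decoupled VTC evaluation; the sampling scheme,
# rendering parameters, and fidelity metric are assumptions, not the
# paper's actual ZeroSense benchmark or API.
import random
import string
from PIL import Image, ImageDraw

def make_zero_sense_sample(n_words: int = 64, seed: int = 0) -> str:
    """Build a test string with low semantic correlation: random
    'words' that a language prior cannot complete from context."""
    rng = random.Random(seed)
    words = [
        "".join(rng.choices(string.ascii_lowercase, k=rng.randint(3, 9)))
        for _ in range(n_words)
    ]
    return " ".join(words)

def render_text(text: str, width: int = 1024) -> Image.Image:
    """Render text to an image, as in visual-text compression (VTC):
    the image is fed to a vision encoder instead of text tokens."""
    img = Image.new("RGB", (width, 256), "white")
    ImageDraw.Draw(img).text((8, 8), text, fill="black")
    return img

def fidelity(reference: str, decoded: str) -> float:
    """Text-preservation score: 1 - normalized Levenshtein distance.
    On zero-sense inputs this cannot be inflated by linguistic priors."""
    m, n = len(reference), len(decoded)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == decoded[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return 1.0 - prev[n] / max(m, n, 1)

sample = make_zero_sense_sample()
image = render_text(sample)
# decoded = mllm_decode(image)  # hypothetical: run the MLLM under test
# print(fidelity(sample, decoded))
```

The point of the random-word construction is that on ordinary prose an MLLM can reconstruct much of a degraded input from its language prior alone, so downstream accuracy overstates compression quality; on zero-sense strings, any correctly recovered character must have survived the visual compression itself.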

Metadata

arXiv ID: 2603.11846
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-12
Fetched: 2026-03-13 06:02
