Paper
VisBrowse-Bench: Benchmarking Visual-Native Search for Multimodal Browsing Agents
Authors
Zhengbo Zhang, Jinbo Su, Zhaowen Zhou, Changtao Miao, Yuhan Hong, Qimeng Wu, Yumeng Liu, Feier Wu, Yihe Tian, Yuhao Liang, Zitong Shan, Wanke Xia, Yi-Fan Zhang, Bo Zhang, Zhe Li, Shiming Xiang, Ying Yan
Abstract
The rapid advancement of Multimodal Large Language Models (MLLMs) has enabled browsing agents to acquire and reason over multimodal information in the real world. However, existing benchmarks suffer from two limitations: insufficient evaluation of visual reasoning ability and the neglect of web pages' native visual information in the reasoning chains. To address these challenges, we introduce VisBrowse-Bench, a new benchmark for visual-native search. It contains 169 VQA instances covering multiple domains and evaluates models' visual reasoning capabilities during the search process through multimodal evidence cross-validation via text-image retrieval and joint reasoning. The data were constructed by human experts using a multi-stage pipeline and underwent rigorous manual verification. We additionally propose an agent workflow that effectively drives the browsing agent to actively collect and reason over visual information during the search process. We comprehensively evaluated both open-source and closed-source models in this workflow. Experimental results show that even the best-performing model, Claude-4.6-Opus, achieves an accuracy of only 47.6%, while the proprietary Deep Research model, o3-deep-research, achieves an accuracy of only 41.1%. The code and data can be accessed at: https://github.com/ZhengboZhang/VisBrowse-Bench
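The accuracy figures above are fractions of the 169 VQA instances answered correctly. A minimal sketch of such a scoring loop is below; the instance schema (`question`/`answer` fields) and the exact-match normalization are assumptions for illustration, not the benchmark's actual evaluation protocol.

```python
# Hypothetical scoring sketch for a VQA-style benchmark.
# The instance format and matching rule are assumptions, not
# VisBrowse-Bench's real schema or metric.

def exact_match_accuracy(instances, predict):
    """Fraction of instances whose predicted answer equals the gold
    answer after whitespace/case normalization."""
    correct = sum(
        1 for inst in instances
        if predict(inst["question"]).strip().lower()
           == inst["answer"].strip().lower()
    )
    return correct / len(instances)

# Toy usage with a stub predictor (gets one of two answers right):
toy = [
    {"question": "Q1", "answer": "Paris"},
    {"question": "Q2", "answer": "42"},
]
stub = {"Q1": "paris", "Q2": "7"}.get
print(exact_match_accuracy(toy, lambda q: stub(q, "")))  # 0.5
```

In practice, free-form agent answers are usually graded with a more lenient matcher or an LLM judge rather than strict exact match.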
Metadata
arXiv: 2603.16289 (cs.CV, cs.AI)
Published: 2026-03-17
Related papers
Fractal universe and quantum gravity made simple
Fabio Briscese, Gianluca Calcagni • 2026-03-25
POLY-SIM: Polyglot Speaker Identification with Missing Modality Grand Challenge 2026 Evaluation Plan
Marta Moscati, Muhammad Saad Saeed, Marina Zanoni, Mubashir Noman, Rohan Kuma... • 2026-03-25
LensWalk: Agentic Video Understanding by Planning How You See in Videos
Keliang Li, Yansong Li, Hongze Shen, Mengdi Liu, Hong Chang, Shiguang Shan • 2026-03-25
Orientation Reconstruction of Proteins using Coulomb Explosions
Tomas André, Alfredo Bellisario, Nicusor Timneanu, Carl Caleman • 2026-03-25
The role of spatial context and multitask learning in the detection of organic and conventional farming systems based on Sentinel-2 time series
Jan Hemmerling, Marcel Schwieder, Philippe Rufin, Leon-Friedrich Thomas, Mire... • 2026-03-25