
DLEBench: Evaluating Small-scale Object Editing Ability for Instruction-based Image Editing Model

Authors

Shibo Hong, Boxian Ai, Jun Kuang, Wei Wang, FengJiao Chen, Zhongyuan Peng, Chenhao Huang, Yixin Cao

Abstract

Significant progress has been made in the field of Instruction-based Image Editing Models (IIEMs). However, while these models demonstrate plausible adherence to instructions and strong reasoning ability on current benchmarks, their ability to edit small objects remains underexplored, despite its importance for precise local editing and refining details in both real and generated images. In this paper, we introduce DeepLookEditBench (DLEBench), the first benchmark dedicated to assessing the abilities of IIEMs in editing small-scale objects. Specifically, we construct a challenging testbed comprising 1889 samples across seven instruction types. In these samples, target objects occupy only 1%-10% of the image area, covering complex scenarios such as partial occlusion and multi-object editing. To ensure robust evaluation on this benchmark, we propose an evaluation protocol with refined score rubrics to minimize subjectivity and ambiguity in two criteria: Instruction Following and Visual Consistency. This protocol also introduces a dual-mode evaluation framework (Tool-driven and Oracle-guided Modes) addressing the misalignment between LMM-as-a-Judge and human judgements on DLEBench. Empirical results on 10 IIEMs reveal significant performance gaps in small-scale object editing, highlighting the need for specialized benchmarks to advance this ability.

Metadata

arXiv ID: 2602.23622
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-27
Fetched: 2026-03-02 06:04
