AI · LLM · March 25, 2026

PosterIQ: A Design Perspective Benchmark for Poster Understanding and Generation

Authors

Yuheng Feng, Wen Zhang, Haodong Duan, Xingxing Zou

Abstract

We present PosterIQ, a design-driven benchmark for poster understanding and generation, annotated across composition structure, typographic hierarchy, and semantic intent. It includes 7,765 image-annotation instances and 822 generation prompts spanning real, professional, and synthetic cases. To bridge visual design cognition and generative modeling, we define tasks for layout parsing, text-image correspondence, typography/readability and font perception, design quality assessment, and controllable, composition-aware generation with metaphor. We evaluate state-of-the-art MLLMs and diffusion-based generators, finding persistent gaps in visual hierarchy, typographic semantics, saliency control, and intention communication; commercial models lead on high-level reasoning but act as insensitive automatic raters, while generators render text well yet struggle with composition-aware synthesis. Extensive analyses show PosterIQ is both a quantitative benchmark and a diagnostic tool for design reasoning, offering reproducible, task-specific metrics. We aim to catalyze models' creativity and integrate human-centred design principles into generative vision-language systems.

Metadata

arXiv ID: 2603.24078
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-25
Fetched: 2026-03-26 06:02

Comments: CVPR 2026. Project Page: https://github.com/ArtmeScienceLab/PosterIQ-Benchmark