AI LLM February 26, 2026

Asymmetric Idiosyncrasies in Multimodal Models

Authors

Muzi Tao, Chufan Shi, Huijuan Wang, Shengbang Tong, Xuezhe Ma

Abstract

In this work, we study idiosyncrasies in caption models and their downstream impact on text-to-image models. We design a systematic analysis: given either a generated caption or the corresponding image, we train neural networks to predict the originating caption model. Our results show that text classification yields very high accuracy (99.70%), indicating that captioning models embed distinctive stylistic signatures. In contrast, these signatures largely disappear in the generated images, with classification accuracy dropping to at most 50% even for the state-of-the-art Flux model. To better understand this cross-modal discrepancy, we further analyze the data and find that the generated images fail to preserve key variations present in the captions, such as differences in the level of detail, emphasis on color and texture, and the distribution of objects within a scene. Overall, our classification-based framework provides a novel methodology for quantifying both the stylistic idiosyncrasies of caption models and the prompt-following ability of text-to-image systems.
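The classification probe described in the abstract can be illustrated with a minimal sketch: train a classifier on captions labeled by their (hypothetical) source model and check whether the source is recoverable from style alone. The paper trains neural networks on real captions; the toy data, model names, and the TF-IDF + logistic-regression stand-in below are all illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of a caption-source classification probe.
# NOTE: all captions and model names here are synthetic toy data with
# exaggerated stylistic "signatures"; the paper uses neural classifiers
# on real captions from multiple captioning models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical model_A writes terse, literal captions;
# hypothetical model_B writes ornate, scene-setting captions.
captions = [
    "A photo of a dog. The dog is brown. The dog sits on grass.",
    "A photo of a cat. The cat is black. The cat sits on a rug.",
    "In this serene scene, a majestic dog rests upon lush emerald grass.",
    "In this cozy scene, an elegant cat lounges upon a woven crimson rug.",
]
labels = ["model_A", "model_A", "model_B", "model_B"]

# A linear classifier over TF-IDF features stands in for the paper's
# neural text classifier; the probing logic is the same.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(captions, labels)

# Probe an unseen caption written in model_B's ornate register.
pred = clf.predict(
    ["In this tranquil scene, a graceful bird perches upon a mossy branch."]
)[0]
print(pred)
```

High accuracy from such a probe indicates the captioning models leave recoverable stylistic fingerprints; the paper's cross-modal finding is that the same probe, run on images generated from those captions, recovers the source far less reliably.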

Metadata

arXiv ID: 2602.22734
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-26
Fetched: 2026-02-27 04:35


Project page: https://muzi-tao.github.io/asymmetric-idiosyncrasies/