
Towards LLM-centric Affective Visual Customization via Efficient and Precise Emotion Manipulating

Authors

Jiamin Luo, Xuqian Gu, Jingjing Wang, Jiahong Lu

Abstract

Previous studies on visual customization primarily rely on the objective alignment between various control signals (e.g., language, layout, and Canny edges) and the edited images; they largely ignore subjective emotional content and, more importantly, lack general-purpose foundation models for affective visual customization. With this in mind, this paper proposes an LLM-centric Affective Visual Customization (L-AVC) task, which focuses on generating images while modifying their subjective emotions via a Multimodal LLM. Further, this paper contends that making the model efficiently align emotion conversion in semantics (named inter-emotion semantic conversion) and precisely retain emotion-agnostic contents (named exter-emotion semantic retaining) are both important and challenging in this L-AVC task. To this end, this paper proposes an Efficient and Precise Emotion Manipulating (EPEM) approach for editing subjective emotions in images. Specifically, an Efficient Inter-emotion Converting (EIC) module is tailored to make the LLM efficiently align emotion conversion in semantics before and after editing, followed by a Precise Exter-emotion Retaining (PER) module to precisely retain the emotion-agnostic contents. Comprehensive experimental evaluations on our constructed L-AVC dataset demonstrate the clear advantage of the proposed EPEM approach over several state-of-the-art baselines on the L-AVC task. This justifies the importance of emotion information for L-AVC and the effectiveness of EPEM in efficiently and precisely manipulating such information.

Metadata

arXiv ID: 2602.18016
DOI: 10.1145/3774904.3792585
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-20
Fetched: 2026-02-23 05:33

