
Ego: Embedding-Guided Personalization of Vision-Language Models

Authors

Soroush Seifi, Simon Gardier, Vaggelis Dorovatas, Daniel Olmeda Reino, Rahaf Aljundi

Abstract

AI assistants that support humans in daily life are becoming increasingly feasible, driven by the rapid advancements in multimodal language models. A key challenge lies in overcoming the generic nature of these models to deliver personalized experiences. Existing approaches to personalizing large vision-language models often rely on additional training stages, which limit generality and scalability, or on engineered pipelines with external pre-trained modules, which hinder deployment efficiency. In this work, we propose an efficient personalization method that leverages the model's inherent ability to capture personalized concepts. Specifically, we extract visual tokens that predominantly represent the target concept by utilizing the model's internal attention mechanisms. These tokens serve as a memory of that specific concept, enabling the model to recall and describe it when it appears in test images. We conduct a comprehensive and unified evaluation of our approach and SOTA methods across various personalization settings, including single-concept, multi-concept, and video personalization, demonstrating strong performance gains with minimal personalization overhead.
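The abstract only outlines the mechanism, but its core idea, ranking a reference image's visual tokens by the attention they receive from the concept's text tokens and caching the top-scoring ones as a reusable "memory", can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation; the names `select_concept_tokens`, `attn_to_concept`, and `top_k`, and the token/hidden sizes, are hypothetical.

```python
import torch

def select_concept_tokens(visual_tokens: torch.Tensor,
                          attn_to_concept: torch.Tensor,
                          top_k: int = 16) -> torch.Tensor:
    """Hypothetical sketch: keep the visual tokens that receive the most
    attention from the text tokens naming the personalized concept.

    visual_tokens:   (num_visual_tokens, hidden_dim) embeddings produced by
                     the VLM's vision encoder/projector for a reference image.
    attn_to_concept: (num_visual_tokens,) attention mass each visual token
                     receives from the concept's text tokens, e.g. averaged
                     over heads and layers of the LLM's attention maps.
    """
    top_k = min(top_k, visual_tokens.shape[0])
    idx = torch.topk(attn_to_concept, k=top_k).indices
    return visual_tokens[idx]  # (top_k, hidden_dim) concept "memory"

# Toy usage: 576 visual tokens of width 1024 with random attention scores.
visual_tokens = torch.randn(576, 1024)
attn_to_concept = torch.rand(576)
memory = select_concept_tokens(visual_tokens, attn_to_concept, top_k=16)

# At test time, such a memory could be concatenated with the test image's
# tokens so the frozen model can recall the concept without fine-tuning.
test_tokens = torch.cat([memory, torch.randn(576, 1024)], dim=0)
print(memory.shape, test_tokens.shape)  # torch.Size([16, 1024]) torch.Size([592, 1024])
```

The selection step is training-free, which matches the abstract's claim of minimal personalization overhead: only a handful of token embeddings per concept need to be stored between sessions.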

Metadata

arXiv ID: 2603.09771
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-10
Fetched: 2026-03-11 06:02
