
SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

Authors

Quentin Guimard, Federico Bartsch, Simone Caldarella, Rahaf Aljundi, Elisa Ricci, Massimiliano Mancini

Abstract

Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.
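The core mechanism described above, encoding a dense CLIP text embedding into a sparse autoencoder latent space, suppressing bias-relevant neurons, and decoding back, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the SAE weights here are random stand-ins for a trained model, and the bias-relevant latent indices (`bias_idx`) are hypothetical, whereas SEM identifies them from data.

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_sparse = 512, 4096  # CLIP embedding dim, SAE latent dim (illustrative)

# Toy SAE parameters; a real SAE is trained to sparsely reconstruct CLIP embeddings.
W_enc = rng.standard_normal((d_embed, d_sparse)) / np.sqrt(d_embed)
b_enc = np.zeros(d_sparse)
W_dec = rng.standard_normal((d_sparse, d_embed)) / np.sqrt(d_sparse)
b_dec = np.zeros(d_embed)

def sae_encode(x):
    """ReLU encoder: dense embedding -> sparse, non-negative latent features."""
    return np.maximum(x @ W_enc + b_enc, 0.0)

def sae_decode(z):
    """Linear decoder: sparse latents -> reconstructed dense embedding."""
    return z @ W_dec + b_dec

def modulate(x, bias_idx, scale=0.0):
    """Scale down bias-linked latents, leave query-relevant ones intact, decode."""
    z = sae_encode(x)
    z[..., bias_idx] *= scale
    return sae_decode(z)

x = rng.standard_normal(d_embed)      # stand-in for a CLIP text embedding
bias_idx = np.array([3, 17, 42])      # hypothetical bias-relevant latent indices
x_debiased = modulate(x, bias_idx, scale=0.0)
```

Because the intervention happens per-neuron after a ReLU, the edit is non-linear in the original embedding space, which is the precision advantage the abstract claims over dense-space projections.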

Metadata

arXiv ID: 2603.19028
Provider: ARXIV
Primary Category: cs.CV (cross-listed: cs.AI, cs.LG)
Published: 2026-03-19
Fetched: 2026-03-20 06:02
Comment: CVPR Findings 2026
Project website: https://sparse-embedding-modulation.github.io/
