Interpretable Debiasing of Vision-Language Models for Social Fairness

Authors

Na Min An, Yoonna Jang, Yusuke Hirota, Ryo Hachiuma, Isabelle Augenstein, Hyunjung Shim

Abstract

The rapid advancement of Vision-Language models (VLMs) has raised growing concerns that their black-box reasoning processes could lead to unintended forms of social bias. Current debiasing approaches focus on mitigating surface-level bias signals through post-hoc learning or test-time algorithms, while leaving the internal dynamics of the model largely unexplored. In this work, we introduce an interpretable, model-agnostic bias mitigation framework, DeBiasLens, that localizes social attribute neurons in VLMs through sparse autoencoders (SAEs) applied to multimodal encoders. Building upon the disentanglement ability of SAEs, we train them on facial image or caption datasets without corresponding social attribute labels to uncover neurons highly responsive to specific demographics, including those that are underrepresented. By selectively deactivating the social neurons most strongly tied to bias for each group, we effectively mitigate socially biased behaviors of VLMs without degrading their semantic knowledge. Our research lays the groundwork for future auditing tools, prioritizing social fairness in emerging real-world AI systems.
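
The abstract describes a general recipe: train sparse autoencoders on frozen multimodal-encoder embeddings without attribute labels, locate latent units that respond strongly to a particular demographic group, and deactivate the units most tied to bias before the representation is used downstream. The sketch below is not the authors' implementation; it is a minimal illustration of that recipe under assumed names, dimensions, and a simple activation-gap selection rule, with synthetic tensors standing in for real encoder embeddings.

```python
# Minimal sketch (illustrative assumptions only, not the DeBiasLens code):
# 1) train an SAE on frozen encoder embeddings with an L1 sparsity penalty,
# 2) pick latent units whose activations differ most for one demographic group,
# 3) zero ("deactivate") those units and decode back to the embedding space.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse latent code
        return self.decoder(z), z


def train_sae(sae, embeddings, l1_coef=1e-3, epochs=50, lr=1e-3):
    """Reconstruction + L1 sparsity objective; no social attribute labels needed."""
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, z = sae(embeddings)
        loss = ((recon - embeddings) ** 2).mean() + l1_coef * z.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae


@torch.no_grad()
def top_attribute_units(sae, group_emb, other_emb, k=10):
    """One assumed selection rule: latent units whose mean activation is
    highest for the target demographic group relative to the rest."""
    _, z_group = sae(group_emb)
    _, z_other = sae(other_emb)
    gap = z_group.mean(0) - z_other.mean(0)
    return torch.topk(gap, k).indices


@torch.no_grad()
def debias(sae, emb, units):
    """Deactivate the selected latent units, then decode back to the
    embedding space consumed by the downstream VLM."""
    _, z = sae(emb)
    z[:, units] = 0.0
    return sae.decoder(z)


if __name__ == "__main__":
    d_model, d_hidden = 512, 2048
    # Stand-ins for frozen multimodal-encoder embeddings of face images or captions.
    group_emb = torch.randn(256, d_model)
    other_emb = torch.randn(256, d_model)
    sae = train_sae(SparseAutoencoder(d_model, d_hidden),
                    torch.cat([group_emb, other_emb]))
    units = top_attribute_units(sae, group_emb, other_emb, k=10)
    debiased = debias(sae, group_emb, units)
    print(debiased.shape)  # torch.Size([256, 512])
```

In this toy setup the ablated embeddings reconstruct most of the original signal while the handful of group-selective units are silenced, which mirrors the paper's claim of mitigating biased behavior without degrading semantic knowledge; the actual unit-selection and deactivation criteria in the paper may differ.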

Metadata

arXiv ID: 2602.24014
Provider: ARXIV
Primary Category: cs.CV
Categories: cs.CV, cs.AI
Comment: 25 pages, 30 figures, 13 tables; accepted to CVPR 2026
Published: 2026-02-27
Links: https://arxiv.org/abs/2602.24014v1 (abstract), https://arxiv.org/pdf/2602.24014v1 (PDF)
Fetched: 2026-03-02 06:04
