AI LLM March 19, 2026

UGID: Unified Graph Isomorphism for Debiasing Large Language Models

Authors

Zikang Ding, Junchi Yao, Junhao Li, Yi Zhang, Wenbo Jiang, Hongbo Liu, Lijie Hu

Abstract

Large language models (LLMs) exhibit pronounced social biases. Output-level or data-optimization-based debiasing methods cannot fully resolve these biases, and many prior works have shown that biases are embedded in internal representations. We propose Unified Graph Isomorphism for Debiasing large language models (UGID), an internal-representation-level debiasing framework that models the Transformer as a structured computational graph, where attention mechanisms define the routing edges of the graph and hidden states define the graph nodes. Specifically, debiasing is formulated as enforcing invariance of the graph structure across counterfactual inputs, with differences allowed only on sensitive attributes. UGID jointly constrains attention routing and hidden representations in bias-sensitive regions, effectively preventing bias migration across architectural components. To achieve effective behavioral alignment without degrading general capabilities, we introduce a log-space constraint on sensitive logits and a selective anchor-based objective to preserve definitional semantics. Extensive experiments on large language models demonstrate that UGID effectively reduces bias under both in-distribution and out-of-distribution settings, significantly reduces internal structural discrepancies, and preserves model safety and utility.
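The abstract casts debiasing as enforcing graph-structure invariance between two counterfactual forward passes: attention maps act as edges, hidden states as nodes, and only sensitive-attribute positions may differ. A minimal sketch of that kind of invariance penalty, assuming attention maps and hidden states have already been extracted as arrays (all function and variable names here are illustrative, not the paper's actual objective):

```python
import numpy as np

def graph_invariance_loss(attn_a, attn_b, hid_a, hid_b, sensitive_mask):
    """Illustrative invariance penalty between two counterfactual passes.

    attn_*: (T, T) attention maps  -> graph "edges" (routing)
    hid_*:  (T, D) hidden states   -> graph "nodes"
    sensitive_mask: (T,) bool, True at attribute tokens that are
    *allowed* to differ (e.g. the swapped "he"/"she" position).
    """
    # Edge term: penalize any change in attention routing.
    edge = np.mean((attn_a - attn_b) ** 2)
    # Node term: penalize hidden-state drift only at non-sensitive positions.
    keep = ~sensitive_mask
    node = np.mean((hid_a[keep] - hid_b[keep]) ** 2)
    return edge + node

# Toy example: 4 tokens, 8-dim hidden states, one sensitive position.
rng = np.random.default_rng(0)
attn = rng.random((4, 4))
hid = rng.random((4, 8))
mask = np.array([False, True, False, False])  # position 1 is the attribute

# Identical counterfactual passes incur zero penalty.
print(graph_invariance_loss(attn, attn, hid, hid, mask))  # 0.0
```

In a real training loop this penalty would be computed on framework tensors (so gradients flow into the model) and combined with the paper's log-space logit constraint and anchor objective, neither of which is shown here.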

Metadata

arXiv ID: 2603.19144
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-19
Fetched: 2026-03-20 06:02
