March 16, 2026

GradCFA: A Hybrid Gradient-Based Counterfactual and Feature Attribution Explanation Algorithm for Local Interpretation of Neural Networks

Authors

Jacob Sanderson, Hua Mao, Wai Lok Woo

Abstract

Explainable Artificial Intelligence (XAI) is increasingly essential as AI systems are deployed in critical fields such as healthcare and finance, offering transparency into AI-driven decisions. Two major XAI paradigms, counterfactual explanations (CFX) and feature attribution (FA), serve distinct roles in model interpretability. This study introduces GradCFA, a hybrid framework that combines CFX and FA to improve interpretability by explicitly optimizing feasibility, plausibility, and diversity, key qualities that existing methods often leave unbalanced. Unlike most CFX research, which focuses on binary classification, GradCFA extends to multi-class scenarios, supporting a wider range of applications. We evaluate GradCFA's validity, proximity, sparsity, plausibility, and diversity against state-of-the-art methods, including Wachter, DiCE, and CARE for CFX, and SHAP for FA. Results show that GradCFA generates feasible, plausible, and diverse counterfactuals while offering valuable FA insights. By identifying influential features and validating their impact, GradCFA advances AI interpretability. The code for this work is available at https://github.com/jacob-ws/GradCFs.
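To illustrate the general idea of gradient-based counterfactual search referenced in the abstract, the sketch below implements a minimal Wachter-style objective: minimize squared prediction loss toward a target class plus an L1 distance penalty to the original input. This is not the authors' GradCFA algorithm (which additionally optimizes feasibility, plausibility, and diversity); the model weights, loss form, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in model: logistic regression with fixed weights.
W = np.array([1.5, -2.0, 0.5])
b = -0.2

def predict(x):
    """Probability of class 1 under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

def counterfactual(x, target=1.0, lam=0.1, lr=0.05, steps=500):
    """Wachter-style gradient search: minimize
    (f(x') - target)^2 + lam * ||x' - x||_1 over x'."""
    xp = x.copy()
    for _ in range(steps):
        p = predict(xp)
        # Gradient of the squared prediction loss (logistic derivative).
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * W
        # Subgradient of the L1 distance penalty.
        grad_dist = lam * np.sign(xp - x)
        xp -= lr * (grad_pred + grad_dist)
    return xp

x = np.array([-1.0, 0.5, 0.0])   # original input, classified as class 0
cf = counterfactual(x)           # nudged across the decision boundary
```

The distance penalty keeps the counterfactual close to the original input (proximity), and the L1 norm encourages changing few features (sparsity); plausibility and diversity terms, as in GradCFA, would require additional loss components.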

Metadata

arXiv ID: 2603.15373
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-16
Fetched: 2026-03-17 06:02

Publication

Journal Reference: IEEE Trans. Artif. Intell., 6 (2025), 2575-2587
DOI: 10.1109/TAI.2025.3552057
Secondary Category: cs.AI