March 24, 2026

Designing to Forget: Deep Semi-parametric Models for Unlearning

Authors

Amber Yijia Zheng, Yu-Shan Tai, Raymond A. Yeh

Abstract

Recent advances in machine unlearning have focused on developing algorithms to remove specific training samples from a trained model. In contrast, we observe that not all models are equally easy to unlearn. Hence, we introduce a family of deep semi-parametric models (SPMs) that exhibit non-parametric behavior during unlearning. SPMs use a fusion module that aggregates information from each training sample, enabling explicit test-time deletion of selected samples without altering model parameters. Empirically, we demonstrate that SPMs achieve task performance competitive with parametric models on image classification and generation, while being significantly more efficient to unlearn. Notably, on ImageNet classification, SPMs reduce the prediction gap relative to a retrained (oracle) baseline by $11\%$ and achieve over $10\times$ faster unlearning than existing approaches on parametric models. The code is available at https://github.com/amberyzheng/spm_unlearning.
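The abstract's core idea, that a model which fuses per-sample contributions non-parametrically can unlearn by simply deleting those contributions at test time, can be illustrated with a toy sketch. This is not the paper's architecture; it is a minimal, hypothetical attention-over-memory classifier head invented here to show why deletion requires no parameter updates. The class name, its methods, and all hyperparameters are assumptions for illustration only.

```python
import numpy as np

class SemiParametricHead:
    """Toy semi-parametric classifier head (illustrative sketch only).

    Keeps one embedding per training sample in a non-parametric memory.
    Predictions "fuse" softmax-weighted votes over that memory, so
    deleting a sample's row removes its influence exactly, with no
    retraining and no change to any learned parameters.
    """

    def __init__(self, embeddings, labels, num_classes, temperature=1.0):
        self.memory = np.asarray(embeddings, dtype=float)  # (N, d)
        self.labels = np.asarray(labels, dtype=int)        # (N,)
        self.num_classes = num_classes
        self.temperature = temperature

    def predict_proba(self, query):
        # Attention scores of the query against every stored sample.
        scores = self.memory @ np.asarray(query, dtype=float) / self.temperature
        scores -= scores.max()                 # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum()
        # Fuse per-sample votes into class probabilities.
        probs = np.zeros(self.num_classes)
        np.add.at(probs, self.labels, weights)
        return probs

    def unlearn(self, sample_ids):
        # Test-time deletion: drop the rows; nothing else changes.
        keep = np.setdiff1d(np.arange(len(self.labels)), sample_ids)
        self.memory = self.memory[keep]
        self.labels = self.labels[keep]
```

A parametric classifier would instead need gradient-based fine-tuning or full retraining to remove a sample's influence; here `unlearn` is a constant-parameter array operation, which is the intuition behind the reported speedup.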

Metadata

arXiv ID: 2603.22870
Provider: ARXIV
Primary Category: cs.CV
Comment: CVPR 2026
Published: 2026-03-24
Fetched: 2026-03-25 06:02
