Dual-Teacher Distillation with Subnetwork Rectification for Black-Box Domain Adaptation

Authors

Zhe Zhang, Jing Li, Wanli Xue, Xu Cheng, Jianhua Zhang, Qinghua Hu, Shengyong Chen

Abstract

Assuming that neither source data nor the source model is accessible, black-box domain adaptation represents a highly practical yet extremely challenging setting, as transferable information is restricted to the predictions of the black-box source model, which can only be queried with target samples. Existing approaches attempt to extract transferable knowledge through pseudo-label refinement or by leveraging external vision-language models (ViLs), but they often suffer from noisy supervision or make insufficient use of the semantic priors provided by ViLs, which ultimately hinders adaptation performance. To overcome these limitations, we propose a dual-teacher distillation with subnetwork rectification (DDSR) model that jointly exploits the specific knowledge embedded in black-box source models and the general semantic information of a ViL. DDSR adaptively integrates their complementary predictions to generate reliable pseudo-labels for the target domain and introduces a subnetwork-driven regularization strategy to mitigate overfitting caused by noisy supervision. Furthermore, the refined target predictions iteratively enhance both the pseudo-labels and the ViL prompts, enabling more accurate and semantically consistent adaptation. Finally, the target model is further optimized through self-training with class-wise prototypes. Extensive experiments on multiple benchmark datasets validate the effectiveness of our approach, demonstrating consistent improvements over state-of-the-art methods, including those using source data or models.
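The abstract names two mechanisms without spelling them out: adaptive fusion of the two teachers' predictions into pseudo-labels, and self-training with class-wise prototypes. Below is a minimal NumPy sketch of one plausible reading; the entropy-based weighting, the function names, and the toy data are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax over class logits."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fuse_dual_teacher(source_probs: np.ndarray,
                      vil_probs: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Fuse black-box source and ViL predictions per target sample.

    Hypothetical scheme: weight each teacher by its prediction entropy
    (lower entropy -> higher confidence -> larger weight). The paper's
    actual integration rule may differ.
    """
    eps = 1e-12
    h_src = -(source_probs * np.log(source_probs + eps)).sum(axis=1)
    h_vil = -(vil_probs * np.log(vil_probs + eps)).sum(axis=1)
    # Per-sample weights that sum to 1; the more confident teacher dominates.
    w_src = np.exp(-h_src) / (np.exp(-h_src) + np.exp(-h_vil))
    fused = w_src[:, None] * source_probs + (1 - w_src)[:, None] * vil_probs
    return fused, fused.argmax(axis=1)

def classwise_prototypes(features: np.ndarray, labels: np.ndarray,
                         num_classes: int) -> np.ndarray:
    """Mean target feature per pseudo-class (class-wise prototypes)."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def refine_by_prototypes(features: np.ndarray,
                         protos: np.ndarray) -> np.ndarray:
    """Relabel each sample by its cosine-nearest class prototype."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    p = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12)
    return (f @ p.T).argmax(axis=1)

# Toy usage: 4 target samples, 3 classes, 8-dim features.
rng = np.random.default_rng(0)
src_probs = softmax(rng.normal(size=(4, 3)))   # black-box teacher queries
vil_probs = softmax(rng.normal(size=(4, 3)))   # ViL zero-shot predictions
feats = rng.normal(size=(4, 8))                # target-model features

fused, pseudo = fuse_dual_teacher(src_probs, vil_probs)
protos = classwise_prototypes(feats, pseudo, num_classes=3)
refined = refine_by_prototypes(feats, protos)
print(pseudo, refined)
```

Entropy weighting is just one way to let the more confident teacher dominate per sample; the paper's actual integration rule, subnetwork rectification step, and prompt refinement loop would replace these placeholders.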

Metadata

arXiv ID: 2603.22908
Provider: ARXIV
Primary Category: cs.CV
Categories: cs.CV, cs.LG
Comment: This manuscript is under review at IEEE Transactions on Multimedia
Published: 2026-03-24
Fetched: 2026-03-25 06:02
PDF: https://arxiv.org/pdf/2603.22908v1
