Boosting accuracy of student models via Masked Adaptive Self-Distillation

Haoran Zhao, Shuwen Tian, Jinlong Wang, Zhaopeng Deng, Xin Sun, Junyu Dong

Neurocomputing, Volume 637, Article 129988 (published 2025-03-26)
DOI: 10.1016/j.neucom.2025.129988
URL: https://www.sciencedirect.com/science/article/pii/S0925231225006605
Citations: 0
Abstract
Knowledge distillation (KD) has achieved impressive success, yet conventional KD approaches are time-consuming and computationally costly. Self-distillation methods provide an efficient alternative. However, existing self-distillation methods mostly suffer from information redundancy because the teacher and student models share the same network architecture, and they face the inherent limitation of lacking a high-capacity teacher model. To cope with these challenges, we propose a novel and efficient method named Masked Adaptive Self-Distillation (MASD). Specifically, we first introduce the Mask Generation Module, which masks random pixels of the feature maps and forces the network to reconstruct and refine more valuable features at different layers. Moreover, the Adaptive Weighting Mechanism is designed to dynamically adjust and optimize the weights of the supervisory signals using the probabilities derived from the mutual masked supervisory signals, thereby compensating for the absence of a high-capacity teacher model. We demonstrate the effectiveness of MASD on conventional image classification datasets and fine-grained datasets using state-of-the-art CNN architectures, and show that MASD significantly enhances the generalization of various backbone networks. For instance, on the CIFAR-100 classification benchmark, MASD achieves 80.40% Top-1 accuracy with the ResNet-18 architecture, surpassing the baseline by 4.16%.
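To make the two mechanisms described above concrete, here is a minimal PyTorch sketch of (1) random pixel masking on an intermediate feature map and (2) confidence-based adaptive weighting of two mutual supervisory signals. This is an illustration under stated assumptions, not the paper's implementation: the function names `mask_feature_map` and `adaptive_weights`, the `mask_ratio` hyperparameter, zero-filling as the masking operation, and normalizing by ground-truth-class probabilities are all assumptions, since the abstract does not specify these details.

```python
import torch
import torch.nn.functional as F


def mask_feature_map(feat: torch.Tensor, mask_ratio: float = 0.5) -> torch.Tensor:
    """Zero out a random subset of spatial positions in a feature map.

    feat: (B, C, H, W) feature tensor. mask_ratio is the fraction of
    spatial positions dropped (a hypothetical hyperparameter; the
    abstract does not state the masking granularity or fill value).
    """
    B, C, H, W = feat.shape
    # Per-sample binary mask shared across channels; masked positions
    # must be reconstructed by the downstream layers.
    keep = (torch.rand(B, 1, H, W, device=feat.device) > mask_ratio).float()
    return feat * keep


def adaptive_weights(logits_a: torch.Tensor, logits_b: torch.Tensor,
                     labels: torch.Tensor):
    """Derive per-branch loss weights from each masked view's softmax
    confidence in the ground-truth class, so the more reliable view
    supervises more strongly. This normalization is an assumption."""
    p_a = F.softmax(logits_a, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    p_b = F.softmax(logits_b, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    total = p_a + p_b + 1e-8  # avoid division by zero
    return p_a / total, p_b / total


# Illustrative usage with random tensors (shapes are arbitrary):
feat = torch.randn(8, 64, 16, 16)
masked = mask_feature_map(feat, mask_ratio=0.5)
logits_a, logits_b = torch.randn(8, 100), torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
w_a, w_b = adaptive_weights(logits_a, logits_b, labels)
```

Under this reading, the per-sample weights `w_a` and `w_b` would scale the two mutual distillation losses, standing in for the fixed teacher weighting that a conventional KD setup gets from its high-capacity teacher.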
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.