Diqi Chen, Yang Li, Jiajun Liu, Jun Zhou, Yongsheng Gao
{"title":"SATE: Efficient knowledge distillation with implicit student-aware teacher ensembles","authors":"Diqi Chen , Yang Li , Jiajun Liu , Jun Zhou , Yongsheng Gao","doi":"10.1016/j.patcog.2025.112355","DOIUrl":null,"url":null,"abstract":"<div><div>Recent findings suggest that with the same teacher architecture, a fully converged or “stronger” checkpoint surprisingly leads to a worse student. This can be explained by the Information Bottleneck (IB) principle, as the features of a weaker teacher transfer more “dark” knowledge because they maintain higher mutual information with the inputs. Meanwhile, various works have shown that severe teacher-student structural disparity or capability mismatch often leads to worse student performance. To deal with these issues, we propose a generalizable and efficient Knowledge Distillation (KD) framework with implicit Student-Aware Teacher Ensembles (SATE). The SATE framework simultaneously trains a student network and a student-aware intermediate teacher as a learning companion. With the proposed co-training strategy, the intermediate teacher is trained gradually and forms implicit ensembles of weaker teachers along the learning process. Such a design enables the student model to retain more dark knowledge for better generalization ability. The proposed framework improves the training scheme in a plug-and-play way so that it can be applied to improve various classic and state-of-the-art KD methods on both intra-domain (up to <span><math><mrow><mn>2.184</mn><mspace></mspace><mo>%</mo></mrow></math></span>) and cross-domain (up to <span><math><mrow><mn>7.358</mn><mspace></mspace><mo>%</mo></mrow></math></span>) settings, under a diversified configurations on teacher-student architectures, and achieves a major efficient advantage over other generic frameworks. The code is available at <span><span>https://github.com/diqichen91/SATE.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"172 ","pages":"Article 112355"},"PeriodicalIF":7.6000,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325010167","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Recent findings suggest that, with the same teacher architecture, a fully converged or “stronger” checkpoint surprisingly leads to a worse student. This can be explained by the Information Bottleneck (IB) principle: the features of a weaker teacher transfer more “dark” knowledge because they maintain higher mutual information with the inputs. Meanwhile, various works have shown that severe teacher-student structural disparity or capability mismatch often leads to worse student performance. To deal with these issues, we propose a generalizable and efficient Knowledge Distillation (KD) framework with implicit Student-Aware Teacher Ensembles (SATE). The SATE framework simultaneously trains a student network and a student-aware intermediate teacher as a learning companion. With the proposed co-training strategy, the intermediate teacher is trained gradually and forms implicit ensembles of weaker teachers along the learning process. Such a design enables the student model to retain more dark knowledge for better generalization ability. The proposed framework improves the training scheme in a plug-and-play way, so it can be applied to improve various classic and state-of-the-art KD methods in both intra-domain (up to 2.184%) and cross-domain (up to 7.358%) settings, under diversified teacher-student architecture configurations, and achieves a major efficiency advantage over other generic frameworks. The code is available at https://github.com/diqichen91/SATE.git.
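To make the co-training idea in the abstract concrete, the following is a minimal PyTorch sketch of training a student alongside a gradually-trained intermediate teacher, so the student distills from a sequence of partially converged (weaker) teacher checkpoints, i.e. an implicit ensemble. This is an assumption-based illustration, not the authors' implementation: the loss weighting, temperature, and step ordering are hypothetical choices, and SATE's exact student-aware losses are described in the paper and repository.

```python
# Minimal sketch (not the authors' code) of co-training a student with an
# intermediate teacher. Because the student distills from the teacher at every
# step of the teacher's own training trajectory, it effectively learns from an
# implicit ensemble of weaker teacher checkpoints.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Standard temperature-scaled KL distillation loss (illustrative choice).
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def co_train_step(teacher, student, opt_t, opt_s, x, y, alpha=0.5):
    # 1) Update the intermediate teacher on the task loss only, so early in
    #    training it remains a partially converged ("weaker") checkpoint.
    loss_t = F.cross_entropy(teacher(x), y)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # 2) Update the student against the ground truth plus the *current*
    #    teacher checkpoint; over the whole run this amounts to distilling
    #    from an implicit ensemble of progressively stronger teachers.
    s_logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    loss_s = (1 - alpha) * F.cross_entropy(s_logits, y) \
             + alpha * kd_loss(s_logits, t_logits)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_t.item(), loss_s.item()
```

In this sketch the "plug-and-play" aspect is that `kd_loss` could be swapped for any classic or state-of-the-art distillation objective while keeping the co-training loop unchanged.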
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.