{"title":"Adaptive class token knowledge distillation for efficient vision transformer","authors":"","doi":"10.1016/j.knosys.2024.112531","DOIUrl":null,"url":null,"abstract":"<div><p>The Vision Transformer (ViT) outperforms Convolutional Neural Networks (CNNs) but at the cost of significantly higher computational demands. Knowledge Distillation (KD) has shown promise in compressing complex networks by transferring knowledge from a large pre-trained model to a smaller one. However, current KD methods for ViT often rely on CNNs as teachers or neglect the importance of class token ([CLS]) information, resulting in ineffective distillation of ViT’s unique knowledge. In this paper, we propose Adaptive Class token Knowledge Distillation ([CLS]-KD), which fully exploits information from the class token and patches in ViT. For class embedding (CLS) distillation, the intermediate CLS of the student model is aligned with the corresponding CLS of the teacher model through a projector. Furthermore, we introduce CLS-patch attention map distillation, where an attention map between the CLS and patch embeddings is generated and matched at each layer. This empowers the student model to learn how to dynamically extract patch embedding information into the CLS under teacher guidance. Finally, we propose Adaptive Layer-wise Distillation (ALD) to mitigate the imbalance in distillation effects varying with the depth of layers. This method assigns greater weight to the losses in layers where the training discrepancies between the teacher and student models are larger during distillation. Through these strategies, [CLS]-KD consistently surpasses existing state-of-the-art methods on the ImageNet-1K dataset across various teacher–student configurations. 
Furthermore, the proposed method demonstrates its generalization capability through transfer learning experiments on the CIFAR-10, CIFAR-100, and CALTECH-256 datasets.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2000,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705124011651","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The Vision Transformer (ViT) outperforms Convolutional Neural Networks (CNNs) but at the cost of significantly higher computational demands. Knowledge Distillation (KD) has shown promise in compressing complex networks by transferring knowledge from a large pre-trained model to a smaller one. However, current KD methods for ViT often rely on CNNs as teachers or neglect the importance of class token ([CLS]) information, resulting in ineffective distillation of ViT’s unique knowledge. In this paper, we propose Adaptive Class token Knowledge Distillation ([CLS]-KD), which fully exploits information from the class token and patches in ViT. For class embedding (CLS) distillation, the intermediate CLS of the student model is aligned with the corresponding CLS of the teacher model through a projector. Furthermore, we introduce CLS-patch attention map distillation, where an attention map between the CLS and patch embeddings is generated and matched at each layer. This empowers the student model to learn how to dynamically extract patch embedding information into the CLS under teacher guidance. Finally, we propose Adaptive Layer-wise Distillation (ALD) to mitigate the imbalance in distillation effects varying with the depth of layers. This method assigns greater weight to the losses in layers where the training discrepancies between the teacher and student models are larger during distillation. Through these strategies, [CLS]-KD consistently surpasses existing state-of-the-art methods on the ImageNet-1K dataset across various teacher–student configurations. Furthermore, the proposed method demonstrates its generalization capability through transfer learning experiments on the CIFAR-10, CIFAR-100, and CALTECH-256 datasets.
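The three components described in the abstract can be illustrated with a minimal NumPy sketch. Everything here is hypothetical: the function names, the projection matrix `proj`, and the reduction of attention to a single CLS-over-patches softmax are illustrative assumptions, not the paper's actual implementation, which operates inside full transformer layers during training.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cls_patch_attention(cls_emb, patch_emb):
    # attention map of the [CLS] token over patch embeddings:
    # softmax of scaled dot products, shape (num_patches,)
    d = cls_emb.shape[-1]
    scores = patch_emb @ cls_emb / np.sqrt(d)
    return softmax(scores)

def cls_kd_losses(teacher_layers, student_layers, proj):
    """Per-layer distillation loss: MSE between the projected student CLS
    and the teacher CLS, plus KL divergence between the teacher and
    student CLS-patch attention maps. Each layer entry is a
    (cls_embedding, patch_embeddings) pair; `proj` maps the student
    embedding dimension to the teacher's (hypothetical linear projector)."""
    losses = []
    for (t_cls, t_patch), (s_cls, s_patch) in zip(teacher_layers, student_layers):
        s_cls_proj = proj @ s_cls                    # align student CLS to teacher dim
        mse = np.mean((s_cls_proj - t_cls) ** 2)
        a_t = cls_patch_attention(t_cls, t_patch)
        a_s = cls_patch_attention(s_cls, s_patch)
        kl = np.sum(a_t * (np.log(a_t) - np.log(a_s)))
        losses.append(mse + kl)
    return np.array(losses)

def adaptive_layer_weights(losses):
    # ALD idea: layers with larger teacher-student discrepancy
    # receive proportionally larger loss weights
    return losses / losses.sum()
```

As a usage sketch, random embeddings for a 3-layer teacher (dim 8) and student (dim 6) with 4 patches yield a positive per-layer loss vector, and `adaptive_layer_weights` normalizes it into weights summing to 1 that emphasize the worst-matched layers.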
About the journal:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.