DenseKD: Dense Knowledge Distillation by Exploiting Region and Sample Importance

IF 8.9 | CAS Tier 1, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Haonan Zhang;Longjun Liu;Yi Zhang;Xinyu Lei;Fei Hui;Bihan Wen
{"title":"DenseKD:利用区域和样本重要性的密集知识蒸馏","authors":"Haonan Zhang;Longjun Liu;Yi Zhang;Xinyu Lei;Fei Hui;Bihan Wen","doi":"10.1109/TNNLS.2025.3525737","DOIUrl":null,"url":null,"abstract":"Knowledge distillation (KD) can compress deep neural networks (DNNs) by transferring the knowledge of the redundant teacher model to the resource-friendly student model, where cross-layer KD (CKD) conducts KD between each stage of students and the multiple stages of teachers. However, previous CKD schemes select the coarse-grained stagewise features of teachers to teach students, leading to improper channel alignment. Also, most of these methods conduct uniform distillation for all the knowledge, limiting students to focus more on important knowledge. To address these problems, we propose a dense KD (DenseKD) in this article, dubbed as DenseKD. First, to achieve more accurate feature alignment in CKD, we construct the learnable dense architecture to make each channel of student flexibly capture more diverse channelwise features from teacher. Moreover, we introduce region importance to investigate the region’s guiding potential, it distinguishes the influence of different regions by the variation of representations of teacher models. In addition, to make students pay more attention to useful samples in KD, we calculate sample importance by the loss of teacher models. Consistent improvements over state-of-the-art approaches are observed in experiments on multiple vision tasks. For example, in the classification task, DenseKD achieves 72.30% accuracy of ResNet-20 on CIFAR-100, which is higher than the results of previous CKD methods. In addition, in the object detection task, DenseKD gains 2.84% mean average precision (mAP) improvements of Faster R-CNN with ResNet-18 against vanilla KD.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 6","pages":"11243-11257"},"PeriodicalIF":8.9000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DenseKD: Dense Knowledge Distillation by Exploiting Region and Sample Importance\",\"authors\":\"Haonan Zhang;Longjun Liu;Yi Zhang;Xinyu Lei;Fei Hui;Bihan Wen\",\"doi\":\"10.1109/TNNLS.2025.3525737\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Knowledge distillation (KD) can compress deep neural networks (DNNs) by transferring the knowledge of the redundant teacher model to the resource-friendly student model, where cross-layer KD (CKD) conducts KD between each stage of students and the multiple stages of teachers. However, previous CKD schemes select the coarse-grained stagewise features of teachers to teach students, leading to improper channel alignment. Also, most of these methods conduct uniform distillation for all the knowledge, limiting students to focus more on important knowledge. To address these problems, we propose a dense KD (DenseKD) in this article, dubbed as DenseKD. First, to achieve more accurate feature alignment in CKD, we construct the learnable dense architecture to make each channel of student flexibly capture more diverse channelwise features from teacher. Moreover, we introduce region importance to investigate the region’s guiding potential, it distinguishes the influence of different regions by the variation of representations of teacher models. 
In addition, to make students pay more attention to useful samples in KD, we calculate sample importance by the loss of teacher models. Consistent improvements over state-of-the-art approaches are observed in experiments on multiple vision tasks. For example, in the classification task, DenseKD achieves 72.30% accuracy of ResNet-20 on CIFAR-100, which is higher than the results of previous CKD methods. In addition, in the object detection task, DenseKD gains 2.84% mean average precision (mAP) improvements of Faster R-CNN with ResNet-18 against vanilla KD.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"36 6\",\"pages\":\"11243-11257\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10849946/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10849946/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Knowledge distillation (KD) can compress deep neural networks (DNNs) by transferring the knowledge of a redundant teacher model to a resource-friendly student model, where cross-layer KD (CKD) conducts KD between each stage of the student and multiple stages of the teacher. However, previous CKD schemes select coarse-grained stagewise features of the teacher to teach the student, leading to improper channel alignment. Also, most of these methods distill all knowledge uniformly, preventing the student from focusing on the most important knowledge. To address these problems, we propose dense knowledge distillation (DenseKD) in this article. First, to achieve more accurate feature alignment in CKD, we construct a learnable dense architecture so that each channel of the student can flexibly capture more diverse channelwise features from the teacher. Moreover, we introduce region importance to investigate each region's guiding potential; it distinguishes the influence of different regions by the variation of the teacher model's representations. In addition, to make the student pay more attention to useful samples during KD, we compute sample importance from the loss of the teacher model. Consistent improvements over state-of-the-art approaches are observed in experiments on multiple vision tasks. For example, in the classification task, DenseKD achieves 72.30% accuracy with ResNet-20 on CIFAR-100, which is higher than the results of previous CKD methods. In addition, in the object detection task, DenseKD improves the mean average precision (mAP) of Faster R-CNN with ResNet-18 by 2.84% over vanilla KD.
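The abstract describes two weighting mechanisms: region importance derived from the variation of the teacher's representations, and sample importance derived from the teacher's loss. The sketch below is a minimal, hypothetical PyTorch illustration of how such weights could modulate a feature-distillation loss; it is not the authors' implementation. The specific heuristics (channel-wise variance of teacher features for regions, per-sample teacher cross-entropy for samples) and the assumption that student features are already projected to the teacher's shape (e.g., by a learnable 1x1 convolution, in the spirit of the dense channel alignment) are illustrative assumptions.

```python
# Hypothetical sketch of importance-weighted feature distillation (not the paper's code).
import torch
import torch.nn.functional as F


def region_importance(teacher_feat: torch.Tensor) -> torch.Tensor:
    """Per-location weights (B, 1, H, W) from the variation of teacher features.

    Assumption: spatial locations where the teacher's representation varies more
    across channels are treated as more informative. Weights are normalized so
    their mean is 1, leaving the overall loss scale unchanged.
    """
    b, c, h, w = teacher_feat.shape
    var = teacher_feat.var(dim=1, keepdim=True)                  # (B, 1, H, W)
    weights = var.flatten(2).softmax(dim=-1).view(b, 1, h, w)    # sums to 1 per sample
    return weights * (h * w)                                     # mean weight == 1


def sample_importance(teacher_logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Per-sample weights (B,) from the teacher's loss: harder samples weigh more."""
    ce = F.cross_entropy(teacher_logits, targets, reduction="none")  # (B,)
    return (ce / ce.mean().clamp_min(1e-8)).detach()                 # mean weight == 1


def weighted_feature_kd_loss(student_feat, teacher_feat, teacher_logits, targets):
    """Feature-mimicking loss modulated by region and sample importance.

    student_feat is assumed to already match teacher_feat's shape, e.g. after a
    learnable 1x1 projection aligning the student's channels to the teacher's.
    """
    r_w = region_importance(teacher_feat).detach()                        # (B, 1, H, W)
    s_w = sample_importance(teacher_logits, targets)                      # (B,)
    per_pixel = (student_feat - teacher_feat).pow(2).mean(dim=1, keepdim=True)
    per_sample = (per_pixel * r_w).mean(dim=(1, 2, 3))                    # (B,)
    return (per_sample * s_w).mean()
```

A forward pass of the frozen teacher provides teacher_feat and teacher_logits; the returned scalar would be added to the student's task loss during training.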
Source journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
About the journal: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.