Decoupling Dark Knowledge via Block-Wise Logit Distillation for Feature-Level Alignment

Chengting Yu, Fengzhao Zhang, Ruizhe Chen, Aili Wang, Zuozhu Liu, Shurun Tan, and Er-Ping Li
{"title":"Decoupling Dark Knowledge via Block-Wise Logit Distillation for Feature-Level Alignment","authors":"Chengting Yu;Fengzhao Zhang;Ruizhe Chen;Aili Wang;Zuozhu Liu;Shurun Tan;Er-Ping Li","doi":"10.1109/TAI.2024.3512498","DOIUrl":null,"url":null,"abstract":"Knowledge distillation (KD), a learning manner with a larger teacher network guiding a smaller student network, transfers dark knowledge from the teacher to the student via logits or intermediate features, with the aim of producing a well-performed lightweight model. Notably, many subsequent feature-based KD methods outperformed the earliest logit-based KD method and iteratively generated numerous state-of-the-art distillation methods. Nevertheless, recent work has uncovered the potential of the logit-based method, bringing the simple KD form based on logits back into the limelight. Features or logits? They partially implement the KD with entirely distinct perspectives; therefore, choosing between logits and features is not straightforward. This article provides a unified perspective of feature alignment to obtain a better comprehension of their fundamental distinction. Inheriting the design philosophy and insights of feature-based and logit-based methods, we introduce a block-wise logit distillation framework to apply implicit logit-based feature alignment by gradually replacing teacher's blocks as intermediate stepping-stone models to bridge the gap between the student and the teacher. Our method obtains comparable or superior results to state-of-the-art distillation methods. This article demonstrates the great potential of combining logit and features, and we hope it will inspire future research to revisit KD from a higher vantage point.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 5","pages":"1143-1155"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10780970/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Knowledge distillation (KD), a learning paradigm in which a larger teacher network guides a smaller student network, transfers dark knowledge from the teacher to the student via logits or intermediate features, with the aim of producing a well-performing lightweight model. Notably, many subsequent feature-based KD methods outperformed the earliest logit-based KD method and iteratively produced numerous state-of-the-art distillation methods. Nevertheless, recent work has uncovered the potential of the logit-based method, bringing the simple KD form based on logits back into the limelight. Features or logits? Each partially implements KD from an entirely distinct perspective, so choosing between them is not straightforward. This article provides a unified perspective of feature alignment to better understand their fundamental distinction. Inheriting the design philosophy and insights of feature-based and logit-based methods, we introduce a block-wise logit distillation framework that applies implicit logit-based feature alignment by gradually replacing the teacher's blocks, forming intermediate stepping-stone models that bridge the gap between the student and the teacher. Our method achieves results comparable or superior to state-of-the-art distillation methods. This article demonstrates the great potential of combining logits and features, and we hope it will inspire future research to revisit KD from a higher vantage point.
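The abstract frames the method as logit-based distillation applied through intermediate hybrid models, so a small sketch may help fix ideas. The `kd_loss` function below is the standard temperature-scaled logit-distillation objective (Hinton-style KD); the `SteppingStone` class, its `connector` argument, and the block split index `k` are illustrative assumptions about how a student-front/teacher-back hybrid could be wired, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Classic logit distillation: temperature-softened KL between teacher
    and student predictions, blended with cross-entropy on the labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard


class SteppingStone(nn.Module):
    """Hypothetical 'stepping-stone' hybrid suggested by the abstract: the
    first k blocks come from the student, the remaining blocks (assumed to
    include the classifier head) from the frozen teacher, joined by a small
    connector that matches feature dimensions. Distilling this hybrid's
    logits against the teacher's logits acts as an implicit feature
    alignment at block k."""

    def __init__(self, student_blocks, teacher_blocks, connector, k):
        super().__init__()
        self.front = nn.ModuleList(student_blocks[:k])   # trainable student part
        self.connector = connector                       # e.g., a 1x1 conv (assumed)
        self.back = nn.ModuleList(teacher_blocks[k:])    # frozen teacher part
        for p in self.back.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        for blk in self.front:
            x = blk(x)
        x = self.connector(x)
        for blk in self.back:
            x = blk(x)
        return x  # logits produced by the hybrid network
```

Under this reading, each hybrid's logits would be trained with `kd_loss` against the full teacher's logits; when `k` reaches the total number of blocks, the hybrid is simply the student and the objective reduces to plain logit-based KD.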