Title: Uncertainty-Aware Self-Knowledge Distillation
Authors: Yang Yang; Chao Wang; Lei Gong; Min Wu; Zhenghua Chen; Yingxue Gao; Teng Wang; Xuehai Zhou
DOI: 10.1109/TCSVT.2024.3516145
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 5, pp. 4464-4478
Publication date: 2024-12-12
URL: https://ieeexplore.ieee.org/document/10795207/

Abstract:
Self-knowledge distillation has emerged as a powerful method, notably boosting the prediction accuracy of deep neural networks while being resource-efficient, setting it apart from traditional teacher-student knowledge distillation approaches. However, in safety-critical applications, high accuracy alone is not adequate; conveying uncertainty effectively holds equal importance. Regrettably, existing self-knowledge distillation methods have not met the need to improve both prediction accuracy and uncertainty quantification simultaneously. In response to this gap, we present an uncertainty-aware self-knowledge distillation method named UASKD. UASKD introduces an uncertainty-aware contrastive loss and a prediction synthesis technique within the self-knowledge distillation process, aiming to fully harness the potential of self-knowledge distillation for improving both prediction accuracy and uncertainty quantification. Extensive assessments illustrate that UASKD consistently surpasses other self-knowledge distillation techniques and numerous uncertainty calibration methods in both prediction accuracy and uncertainty quantification metrics across various classification and object detection tasks, highlighting its efficacy and adaptability.
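For context on the general framework the abstract builds on, the sketch below illustrates a generic self-knowledge distillation objective (hard-label cross-entropy blended with a KL term against the model's own softened predictions) together with the expected calibration error (ECE), a standard metric for uncertainty quantification. This is a minimal illustration under assumed conventions; it is not the UASKD method, whose uncertainty-aware contrastive loss and prediction synthesis technique are not specified in the abstract. Function names and hyperparameters (alpha, T, n_bins) are illustrative choices.

# Illustrative only: generic self-knowledge distillation loss and ECE metric,
# not the UASKD objective described in the paper.
import torch
import torch.nn.functional as F

def self_distillation_loss(logits, self_teacher_logits, targets, alpha=0.5, T=4.0):
    # Hard-label cross-entropy blended with a KL term against the model's own
    # softened predictions (e.g., from an earlier snapshot or auxiliary head).
    ce = F.cross_entropy(logits, targets)
    kl = F.kl_div(
        F.log_softmax(logits / T, dim=1),
        F.softmax(self_teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients by T^2, as is conventional in distillation
    return (1 - alpha) * ce + alpha * kl

def expected_calibration_error(probs, targets, n_bins=15):
    # Standard ECE: average gap between confidence and accuracy over bins.
    conf, pred = probs.max(dim=1)
    correct = pred.eq(targets).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return ece.item()

In this generic setup, a lower ECE at comparable accuracy indicates better-calibrated uncertainty, which is the kind of joint improvement the abstract claims UASKD achieves with its own loss design.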
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.