Knowledge Distillation Based on Narrow-Deep Networks
Yan Zhou, Zhiqiang Wang, Jianxun Li
Neural Processing Letters, Vol. 15, No. 1, published 2024-06-06
DOI: 10.1007/s11063-024-11646-5 (https://doi.org/10.1007/s11063-024-11646-5)
Journal Article · JCR Q3 (Computer Science, Artificial Intelligence) · Impact Factor 2.6
Citations: 0
Abstract
Deep neural networks perform better than shallow ones, but achieving this typically requires making networks deeper or wider, which introduces large numbers of parameters and heavy computation. Networks that are too wide carry a high risk of overfitting, while networks that are too deep require a large amount of computation. This paper proposes a narrow-deep ResNet, which increases the depth of the network while avoiding the problems caused by making it too wide, and applies a knowledge distillation strategy: a trained teacher model supervises student networks (unmodified, wide, and narrow-deep ResNets), which learn to match the teacher's outputs. To validate the effectiveness of this method, it is tested on the CIFAR-100 and Pascal VOC datasets. The proposed method allows a small model to reach roughly the same accuracy as a large model while dramatically reducing response time and computational cost.
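The teacher–student training described above is typically driven by a distillation loss that mixes a soft-target term (the student matching the teacher's temperature-softened output distribution) with the usual hard-label cross-entropy. The paper does not give its exact loss, so the following is a minimal stdlib-only sketch of the standard Hinton-style formulation; the function names, the temperature of 4.0, and the weight `alpha` are illustrative assumptions, not the authors' settings.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.9):
    # Soft-target term: cross-entropy between the teacher's and the
    # student's softened distributions (equal to their KL divergence up
    # to a constant). Scaled by T^2, as in Hinton et al., so gradient
    # magnitudes stay comparable across temperatures.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    soft = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
    # Hard-target term: standard cross-entropy with the ground-truth label.
    hard = -math.log(softmax(student_logits)[true_label])
    return alpha * (temperature ** 2) * soft + (1 - alpha) * hard
```

A student whose logits track the teacher's incurs a lower loss than one that disagrees, which is what pushes the compact narrow-deep network toward the large model's behavior.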
Journal Introduction:
Neural Processing Letters is an international journal publishing research results and innovative ideas on all aspects of artificial neural networks. Coverage includes theoretical developments, biological models, new formal models, learning, applications, software and hardware developments, and prospective research.
The journal promotes fast exchange of information within the community of neural network researchers and users. The resurgence of interest in artificial neural networks since the early 1980s has been coupled with tremendous research activity in specialized and multidisciplinary groups. Research, however, is not possible without good communication between people and the exchange of information, especially in a field covering such different areas; fast communication is also a key aspect, and this is the rationale for Neural Processing Letters.