Title: Deep neural network compression via knowledge distillation for embedded applications
Authors: Bhavesh Jaiswal, Nagendra P. Gajjar
Venue: 2017 Nirma University International Conference on Engineering (NUiCONE)
DOI: 10.1109/nuicone.2017.8325620
Citations: 3
Abstract
Deep neural networks have shown significant success across various applications. However, the depth and complexity required to solve hard problems impose large computation and storage demands, which makes deploying such networks on embedded devices with limited storage and power challenging. Researchers have developed many techniques that compress deep neural networks, reducing their storage requirements so that they can run on portable devices. This paper describes the implementation of a deep neural network with a teacher-student model: a comparatively small student model learns the knowledge passed from a larger teacher model without losing accuracy, and its training and inference speed also improve. This framework is therefore well suited to embedded deployments where real-time performance is required.
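The teacher-student training the abstract describes is commonly realized with a Hinton-style distillation loss: the teacher's logits are softened with a temperature and the student is trained to match that distribution. The paper does not give its exact loss, so the sketch below is only an illustrative pure-Python version (function names and the temperature value are assumptions, not taken from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    # Softmax with temperature; a higher T yields a softer distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p = softmax(teacher_logits, temperature)   # soft targets from teacher
    q = softmax(student_logits, temperature)   # student predictions
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )
```

In practice this term is combined with the ordinary cross-entropy on the true labels, weighted by a mixing coefficient, so the student learns from both the ground truth and the teacher's softened outputs.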