Title: Residual squeeze CNDS deep learning CNN model for very large scale places image recognition
Authors: Abhishek Verma, Hussam Qassim, David Feinzimer
DOI: 10.1109/UEMCON.2017.8248975
Published in: 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), October 2017
Citations: 19
Abstract
Deep convolutional neural network models have achieved great success in recent years. However, optimizing the size of a deep network and the time needed to train it remains an area in need of much improvement. In this paper, we address the issues of speed and size by proposing a compressed convolutional neural network model, Residual Squeeze CNDS. The proposed model compresses the earlier, very successful Residual-CNDS network and improves on it in the following aspects: (1) small model size, (2) faster training, (3) use of residual learning for faster convergence, better generalization, and mitigation of the degradation problem, and (4) recognition accuracy matching that of the non-compressed model on the very large-scale grand challenge MIT Places 365-Standard scene dataset. Compared to Residual-CNDS, the proposed model is 87.64% smaller in size and 13.33% faster to train. This supports our claim that the proposed model inherits the best aspects of the Residual-CNDS model and further improves upon it. Moreover, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. Compared to SqueezeNet, our proposed framework can be more easily adapted and fully integrated with residual learning for compressing various other contemporary deep learning convolutional neural network models.
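To make the compression idea concrete, the sketch below illustrates the general pattern the abstract describes: a SqueezeNet-style block that squeezes the channel count with a pointwise (1x1) convolution, re-expands it, and adds an identity skip connection in the residual-learning style. This is a minimal illustration in NumPy, not the authors' implementation; it uses 1x1 convolutions only for brevity (the function names and layer sizes are assumptions chosen for the example).

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution: x has shape (C_in, H, W), w has shape
    (C_out, C_in). Equivalent to a per-pixel linear map over channels."""
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

def residual_fire_block(x, w_squeeze, w_expand):
    """Fire-style block with a residual skip: squeeze channels with a 1x1
    conv, re-expand with another 1x1 conv, then add the identity input.
    The expand layer must restore the input channel count so the
    element-wise residual addition is well defined."""
    s = np.maximum(conv1x1(x, w_squeeze), 0.0)  # squeeze + ReLU
    e = np.maximum(conv1x1(s, w_expand), 0.0)   # expand + ReLU
    return e + x                                # residual addition

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 8, 8))          # 64 channels, 8x8 feature map
w_sq = rng.standard_normal((16, 64)) * 0.1   # squeeze: 64 -> 16 channels
w_ex = rng.standard_normal((64, 16)) * 0.1   # expand: 16 -> 64 channels
y = residual_fire_block(x, w_sq, w_ex)
print(y.shape)  # (64, 8, 8) -- same shape as the input, as the skip requires
```

The compression benefit is visible in the parameter count: the squeeze-then-expand path uses 16·64 + 64·16 = 2048 weights, versus 64·64 = 4096 for a single direct pointwise layer over the same channels, while the residual skip preserves the identity path that aids convergence.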