Authors: Abhishek Verma, Hussam Qassim, David Feinzimer
DOI: 10.1109/UEMCON.2017.8248975
Published in: 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), October 2017
Citations: 19
Residual squeeze CNDS deep learning CNN model for very large scale places image recognition
Deep convolutional neural network models have achieved great success in recent years. However, optimizing the size of a deep network and the time needed to train it remains an area in need of much improvement. In this paper, we address speed and size by proposing a compressed convolutional neural network model, Residual Squeeze CNDS. The proposed model compresses the earlier, very successful Residual-CNDS network and improves on it in the following respects: (1) smaller model size; (2) faster training; (3) use of residual learning for faster convergence, better generalization, and relief from the degradation problem; (4) recognition accuracy matching that of the uncompressed model on the very large-scale MIT Places 365-Standard scene dataset. Compared with Residual-CNDS, the proposed model is 87.64% smaller and trains 13.33% faster. This supports our claim that the proposed model inherits the best aspects of Residual-CNDS and further improves upon them. Moreover, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. Compared with SqueezeNet, our framework is more easily adapted to and fully integrated with residual learning for compressing various other contemporary deep convolutional neural network models.
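The "squeeze" compression described above follows the SqueezeNet idea of replacing a standard 3x3 convolution with a Fire module: a 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers. A rough parameter-count sketch of why this shrinks a network, using hypothetical channel counts rather than the paper's actual layer configuration:

```python
# Illustrative parameter counting for squeeze-style compression.
# All channel counts below are hypothetical, not taken from the paper.

def conv_params(c_in, c_out, k):
    """Weights in a k x k convolution layer (biases ignored for simplicity)."""
    return c_in * c_out * k * k

def fire_params(c_in, squeeze, expand_1x1, expand_3x3):
    """Weights in a Fire module: 1x1 squeeze, then parallel 1x1/3x3 expand."""
    return (conv_params(c_in, squeeze, 1)        # squeeze layer
            + conv_params(squeeze, expand_1x1, 1)  # 1x1 expand branch
            + conv_params(squeeze, expand_3x3, 3)) # 3x3 expand branch

c_in, c_out = 256, 256
standard = conv_params(c_in, c_out, 3)  # 256*256*9 = 589,824 weights
fire = fire_params(c_in, squeeze=32, expand_1x1=128, expand_3x3=128)
# 256*32 + 32*128 + 32*128*9 = 8,192 + 4,096 + 36,864 = 49,152 weights

print(f"standard 3x3 conv: {standard:,} weights")
print(f"fire module:       {fire:,} weights")
print(f"reduction:         {1 - fire / standard:.1%}")  # about 91.7%
```

Because the squeeze layer bottlenecks the channel count before the expensive 3x3 expand, the 3x3 kernels see far fewer input channels; the residual shortcut in the proposed model adds no weights of its own, so combining it with Fire modules preserves the size savings while easing optimization.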