Baicheng Liu, Xi'ai Chen, Zhi Han, Huidi Jia, Yandong Tang
IET Cybersystems and Robotics, published 2022-07-27
DOI: 10.1109/CYBER55403.2022.9907319
A Novel Lightweight Architecture of Deep Convolutional Neural Networks
Deep convolutional neural networks have achieved great success on many computer vision tasks. However, such networks often contain millions of parameters, which limits their inference speed and their use in settings with limited storage space. Low-rank methods and pruning methods have proven effective at reducing the number of parameters and accelerating the inference of deep convolutional neural networks, but this compression typically comes at the price of reduced network performance. To overcome this problem, in this paper we design a novel low-rank and sparse architecture for convolutional neural networks. Besides accelerating inference and reducing parameters, our approach achieves better performance than the baseline networks.
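As a rough illustration of the general idea (a sketch of standard low-rank-plus-sparse decomposition, not the paper's specific architecture), a weight matrix can be split into a low-rank part obtained by truncated SVD plus a sparse residual that retains only the largest-magnitude corrections; the function and parameter names below are hypothetical:

```python
import numpy as np

def low_rank_sparse_decompose(W, rank, sparsity=0.01):
    """Split W into a rank-`rank` part L and a sparse residual S."""
    # Truncated SVD gives the best rank-r approximation L = U_r S_r V_r^T.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Keep only the largest-magnitude residual entries as the sparse part.
    R = W - L
    thresh = np.quantile(np.abs(R), 1.0 - sparsity)
    S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # stand-in for a flattened conv kernel
L, S = low_rank_sparse_decompose(W, rank=8)

# Storage: the rank-8 factors need 2 * 64 * 8 = 1024 values instead of 4096,
# plus the ~1% of entries kept in S; L + S approximates W more closely
# than the low-rank part alone.
```

The storage saving comes from keeping the two SVD factors instead of the dense matrix; the sparse term recovers some of the approximation error that a purely low-rank factorization would discard.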