Ruikang Ju, Jen-Shiun Chiang, Chih-Chia Chen, Yu-Shian Lin
{"title":"Connection Reduction of DenseNet for Image Recognition","authors":"Ruikang Ju, Jen-Shiun Chiang, Chih-Chia Chen, Yu-Shian Lin","doi":"10.1109/ISPACS57703.2022.10082842","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks increase depth by stacking convolution layers, and deeper network models perform better in image recognition. Empirical research shows that simply stacking convolution layers does not make the network train better, and skip connection (residual learning) can improve network model performance. For the image classification tasks, models with global densely connected architectures perform well in large datasets like ImageNet, but they are not suitable for small datasets such as CIFAR-10 and SVHN. Different from dense connections, we propose two new algorithms to connect layers in this paper. Baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. The experimental results of image classification on CIFAR-10 and SVHN show that ShortNet1has a 5% lower test error rate and 25% faster inference time than Baseline. ShortNet2 speeds up inference time by 40% with less loss in test accuracy. 
Code and pretrained models are available at https://github.com/RuiyangJu/Connection_Reduction/","PeriodicalId":410603,"journal":{"name":"2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPACS57703.2022.10082842","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Convolutional neural networks increase depth by stacking convolution layers, and deeper network models perform better in image recognition. However, empirical research shows that simply stacking convolution layers does not make a network train better, while skip connections (residual learning) can improve model performance. For image classification, models with globally densely connected architectures perform well on large datasets such as ImageNet, but they are less suitable for small datasets such as CIFAR-10 and SVHN. In contrast to dense connections, this paper proposes two new algorithms for connecting layers. Baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. Experimental results for image classification on CIFAR-10 and SVHN show that ShortNet1 has a 5% lower test error rate and 25% faster inference time than Baseline, while ShortNet2 speeds up inference by 40% with a smaller loss in test accuracy. Code and pretrained models are available at https://github.com/RuiyangJu/Connection_Reduction/
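The abstract does not specify the exact connection rules used by ShortNet1 and ShortNet2, but the core idea of connection reduction can be illustrated with a minimal sketch. Below, `dense_connections` counts the skip connections in a DenseNet-style block (every layer receives all earlier outputs), while `reduced_connections` uses a hypothetical rule, assumed for illustration only, in which each layer sees only its last `window` predecessors:

```python
def dense_connections(num_layers):
    """DenseNet-style block: layer i concatenates the outputs of all
    i earlier layers, so the block has 1 + 2 + ... + L connections."""
    return sum(range(1, num_layers + 1))

def reduced_connections(num_layers, window=2):
    """Hypothetical reduced scheme (not the paper's exact algorithm):
    layer i connects only to its last `window` predecessors."""
    return sum(min(i, window) for i in range(1, num_layers + 1))

if __name__ == "__main__":
    L = 12  # layers per block
    print(dense_connections(L))        # 78 connections (quadratic in L)
    print(reduced_connections(L, 2))   # 23 connections (linear in L)
```

Because dense connectivity grows quadratically with block depth while a windowed scheme grows linearly, pruning connections in this spirit reduces the concatenation and convolution cost at inference time, which is consistent with the speedups the abstract reports.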