VGGreNet: A Light-Weight VGGNet with Reused Convolutional Set
Ka-Hou Chan, S. Im, W. Ke
2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC), December 2020
DOI: 10.1109/UCC48980.2020.00068
Citations: 8
Abstract
This article introduces a light-weight VGGNet for deeper neural networks. Our model presents a reusable convolution set designed to capture as much information as possible until the feature size is reduced to 1. Reusing convolutional layers ensures convergence without a pre-trained model and greatly reduces the number of training parameters, to about 22.0% of those in VGGNet, which lowers memory consumption and speeds up convergence. As a result, the proposed model improves test accuracy. Moreover, the design and implementation can be easily deployed in CNN approaches related to the VGGNet model.
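To see why sharing one convolutional set across stages shrinks the parameter budget, the sketch below compares the parameter count of the standard VGG-16 convolutional stack against a single reused, fixed-width convolution set whose weights are counted only once no matter how many times it is applied. The reused-set width (256) and its two-layer structure are illustrative assumptions, not the paper's exact architecture.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of one k x k convolution layer."""
    return k * k * c_in * c_out + c_out

# VGG-16 convolutional configuration: (input channels, output channels)
# for each of its 13 conv layers.
vgg16 = [(3, 64), (64, 64),
         (64, 128), (128, 128),
         (128, 256), (256, 256), (256, 256),
         (256, 512), (512, 512), (512, 512),
         (512, 512), (512, 512), (512, 512)]
vgg_total = sum(conv_params(c_in, c_out) for c_in, c_out in vgg16)

# Hypothetical reused set: an input-adapter conv plus one fixed-width
# conv whose weights are shared every time the set is applied, so its
# parameters are counted once regardless of network depth.
WIDTH = 256  # illustrative width, not taken from the paper
reused_total = conv_params(3, WIDTH) + conv_params(WIDTH, WIDTH)

print(f"VGG-16 conv params:  {vgg_total:,}")
print(f"Reused-set params:   {reused_total:,}")
print(f"Ratio:               {reused_total / vgg_total:.1%}")
```

The key design point is that depth and parameter count are decoupled: applying the shared set more times makes the network effectively deeper (until the feature map shrinks to 1) without adding any new weights, whereas every extra VGG layer adds its own full weight tensor.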