{"title":"群二元权重网络","authors":"K. Guo, Yicai Yang, Xiaofen Xing, Xiangmin Xu","doi":"10.1117/12.2540888","DOIUrl":null,"url":null,"abstract":"In recent years, quantizing the weights of a deep neural network draws increasing attention in the area of network compression. An efficient and popular way to quantize the weight parameters is to replace a filter with the product of binary values and a real-valued scaling factor. However, the quantization error of such binarization method raises as the number of a filter's parameter increases. To reduce quantization error in existing network binarization methods, we propose group binary weight networks (GBWN), which divides the channels of each filter into groups and every channel in the same group shares the same scaling factor. We binarize the popular network architectures VGG, ResNet and DesneNet, and verify the performance on CIFAR10, CIFAR100, Fashion-MNIST, SVHN and ImageNet datasets. Experiment results show that GBWN achieves considerable accuracy increment compared to recent network binarization methods, including BinaryConnect, Binary Weight Networks and Stochastic Quantization Binary Weight Networks.","PeriodicalId":90079,"journal":{"name":"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging","volume":"12 1","pages":"1119812 - 1119812-6"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Group binary weight networks\",\"authors\":\"K. Guo, Yicai Yang, Xiaofen Xing, Xiangmin Xu\",\"doi\":\"10.1117/12.2540888\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, quantizing the weights of a deep neural network draws increasing attention in the area of network compression. An efficient and popular way to quantize the weight parameters is to replace a filter with the product of binary values and a real-valued scaling factor. However, the quantization error of such binarization method raises as the number of a filter's parameter increases. To reduce quantization error in existing network binarization methods, we propose group binary weight networks (GBWN), which divides the channels of each filter into groups and every channel in the same group shares the same scaling factor. We binarize the popular network architectures VGG, ResNet and DesneNet, and verify the performance on CIFAR10, CIFAR100, Fashion-MNIST, SVHN and ImageNet datasets. Experiment results show that GBWN achieves considerable accuracy increment compared to recent network binarization methods, including BinaryConnect, Binary Weight Networks and Stochastic Quantization Binary Weight Networks.\",\"PeriodicalId\":90079,\"journal\":{\"name\":\"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging\",\"volume\":\"12 1\",\"pages\":\"1119812 - 1119812-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"... International Workshop on Pattern Recognition in NeuroImaging. 
International Workshop on Pattern Recognition in NeuroImaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2540888\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2540888","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In recent years, quantizing the weights of deep neural networks has drawn increasing attention in the area of network compression. An efficient and popular way to quantize the weight parameters is to replace each filter with the product of binary values and a real-valued scaling factor. However, the quantization error of such a binarization method rises as the number of parameters in a filter increases. To reduce the quantization error of existing network binarization methods, we propose group binary weight networks (GBWN), which divide the channels of each filter into groups, with every channel in the same group sharing the same scaling factor. We binarize the popular network architectures VGG, ResNet and DenseNet, and verify the performance on the CIFAR10, CIFAR100, Fashion-MNIST, SVHN and ImageNet datasets. Experimental results show that GBWN achieves considerable accuracy improvements compared to recent network binarization methods, including BinaryConnect, Binary Weight Networks and Stochastic Quantization Binary Weight Networks.
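To make the grouping idea concrete, below is a minimal Python/NumPy sketch of group-wise weight binarization in the spirit of the abstract. It is an illustration under stated assumptions, not the paper's implementation: filters are assumed to have shape (out_channels, in_channels, kH, kW), in_channels is assumed divisible by the number of groups, and each group's scaling factor is taken as the mean absolute value of its weights (the closed-form choice used by Binary Weight Networks); the function and variable names are hypothetical.

import numpy as np

def group_binarize(weights: np.ndarray, num_groups: int) -> np.ndarray:
    """Approximate `weights` by sign(W) times a per-group scaling factor.

    weights: real-valued filter bank, shape (out_ch, in_ch, kH, kW)
    num_groups: channel groups per filter; each group of in_ch // num_groups
                channels shares one scaling factor.
    Returns the reconstructed (binarized and rescaled) weights.
    """
    out_ch, in_ch, kh, kw = weights.shape
    assert in_ch % num_groups == 0, "in_ch must be divisible by num_groups"
    group_size = in_ch // num_groups

    # Split channels into groups: (out_ch, num_groups, group_size, kH, kW).
    grouped = weights.reshape(out_ch, num_groups, group_size, kh, kw)

    # One scaling factor per (filter, group): the mean absolute value,
    # which minimizes the L2 quantization error for a fixed sign pattern.
    alpha = np.abs(grouped).mean(axis=(2, 3, 4), keepdims=True)

    # Binary weights in {-1, +1}, rescaled group-wise.
    binary = np.where(grouped >= 0, 1.0, -1.0)
    approx = alpha * binary
    return approx.reshape(out_ch, in_ch, kh, kw)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 32, 3, 3)).astype(np.float32)
    for g in (1, 4, 8):  # g=1 recovers a single per-filter scaling factor
        err = np.linalg.norm(w - group_binarize(w, g)) / np.linalg.norm(w)
        print(f"groups={g}: relative quantization error {err:.4f}")

Running the example shows the relative quantization error shrinking as the number of groups grows, which is the intuition behind GBWN: finer-grained scaling factors track the weight distribution within each filter more closely than a single per-filter factor.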