An Mai, L. Tran, Linh Tran, Nguyen Trinh
2020 7th NAFOSTED Conference on Information and Computer Science (NICS), published 2020-11-26
DOI: 10.1109/NICS51282.2020.9335842
VGG deep neural network compression via SVD and CUR decomposition techniques
The VGG deep neural network is one of the most advanced and powerful deep learning models widely used in computer vision. However, the cost of training and serving VGG models is sometimes considerable due to their large number of parameters. In practice, it is therefore necessary to provide constructive methods to compress these models while keeping the same level of accuracy. In this paper, we study the use of SVD and CUR decomposition techniques to compress VGG models, and compare the compressed models with the original VGG deep neural networks on image classification problems. Experimental results on three image datasets, MNIST, FASHION MNIST, and CIFAR10, show that although the compressed models have far fewer parameters than the original VGG models, their accuracy is competitive with the originals. Moreover, the proposed CUR-based compression performs better than the SVD-based one. It is also noteworthy that the training times of all compressed models are clearly shorter than those of the original VGG models.
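The abstract does not detail how the decompositions are applied, but the general idea behind both techniques can be sketched on a single weight matrix. The following is a minimal illustration (not the authors' implementation): truncated SVD keeps the top-k singular triplets, while a simple CUR variant samples k actual columns and rows of the matrix and solves for the best middle factor. The layer shape, rank k, and norm-based sampling probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))  # a hypothetical fully-connected layer weight
k = 16                               # illustrative target rank

# --- SVD compression: keep the top-k singular triplets ---
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_svd = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

# --- CUR compression: sample k real columns and rows of W ---
# Squared-norm sampling probabilities (a common, simple choice).
col_p = np.sum(W**2, axis=0) / np.sum(W**2)
row_p = np.sum(W**2, axis=1) / np.sum(W**2)
cols = rng.choice(W.shape[1], size=k, replace=False, p=col_p)
rows = rng.choice(W.shape[0], size=k, replace=False, p=row_p)
C, R = W[:, cols], W[rows, :]
U_mid = np.linalg.pinv(C) @ W @ np.linalg.pinv(R)  # optimal middle factor for chosen C, R
W_cur = C @ U_mid @ R

# Parameter savings: the factored form stores U_k, s_k, V_k instead of W.
orig_params = W.size
svd_params = U[:, :k].size + k + Vt[:k, :].size
print(orig_params, svd_params)
```

Replacing each large dense (or appropriately reshaped convolutional) weight matrix with its factors reduces the parameter count from m·n to roughly k·(m+n), which is the source of the smaller model size and faster training reported in the paper.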