Sangeeta Sarkar, Meenakshi Agarwalla, S. Agarwal, M. Sarma
{"title":"An Incremental Pruning Strategy for Fast Training of CNN Models","authors":"Sangeeta Sarkar, Meenakshi Agarwalla, S. Agarwal, M. Sarma","doi":"10.1109/ComPE49325.2020.9200168","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks have progressed significantly over the past few years and they are growing better and bigger each day. Thus, it becomes difficult to compute as well as store these over-parameterized networks. Pruning is a technique to reduce the parameter-count resulting in improved speed, reduced size and reduced computation power. In this paper, we have explored a new pruning strategy based on the technique of Incremental Pruning with less pre-training and achieved better accuracy in lesser computation time on MNIST, CIFAR-10 and CIFAR-100 datasets compared to previous related works with small decrease in compression rates. On MNIST, CIFAR-10 and CIFAR-100 datasets, the proposed technique prunes 10x faster than conventional models with similar accuracy.","PeriodicalId":6804,"journal":{"name":"2020 International Conference on Computational Performance Evaluation (ComPE)","volume":"46 1","pages":"371-375"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Computational Performance Evaluation (ComPE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ComPE49325.2020.9200168","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Deep neural networks have progressed significantly over the past few years, growing bigger and more accurate, which makes these over-parameterized networks expensive to compute and to store. Pruning is a technique that reduces the parameter count, yielding faster inference, smaller model size, and lower computational cost. In this paper, we explore a new pruning strategy based on incremental pruning with less pre-training; compared to previous related work, it achieves better accuracy in less computation time on the MNIST, CIFAR-10, and CIFAR-100 datasets, at the cost of a small decrease in compression rate. On these datasets, the proposed technique prunes 10x faster than conventional models with similar accuracy.
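The abstract does not include code, but the general technique it builds on, incremental (iterative) magnitude pruning interleaved with short fine-tuning phases, can be sketched as follows. This is an illustration of that general idea, not the authors' exact algorithm; `model`, `train_one_epoch`, and the schedule values are hypothetical placeholders, and the sketch assumes PyTorch's built-in pruning utilities.

```python
# Minimal sketch of incremental magnitude pruning (illustrative only,
# NOT the paper's exact method). Assumes PyTorch is installed;
# `model`, `train_one_epoch`, and the schedule values are hypothetical.
import torch.nn as nn
import torch.nn.utils.prune as prune


def incremental_prune(model, train_one_epoch, steps=5, amount_per_step=0.2):
    """Alternate brief fine-tuning with magnitude-based pruning.

    Each step zeroes out `amount_per_step` of the *remaining* weights in
    every Conv2d/Linear layer (L1 criterion), then fine-tunes so the
    network can recover accuracy before the next pruning step.
    """
    for step in range(steps):
        train_one_epoch(model)  # short fine-tuning between pruning steps
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                # Repeated calls prune a fraction of the still-unpruned weights.
                prune.l1_unstructured(module, name="weight", amount=amount_per_step)
    # Fold the accumulated masks into the weight tensors permanently.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.remove(module, "weight")
    return model
```

The speed-up the paper reports comes from reducing the pre-training done before and between pruning steps; in a sketch like this, that corresponds to keeping each fine-tuning phase short rather than retraining to convergence at every step.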