{"title":"Sparse Neural Networks with Topologies Inspired by Butterfly Structures","authors":"D. Puchala, K. Stokfiszewski","doi":"10.1109/spsympo51155.2020.9593501","DOIUrl":null,"url":null,"abstract":"In this paper we propose sparse neural networks with topologies inspired by fast computational butterfly structures of selected linear transforms, namely: fast discrete cosine transform, and Beneš network like topologies. We demonstrate that sparse neural networks allow to obtain high reduction in the number of arithmetic operations and weights needed by neural structures while preserving good efficiency in selected tasks where dense neural networks find their applications. In order to verify the efficiency of the considered structures we conducted a series of experiments in data compression and image recognition. The obtained experimental results confirm good efficiency of the considered sparse neural structures and reveal that such topologies allow for significant reduction in the number of arithmetic operations and weights that must be trained and stored in order to re-use trained neural networks.","PeriodicalId":380515,"journal":{"name":"2021 Signal Processing Symposium (SPSympo)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Signal Processing Symposium (SPSympo)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/spsympo51155.2020.9593501","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In this paper we propose sparse neural networks with topologies inspired by the fast computational butterfly structures of selected linear transforms, namely the fast discrete cosine transform, as well as Beneš-network-like topologies. We demonstrate that sparse neural networks achieve a substantial reduction in the number of arithmetic operations and weights required by neural structures while preserving good efficiency in selected tasks where dense neural networks are typically applied. To verify the efficiency of the considered structures, we conducted a series of experiments in data compression and image recognition. The experimental results confirm the good efficiency of the considered sparse neural structures and show that such topologies allow for a significant reduction in the number of arithmetic operations and in the number of weights that must be trained and stored for a trained network to be reused.
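To make the butterfly idea concrete, below is a minimal NumPy sketch of a butterfly-structured sparse linear map, not the authors' implementation: log2(n) stages of trainable 2x2 mixing blocks wired like an FFT/DCT butterfly diagram, so the map uses 2·n·log2(n) weights instead of the n² of a dense layer. The function name, weight layout, and initialization are illustrative assumptions.

```python
import numpy as np

def butterfly_forward(x, weights):
    """Apply a butterfly-structured linear map to x (length n = 2**k).

    weights[s] has shape (n // 2, 2, 2): one trainable 2x2 mixing block
    per butterfly pair in stage s, giving 2 * n * log2(n) weights in
    total versus n**2 for a dense layer. Layout is an assumption for
    illustration; the paper's exact topology may differ.
    """
    n = x.shape[0]
    stages = int(np.log2(n))
    y = x.astype(float).copy()
    for s in range(stages):
        stride = 2 ** s          # distance between paired indices
        out = np.empty_like(y)
        pair = 0
        for block in range(0, n, 2 * stride):
            for i in range(block, block + stride):
                j = i + stride
                w = weights[s][pair]   # 2x2 block for this (i, j) pair
                out[i] = w[0, 0] * y[i] + w[0, 1] * y[j]
                out[j] = w[1, 0] * y[i] + w[1, 1] * y[j]
                pair += 1
        y = out
    return y

# Usage: randomly initialized butterfly map for n = 8.
n = 8
rng = np.random.default_rng(0)
weights = [rng.standard_normal((n // 2, 2, 2)) for _ in range(int(np.log2(n)))]
print(butterfly_forward(rng.standard_normal(n), weights))
```

In a trainable network, each 2x2 block would be a learned parameter; the sparsity comes from the fixed butterfly connectivity, since each stage touches every input exactly once through a single pair.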