{"title":"并行稀疏恢复算法:一种随机方法","authors":"A. Shah, A. Majumdar","doi":"10.1109/ICDSP.2014.6900814","DOIUrl":null,"url":null,"abstract":"This work proposes a novel technique for accelerating sparse recovery algorithms on multi-core shared memory architectures. All prior works attempt to speed-up algorithms by leveraging the speed-ups in matrix-vector products offered by the GPU. A major limitation of these studies is that in most signal processing applications, the operators are not available as explicit matrices but as implicit fast operators. In such a practical scenario, the prior techniques fail to speed up the sparse recovery algorithms. Our work is based on the principles of stochastic gradient descent. The main sequential bottleneck of sparse recovery methods is a gradient descent step. Instead of computing the full gradient, we compute multiple stochastic gradients in parallel cores; the full gradient is estimated by averaging these stochastic gradients. The other step of sparse recovery algorithms is a shrinkage operation which is inherently parallel. Our proposed method has been compared with existing sequential algorithms. We find that our method is as accurate as the sequential version but is significantly faster - the larger the size of the problem, the faster is our method.","PeriodicalId":301856,"journal":{"name":"2014 19th International Conference on Digital Signal Processing","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Parallelizing sparse recovery algorithms: A stochastic approach\",\"authors\":\"A. Shah, A. Majumdar\",\"doi\":\"10.1109/ICDSP.2014.6900814\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work proposes a novel technique for accelerating sparse recovery algorithms on multi-core shared memory architectures. All prior works attempt to speed-up algorithms by leveraging the speed-ups in matrix-vector products offered by the GPU. A major limitation of these studies is that in most signal processing applications, the operators are not available as explicit matrices but as implicit fast operators. In such a practical scenario, the prior techniques fail to speed up the sparse recovery algorithms. Our work is based on the principles of stochastic gradient descent. The main sequential bottleneck of sparse recovery methods is a gradient descent step. Instead of computing the full gradient, we compute multiple stochastic gradients in parallel cores; the full gradient is estimated by averaging these stochastic gradients. The other step of sparse recovery algorithms is a shrinkage operation which is inherently parallel. Our proposed method has been compared with existing sequential algorithms. 
We find that our method is as accurate as the sequential version but is significantly faster - the larger the size of the problem, the faster is our method.\",\"PeriodicalId\":301856,\"journal\":{\"name\":\"2014 19th International Conference on Digital Signal Processing\",\"volume\":\"68 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 19th International Conference on Digital Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDSP.2014.6900814\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 19th International Conference on Digital Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2014.6900814","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Parallelizing sparse recovery algorithms: A stochastic approach
This work proposes a novel technique for accelerating sparse recovery algorithms on multi-core shared-memory architectures. All prior works attempt to speed up these algorithms by leveraging the fast matrix-vector products offered by the GPU. A major limitation of these studies is that in most signal processing applications the operators are not available as explicit matrices but only as implicit fast operators. In such a practical scenario, the prior techniques fail to speed up the sparse recovery algorithms. Our work is based on the principles of stochastic gradient descent. The main sequential bottleneck of sparse recovery methods is the gradient descent step. Instead of computing the full gradient, we compute multiple stochastic gradients on parallel cores; the full gradient is estimated by averaging these stochastic gradients. The other step of sparse recovery algorithms is a shrinkage operation, which is inherently parallel. Our proposed method has been compared with existing sequential algorithms. We find that our method is as accurate as the sequential version but significantly faster: the larger the problem, the greater the speed-up.
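The following is a minimal sketch, not the authors' implementation, of the iteration the abstract describes: an ISTA-style loop in which each worker computes a stochastic gradient of the least-squares data term on a random subset of rows, the stochastic gradients are averaged to estimate the full gradient, and the element-wise shrinkage (soft-thresholding) step follows. All function and parameter names (parallel_stochastic_ista, num_workers, rows_per_worker, and so on) are illustrative assumptions, and the thread pool merely stands in for whatever multi-core mechanism the paper actually uses.

```python
# Illustrative sketch only; names and parallelization strategy are assumptions,
# not the method reported in the paper.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def soft_threshold(x, tau):
    """Element-wise shrinkage operator; trivially parallel."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def stochastic_gradient(A, y, x, rows):
    """Gradient of 0.5*||y_S - A_S x||^2 restricted to a row subset S."""
    A_s, y_s = A[rows], y[rows]
    return A_s.T @ (A_s @ x - y_s)

def parallel_stochastic_ista(A, y, lam=0.1, step=None, iters=300,
                             num_workers=4, rows_per_worker=None, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the full problem
    if rows_per_worker is None:
        rows_per_worker = max(1, m // num_workers)
    x = np.zeros(n)
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for _ in range(iters):
            subsets = [rng.choice(m, rows_per_worker, replace=False)
                       for _ in range(num_workers)]
            # Each worker computes a stochastic gradient on its row subset.
            grads = list(pool.map(lambda s: stochastic_gradient(A, y, x, s),
                                  subsets))
            # Average the stochastic gradients to estimate the full gradient,
            # rescaling so each partial gradient matches the full-data scale.
            g = (m / rows_per_worker) * np.mean(grads, axis=0)
            # Shrinkage step (element-wise, hence inherently parallel).
            x = soft_threshold(x - step * g, step * lam)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n, k = 200, 500, 10
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true
    x_hat = parallel_stochastic_ista(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this sketch the m/rows_per_worker rescaling keeps each partial gradient on the same scale as the full gradient, so that under uniform row sampling the average across workers is an unbiased estimate of the full gradient the sequential algorithm would compute.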