Lp quasi-norm minimization: algorithm and applications
Omar M. Sleem, M. E. Ashour, N. S. Aybat, Constantino M. Lagoa
EURASIP Journal on Advances in Signal Processing, published 2024-02-07. DOI: 10.1186/s13634-024-01114-6
Sparsity finds applications in diverse areas such as statistics, machine learning, and signal processing. Computations over sparse structures are less complex than their dense counterparts and require less storage. This paper proposes a heuristic method for retrieving sparse approximate solutions of optimization problems via minimizing the \(\ell _{p}\) quasi-norm, where \(0<p<1\). An iterative two-block algorithm for minimizing the \(\ell _{p}\) quasi-norm subject to convex constraints is proposed. The proposed algorithm requires solving for the roots of a scalar polynomial, as opposed to applying a soft thresholding operator in the case of \(\ell _{1}\) norm minimization. The algorithm's merit lies in its ability to solve \(\ell _{p}\) quasi-norm minimization subject to any convex constraint set. For the specific case of constraints defined by differentiable functions with Lipschitz continuous gradient, a second, faster algorithm is proposed. Using a proximal gradient step, we avoid the convex projection step and hence improve the algorithm's speed while proving its convergence. We present various applications where the proposed algorithm excels, namely sparse signal reconstruction, system identification, and matrix completion. The results demonstrate the significant gains obtained by the proposed algorithm compared to other \(\ell _{p}\) quasi-norm based methods presented in previous literature.
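To make the contrast with \(\ell _{1}\) soft thresholding concrete, the sketch below is a minimal, illustrative Python snippet rather than the paper's two-block algorithm: for the special case \(p=1/2\), the elementwise proximal subproblem of the \(\ell _{p}\) quasi-norm reduces to finding the roots of a cubic polynomial, and the resulting operator can then be plugged into a generic proximal-gradient loop on an unconstrained toy problem. All function names, parameter values, and the toy data are assumptions made here for illustration and do not come from the paper.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||x||_1 (the l1 benchmark): closed-form shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_half_quasinorm(v, lam):
    """Elementwise proximal operator of lam * |x|^(1/2) (the p = 1/2 case):
        argmin_x  lam * |x|^(1/2) + 0.5 * (x - v)^2.
    Substituting t = sqrt(|x|) turns the stationarity condition into the cubic
        t^3 - |v| * t + lam / 2 = 0,
    so each scalar update is a polynomial root-finding problem rather than a
    soft-thresholding formula."""
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    for i, vi in enumerate(v.ravel()):
        s, a = np.sign(vi), abs(vi)
        best_x, best_obj = 0.0, 0.5 * a ** 2              # candidate x = 0
        for t in np.roots([1.0, 0.0, -a, lam / 2.0]):     # roots of the cubic in t
            if abs(t.imag) < 1e-10 and t.real > 0.0:
                x = t.real ** 2
                obj = lam * np.sqrt(x) + 0.5 * (x - a) ** 2
                if obj < best_obj:
                    best_x, best_obj = x, obj
        out.flat[i] = s * best_x
    return out

# The lp prox shrinks large entries less aggressively than soft thresholding
# while still mapping small entries exactly to zero.
v = np.array([3.0, 0.4, -2.0, 0.05])
print(soft_threshold(v, 1.0))        # l1 shrinkage
print(prox_half_quasinorm(v, 1.0))   # p = 1/2 prox via cubic roots

# Toy unconstrained sparse-recovery demo: proximal gradient on
#   0.5 * ||Ax - b||^2 + lam * sum_i |x_i|^(1/2),
# a stand-in for the constrained formulations treated in the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
x, lam = np.zeros(100), 0.5
for _ in range(300):
    grad = A.T @ (A @ x - b)
    x = prox_half_quasinorm(x - grad / L, lam / L)
# Entries above a small tolerance (exact support recovery is not guaranteed
# for this non-convex toy problem).
print(np.nonzero(np.abs(x) > 1e-3)[0])
```

The proximal-gradient loop above mirrors only the general idea of replacing a projection with a prox step under a Lipschitz-smooth data-fit term; the paper's algorithms handle general convex constraint sets and should be consulted for the actual update rules.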
Journal Introduction:
The aim of the EURASIP Journal on Advances in Signal Processing is to highlight the theoretical and practical aspects of signal processing in new and emerging technologies. The journal is directed as much at the practicing engineer as at the academic researcher. Authors of articles with novel contributions to the theory and/or practice of signal processing are welcome to submit their articles for consideration.