{"title":"亚线性稀疏性压缩传感的广义近似信息传递","authors":"Keigo Takeuchi","doi":"arxiv-2409.06320","DOIUrl":null,"url":null,"abstract":"This paper addresses the reconstruction of an unknown signal vector with\nsublinear sparsity from generalized linear measurements. Generalized\napproximate message-passing (GAMP) is proposed via state evolution in the\nsublinear sparsity limit, where the signal dimension $N$, measurement dimension\n$M$, and signal sparsity $k$ satisfy $\\log k/\\log N\\to \\gamma\\in[0, 1)$ and\n$M/\\{k\\log (N/k)\\}\\to\\delta$ as $N$ and $k$ tend to infinity. While the overall\nflow in state evolution is the same as that for linear sparsity, each proof\nstep for inner denoising requires stronger assumptions than those for linear\nsparsity. The required new assumptions are proved for Bayesian inner denoising.\nWhen Bayesian outer and inner denoisers are used in GAMP, the obtained state\nevolution recursion is utilized to evaluate the prefactor $\\delta$ in the\nsample complexity, called reconstruction threshold. If and only if $\\delta$ is\nlarger than the reconstruction threshold, Bayesian GAMP can achieve\nasymptotically exact signal reconstruction. In particular, the reconstruction\nthreshold is finite for noisy linear measurements when the support of non-zero\nsignal elements does not include a neighborhood of zero. 
As numerical examples,\nthis paper considers linear measurements and 1-bit compressed sensing.\nNumerical simulations for both cases show that Bayesian GAMP outperforms\nexisting algorithms for sublinear sparsity in terms of the sample complexity.","PeriodicalId":501082,"journal":{"name":"arXiv - MATH - Information Theory","volume":"15 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generalized Approximate Message-Passing for Compressed Sensing with Sublinear Sparsity\",\"authors\":\"Keigo Takeuchi\",\"doi\":\"arxiv-2409.06320\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper addresses the reconstruction of an unknown signal vector with\\nsublinear sparsity from generalized linear measurements. Generalized\\napproximate message-passing (GAMP) is proposed via state evolution in the\\nsublinear sparsity limit, where the signal dimension $N$, measurement dimension\\n$M$, and signal sparsity $k$ satisfy $\\\\log k/\\\\log N\\\\to \\\\gamma\\\\in[0, 1)$ and\\n$M/\\\\{k\\\\log (N/k)\\\\}\\\\to\\\\delta$ as $N$ and $k$ tend to infinity. While the overall\\nflow in state evolution is the same as that for linear sparsity, each proof\\nstep for inner denoising requires stronger assumptions than those for linear\\nsparsity. The required new assumptions are proved for Bayesian inner denoising.\\nWhen Bayesian outer and inner denoisers are used in GAMP, the obtained state\\nevolution recursion is utilized to evaluate the prefactor $\\\\delta$ in the\\nsample complexity, called reconstruction threshold. If and only if $\\\\delta$ is\\nlarger than the reconstruction threshold, Bayesian GAMP can achieve\\nasymptotically exact signal reconstruction. 
In particular, the reconstruction\\nthreshold is finite for noisy linear measurements when the support of non-zero\\nsignal elements does not include a neighborhood of zero. As numerical examples,\\nthis paper considers linear measurements and 1-bit compressed sensing.\\nNumerical simulations for both cases show that Bayesian GAMP outperforms\\nexisting algorithms for sublinear sparsity in terms of the sample complexity.\",\"PeriodicalId\":501082,\"journal\":{\"name\":\"arXiv - MATH - Information Theory\",\"volume\":\"15 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - MATH - Information Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06320\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Information Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Generalized Approximate Message-Passing for Compressed Sensing with Sublinear Sparsity
This paper addresses the reconstruction of an unknown signal vector with
sublinear sparsity from generalized linear measurements. Generalized
approximate message-passing (GAMP) is proposed and analyzed via state evolution in the
sublinear sparsity limit, where the signal dimension $N$, measurement dimension
$M$, and signal sparsity $k$ satisfy $\log k/\log N\to \gamma\in[0, 1)$ and
$M/\{k\log (N/k)\}\to\delta$ as $N$ and $k$ tend to infinity. While the overall
flow in state evolution is the same as that for linear sparsity, each proof
step for inner denoising requires stronger assumptions than those for linear
sparsity. The required new assumptions are proved for Bayesian inner denoising.
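For concreteness, the two limits that define the sublinear-sparsity regime can be evaluated numerically. The sketch below is illustrative only; the parameter values are not taken from the paper.

```python
import math

def sparsity_scaling(N, k, M):
    """Evaluate the two quantities that define the sublinear-sparsity regime:
    gamma = log k / log N (the sparsity exponent) and
    delta = M / (k * log(N / k)) (the sample-complexity prefactor)."""
    gamma = math.log(k) / math.log(N)
    delta = M / (k * math.log(N / k))
    return gamma, delta

# Illustrative values: N = 10^6 and k = 100 give gamma = 1/3,
# and M is chosen so that the prefactor delta is about 2.
N, k = 10**6, 100
M = round(2 * k * math.log(N / k))
gamma, delta = sparsity_scaling(N, k, M)
```

Here $\gamma\in[0,1)$ covers every sublinear regime from constant sparsity ($\gamma=0$) up to, but excluding, linear sparsity ($\gamma=1$), and $\delta$ is the prefactor compared against the reconstruction threshold.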
When Bayesian outer and inner denoisers are used in GAMP, the resulting state
evolution recursion is used to evaluate the critical value of the prefactor
$\delta$ in the sample complexity, called the reconstruction threshold:
Bayesian GAMP achieves asymptotically exact signal reconstruction if and only
if $\delta$ is larger than the reconstruction threshold. In particular, the reconstruction
threshold is finite for noisy linear measurements when the support of non-zero
signal elements does not include a neighborhood of zero. As numerical examples,
this paper considers linear measurements and 1-bit compressed sensing.
Numerical simulations for both cases show that Bayesian GAMP outperforms
existing algorithms for sublinear sparsity in terms of the sample complexity.
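To make the structure of GAMP concrete, the sketch below implements one standard scalar-variance variant (in the style of Rangan's GAMP) for the noisy linear-measurement case, with a Bernoulli-Gaussian prior as the Bayesian inner denoiser. It is a minimal illustration of the algorithm family, not the paper's exact construction; all parameter values are assumptions for the example.

```python
import numpy as np

def bg_denoiser(r, tau, rho, vx):
    """Posterior mean/variance for a Bernoulli-Gaussian prior
    x ~ rho*N(0, vx) + (1-rho)*delta_0, given r = x + N(0, tau)."""
    # Log-likelihood ratio of the "nonzero" vs "zero" hypotheses,
    # clipped to keep exp() numerically safe.
    llr = (np.log(rho / (1.0 - rho))
           + 0.5 * np.log(tau / (tau + vx))
           + 0.5 * r**2 * vx / (tau * (tau + vx)))
    pi = 1.0 / (1.0 + np.exp(-np.clip(llr, -30.0, 30.0)))  # P(x != 0 | r)
    gain = vx / (vx + tau)                                  # Wiener gain under the slab
    mean = pi * gain * r
    var = pi * (gain * tau + (gain * r)**2) - mean**2
    return mean, var

def gamp_awgn(A, y, sigma2, rho, vx, iters=50):
    """Scalar-variance GAMP for y = A @ x + N(0, sigma2) with a
    Bernoulli-Gaussian signal prior (an illustrative sketch)."""
    M, N = A.shape
    A2 = A**2
    xhat = np.zeros(N)
    taux = np.full(N, rho * vx)   # initialize at the prior variance
    s = np.zeros(M)
    for _ in range(iters):
        # Output (outer) step with Onsager correction.
        taup = A2 @ taux
        p = A @ xhat - taup * s
        s = (y - p) / (taup + sigma2)
        taus = 1.0 / (taup + sigma2)
        # Input (inner) denoising step.
        taur = 1.0 / (A2.T @ taus)
        r = xhat + taur * (A.T @ s)
        xhat, taux = bg_denoiser(r, taur, rho, vx)
    return xhat
```

For an i.i.d. Gaussian sensing matrix this vanilla recursion typically converges without damping; 1-bit measurements would replace only the AWGN output step with the corresponding quantized-output denoiser.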