Information and Inference: A Journal of the IMA - Latest Articles

Multivariate super-resolution without separation
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-27 | DOI: 10.1093/imaiai/iaad024
Bakytzhan Kurmanbek, Elina Robeva
Abstract: In this paper, we study the high-dimensional super-resolution imaging problem. Here, we are given an image of a number of point sources of light whose locations and intensities are unknown. The image is pixelized and is blurred by a known point-spread function arising from the imaging device. We encode the unknown point sources and their intensities via a non-negative measure and propose a convex optimization program to find it. Assuming the device's point-spread function is componentwise decomposable, we show that the optimal solution is the true measure in the noiseless case, and that it approximates the true measure well in the noisy case with respect to the generalized Wasserstein distance. Our main assumption is that the components of the point-spread function form a Tchebychev system ($T$-system) in the noiseless case and a $T^{*}$-system in the noisy case, mild conditions that are satisfied by Gaussian point-spread functions. Our work generalizes to all dimensions the work of [14], where the same analysis is carried out in two dimensions. We also extend the results of [27] to the high-dimensional case when the point-spread function decomposes.
Citations: 0
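The convex program in the entry above optimizes over non-negative measures; as a loose numerical illustration of the same idea, the sketch below discretizes a one-dimensional toy version onto a grid and solves a non-negative least-squares surrogate with a Gaussian point-spread function. All dimensions, locations and parameters are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 1-D toy: point sources on [0, 1] seen through a Gaussian PSF of known width.
true_locations = np.array([0.22, 0.25, 0.71])    # deliberately close sources, no separation
true_weights = np.array([1.0, 0.8, 1.5])
sigma = 0.05                                      # assumed PSF width

pixels = np.linspace(0.0, 1.0, 60)                # pixel centres of the imaging device
psf = lambda x, t: np.exp(-(x - t) ** 2 / (2.0 * sigma ** 2))

# Noiseless pixelized image: a sum of PSFs weighted by the source intensities.
image = sum(w * psf(pixels, t) for t, w in zip(true_locations, true_weights))

# Discretize candidate locations on a fine grid and solve
#   min ||A c - image||_2  subject to  c >= 0
# as a grid-based surrogate for the measure-valued convex program.
grid = np.linspace(0.0, 1.0, 400)
A = psf(pixels[:, None], grid[None, :])
coeffs, residual = nnls(A, image)

print("grid points carrying mass:", grid[coeffs > 1e-3])
print("least-squares residual:", residual)
```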
A manifold two-sample test study: integral probability metric with neural networks
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-27 | DOI: 10.1093/imaiai/iaad018
Jie Wang, Minshuo Chen, Tuo Zhao, Wenjing Liao, Yao Xie
Abstract: Two-sample tests aim to determine whether two collections of observations follow the same distribution. We propose two-sample tests based on the integral probability metric (IPM) for high-dimensional samples supported on a low-dimensional manifold. We characterize the properties of the proposed tests with respect to the number of samples $n$ and the structure of the manifold with intrinsic dimension $d$. When an atlas is given, we propose a two-step test to identify the difference between general distributions, which achieves a type-II risk of order $n^{-1/\max\{d,2\}}$. When an atlas is not given, we propose a Hölder IPM test that applies to data distributions with $(s,\beta)$-Hölder densities and achieves a type-II risk of order $n^{-(s+\beta)/d}$. To mitigate the heavy computational burden of evaluating the Hölder IPM, we approximate the Hölder function class using neural networks. Based on the approximation theory of neural networks, we show that the neural network IPM test has a type-II risk of order $n^{-(s+\beta)/d}$, the same order as that of the Hölder IPM test. Our proposed tests are adaptive to low-dimensional geometric structure because their performance crucially depends on the intrinsic dimension instead of the data dimension.
Citations: 0
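The tests in the entry above are built on integral probability metrics over Hölder or neural-network function classes; the sketch below is a deliberately simplified stand-in that uses a bounded linear function class (for which the empirical IPM reduces to the Euclidean norm of the difference of sample means) together with a permutation calibration. The data, function class and sample sizes are assumptions for illustration, not the authors' construction.

```python
import numpy as np

def linear_ipm(x, y):
    """Empirical IPM over the class {f(z) = w.z : ||w||_2 <= 1}, which equals
    the Euclidean norm of the difference of the sample means."""
    return np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

def permutation_test(x, y, n_perm=500, seed=0):
    """Reject equality of distributions when the observed IPM is large under the permutation null."""
    rng = np.random.default_rng(seed)
    observed = linear_ipm(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    null_stats = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.permutation(len(pooled))
        null_stats[b] = linear_ipm(pooled[idx[:n]], pooled[idx[n:]])
    p_value = (1 + np.sum(null_stats >= observed)) / (1 + n_perm)
    return observed, p_value

# Toy data supported on a 1-D manifold (a circle) embedded in 10 dimensions.
rng = np.random.default_rng(1)
embed = lambda t: np.stack([np.cos(t), np.sin(t)] + [np.zeros_like(t)] * 8, axis=1)
theta_x = rng.uniform(0, 2 * np.pi, 200)          # uniform on the whole circle
theta_y = rng.uniform(0, np.pi, 200)              # uniform on half the circle
stat, p = permutation_test(embed(theta_x), embed(theta_y))
print(f"IPM statistic: {stat:.3f}, permutation p-value: {p:.3f}")
```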
Fast and provable tensor robust principal component analysis via scaled gradient descent
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-27 | DOI: 10.1093/imaiai/iaad019
Harry Dong, Tian Tong, Cong Ma, Yuejie Chi
Abstract: An increasing number of data science and machine learning problems rely on computation with tensors, which capture the multi-way relationships and interactions of data better than matrices. When tapping into this critical advantage, a key challenge is to develop computationally efficient and provably correct algorithms for extracting useful information from tensor data that are simultaneously robust to corruptions and ill-conditioning. This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from its observations contaminated by sparse corruptions, under the Tucker decomposition. To minimize the computation and memory footprints, we propose to directly recover the low-dimensional tensor factors, starting from a tailored spectral initialization, via scaled gradient descent (ScaledGD), coupled with an iteration-varying thresholding operation to adaptively remove the impact of corruptions. Theoretically, we establish that the proposed algorithm converges linearly to the true low-rank tensor at a constant rate that is independent of its condition number, as long as the level of corruptions is not too large. Empirically, we demonstrate that the proposed algorithm achieves better and more scalable performance than state-of-the-art tensor RPCA algorithms through synthetic experiments and real-world applications.
Citations: 0
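The paper's algorithm runs scaled gradient descent on Tucker factors of a tensor with an iteration-varying threshold; the sketch below shows the same mechanism in a simpler matrix robust-PCA setting, with a crude fixed-fraction hard-thresholding step. The rank, step size, thresholding fraction and initialization are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def hard_threshold(M, keep_frac):
    """Keep only the largest-magnitude entries of M, a crude sparse-corruption estimate."""
    k = int(keep_frac * M.size)
    if k == 0:
        return np.zeros_like(M)
    cutoff = np.partition(np.abs(M).ravel(), -k)[-k]
    return M * (np.abs(M) >= cutoff)

def scaled_gd_rpca(Y, rank, keep_frac=0.1, step=0.5, iters=100):
    """Matrix analogue of robust PCA via scaled gradient descent: Y ~ L R^T + S with S sparse."""
    # Spectral initialization after removing the largest entries.
    S = hard_threshold(Y, keep_frac)
    U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
    L = U[:, :rank] * np.sqrt(s[:rank])
    R = Vt[:rank, :].T * np.sqrt(s[:rank])
    for _ in range(iters):
        S = hard_threshold(Y - L @ R.T, keep_frac)        # re-estimate the sparse part
        E = L @ R.T + S - Y                               # residual
        # Scaled gradient steps: precondition by the small Gram matrices of the factors.
        L_new = L - step * E @ R @ np.linalg.inv(R.T @ R)
        R_new = R - step * E.T @ L @ np.linalg.inv(L.T @ L)
        L, R = L_new, R_new
    return L @ R.T, S

# Toy problem: a rank-3 matrix plus roughly 5% sparse corruptions.
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 3)) @ rng.normal(size=(60, 3)).T
S0 = np.zeros_like(X)
mask = rng.random(X.shape) < 0.05
S0[mask] = rng.normal(scale=5.0, size=mask.sum())
X_hat, S_hat = scaled_gd_rpca(X + S0, rank=3)
print("relative recovery error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```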
Generalization error bounds for iterative recovery algorithms unfolded as neural networks
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-27 | DOI: 10.1093/imaiai/iaad023
Ekkehard Schnoor, Arash Behboodi, Holger Rauhut
Abstract: Motivated by the learned iterative soft thresholding algorithm (LISTA), we introduce a general class of neural networks suitable for sparse reconstruction from few linear measurements. By allowing a wide range of degrees of weight-sharing between the layers, we enable a unified analysis for very different neural network types, ranging from recurrent ones to networks more similar to standard feedforward neural networks. Based on training samples, via empirical risk minimization, we aim at learning the optimal network parameters and thereby the optimal network that reconstructs signals from their low-dimensional linear measurements. We derive generalization bounds by analyzing the Rademacher complexity of hypothesis classes consisting of such deep networks, which also take into account the thresholding parameters. We obtain estimates of the sample complexity that essentially depend only linearly on the number of parameters and on the depth. We apply our main result to obtain specific generalization bounds for several practical examples, including different algorithms for (implicit) dictionary learning and convolutional neural networks.
Citations: 0
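For context on the unfolded architectures the bound covers, here is a minimal LISTA-style forward pass with fully shared weights, instantiated with the classical ISTA weights rather than learned ones; the dimensions, number of layers and threshold value are placeholder assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding, the nonlinearity used in (L)ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lista_forward(y, W1, W2, thetas):
    """Unfolded iterative soft-thresholding: x_{k+1} = S_{theta_k}(W1 y + W2 x_k)."""
    x = np.zeros(W1.shape[0])
    for theta in thetas:                      # one pass per unfolded layer (weights shared here)
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

# Placeholder instantiation: ISTA weights W1 = A^T / L, W2 = I - A^T A / L derived from a
# random measurement matrix A, i.e. the network before any learning has taken place.
rng = np.random.default_rng(3)
m, n, k = 50, 120, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
support = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2                 # squared spectral norm = gradient Lipschitz constant
W1, W2 = A.T / L, np.eye(n) - (A.T @ A) / L
x_hat = lista_forward(y, W1, W2, thetas=[0.05 / L] * 20)
top = np.sort(np.argsort(-np.abs(x_hat))[:k])
print("largest entries at:", top, "true support:", np.sort(support))
```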
Separation-free super-resolution from compressed measurements is possible: an orthonormal atomic norm minimization approach
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-27 | DOI: 10.1093/imaiai/iaad033
Jirong Yi, Soura Dasgupta, Jian-Feng Cai, Mathews Jacob, Jingchao Gao, Myung Cho, Weiyu Xu
Abstract: We consider the problem of recovering a superposition of $R$ distinct complex exponential functions from compressed non-uniform time-domain samples. Total variation (TV) minimization or atomic norm minimization has been proposed in the literature to recover the $R$ frequencies or the missing data. However, it is known that for TV minimization and atomic norm minimization to recover the missing data or the frequencies, the underlying $R$ frequencies must be well separated, even when the measurements are noiseless. This paper shows that the Hankel matrix recovery approach can super-resolve the $R$ complex exponentials and their frequencies from compressed non-uniform measurements, regardless of how close their frequencies are to each other. We propose a new concept of orthonormal atomic norm minimization (OANM) and demonstrate that the success of Hankel matrix recovery in separation-free super-resolution comes from the fact that the nuclear norm of a Hankel matrix is an orthonormal atomic norm. More specifically, we show that, in traditional atomic norm minimization, the underlying parameter values must be well separated to achieve successful signal recovery if the atoms change continuously with respect to the continuously valued parameter. In contrast, OANM can succeed even when the original atoms are arbitrarily close. As a byproduct of this research, we provide a matrix-theoretic inequality for the nuclear norm and prove it using the theory of compressed sensing.
Citations: 0
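A fact underlying the Hankel-recovery argument is that a Hankel matrix built from uniform samples of $R$ complex exponentials has rank $R$ regardless of how close the frequencies are; the snippet below only verifies this numerically on an assumed toy signal and does not implement the compressed-measurement recovery or the OANM program.

```python
import numpy as np
from scipy.linalg import hankel

# Toy signal: R = 3 complex exponentials, two of which have nearly identical frequencies.
freqs = np.array([0.200, 0.201, 0.550])
amps = np.array([1.0, 0.7, 1.3])
N = 64
t = np.arange(N)
x = (amps[None, :] * np.exp(2j * np.pi * t[:, None] * freqs[None, :])).sum(axis=1)

# Square-ish Hankel matrix with H[i, j] = x[i + j].
H = hankel(x[: N // 2 + 1], x[N // 2 :])

sv = np.linalg.svd(H, compute_uv=False)
print("top singular values:", np.round(sv[:6], 4))
# Exactly three singular values stand clearly above machine precision, so rank(H) = R
# even though two of the frequencies are only 0.001 apart.
```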
Minimax detection of localized signals in statistical inverse problems
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-27 | DOI: 10.1093/imaiai/iaad026
Markus Pohlmann, Frank Werner, Axel Munk
Abstract: We investigate minimax testing for detecting local signals or linear combinations of such signals when only indirect data are available. Naturally, in the presence of noise, signals that are too small cannot be reliably detected. In a Gaussian white noise model, we discuss upper and lower bounds for the minimal size of the signal such that testing with small error probabilities is possible. In certain situations we are able to characterize the asymptotic minimax detection boundary. Our results are applied to inverse problems such as numerical differentiation, deconvolution and the inversion of the Radon transform.
Citations: 0
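The entry above concerns indirect observations and minimax detection boundaries; as a very elementary, direct-observation illustration of the underlying phenomenon (small localized signals are detectable only above a threshold that grows like $\sqrt{2\log n}$), the sketch below tests for a localized bump in Gaussian white noise with a maximum statistic and a Bonferroni cut-off. This is a toy stand-in, not the tests analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

def detect_localized(y, alpha=0.05):
    """Reject 'pure N(0,1) noise' when max_i |y_i| exceeds a Bonferroni threshold."""
    n = len(y)
    threshold = norm.ppf(1 - alpha / (2 * n))     # union bound over n two-sided tests
    return bool(np.max(np.abs(y)) > threshold)

rng = np.random.default_rng(4)
n = 10_000
noise = rng.standard_normal(n)

# A localized signal: a short bump with amplitude slightly above the sqrt(2 log n) scale.
signal = np.zeros(n)
signal[5000:5005] = 1.2 * np.sqrt(2 * np.log(n))

print("reject without signal:", detect_localized(noise))
print("reject with signal:   ", detect_localized(noise + signal))
```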
Sharp, strong and unique minimizers for low complexity robust recovery
Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-23 | DOI: 10.1093/imaiai/iaad005
Jalal Fadili, Tran T. A. Nghia, Trinh T. T. Tran
Abstract: In this paper, we show the important roles of sharp minima and strong minima for robust recovery. We also obtain several characterizations of sharp minima for convex regularized optimization problems. Our characterizations are quantitative and verifiable, especially in the case of decomposable norm regularized problems, including sparsity, group-sparsity and low-rank convex problems. For group-sparsity optimization problems, we show that a unique solution is a strong solution and obtain quantitative characterizations of solution uniqueness.
Citations: 0
Non-adaptive algorithms for threshold group testing with consecutive positives
IF 1.6 | Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-04-04 | DOI: 10.1093/imaiai/iaad009
Abstract: Given up to $d$ positive items in a large population of $n$ items ($d \ll n$), the goal of threshold group testing is to efficiently identify the positives via tests, where a test on a subset of items is positive if the subset contains at least $u$ positive items, negative if it contains up to $\ell$ positive items, and arbitrary (either positive or negative) otherwise. The parameter $g = u - \ell - 1$ is called the gap. In non-adaptive strategies, all tests are fixed in advance and can be represented as a measurement matrix, in which each row and each column represent a test and an item, respectively. In this paper, we consider non-adaptive threshold group testing with consecutive positives, in which the items are linearly ordered and the positives are consecutive in that order. We show that by designing deterministic and strongly explicit measurement matrices, $\lceil \log_{2}{\lceil \frac{n}{d} \rceil} \rceil + 2d + 3$ (respectively, $\lceil \log_{2}{\lceil \frac{n}{d} \rceil} \rceil + 3d$) tests suffice to identify the positives in $O\left(\log_{2}{\frac{n}{d}} + d\right)$ time when $g = 0$ (respectively, $g > 0$). These results significantly improve on the state-of-the-art scheme, which needs $15 \lceil \log_{2}{\lceil \frac{n}{d} \rceil} \rceil + 4d + 71$ tests to identify the positives in $O\left(\frac{n}{d} \log_{2}{\frac{n}{d}} + ud^{2}\right)$ time and whose associated measurement matrices are random and (non-strongly) explicit.
Citations: 1
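A minimal simulation of the test model defined in the abstract above: threshold tests applied to pools of items when the positives are consecutive in the item ordering. The contiguous-window pooling design and all parameters below are placeholders chosen for illustration; they are not the paper's explicit measurement matrices or decoding procedure.

```python
import numpy as np

def threshold_test(pool, positives, u, ell, rng):
    """One threshold group test: positive if the pool holds >= u positives,
    negative if it holds <= ell, and arbitrary (random here) inside the gap."""
    hits = len(set(pool) & set(positives))
    if hits >= u:
        return 1
    if hits <= ell:
        return 0
    return int(rng.random() < 0.5)                 # gap region: answer may be anything

rng = np.random.default_rng(5)
n, d, u, ell = 1000, 8, 3, 1
start = int(rng.integers(0, n - d))
positives = list(range(start, start + d))          # consecutive positives, as in the model above

# Placeholder non-adaptive design: overlapping contiguous windows of width 2d.
pools = [list(range(i, min(i + 2 * d, n))) for i in range(0, n, d)]
outcomes = [threshold_test(p, positives, u, ell, rng) for p in pools]
print("windows that tested positive start at:", [p[0] for p, o in zip(pools, outcomes) if o == 1])
print("true positive block starts at:", start)
```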
Theoretical analysis and computation of the sample Fréchet mean of sets of large graphs for various metrics
IF 1.6 | Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-03-28 | DOI: 10.1093/imaiai/iaad002
Daniel Ferguson, F. G. Meyer
Abstract: To characterize the location (mean, median) of a set of graphs, one needs a notion of centrality that has been adapted to metric spaces. A standard approach is to consider the Fréchet mean. In practice, computing the Fréchet mean for sets of large graphs presents many computational issues. In this work, we suggest a method that may be used to compute the Fréchet mean for sets of graphs and that is metric independent. We show that the proposed technique can be used to determine the Fréchet mean when considering the Hamming distance or a distance defined by the difference between the spectra of the adjacency matrices of the graphs.
Citations: 0
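For the Hamming-distance case mentioned above, the sum of distances to the sample decomposes entrywise over the adjacency matrices, so the sample Fréchet mean can be computed by an entrywise majority vote; the sketch below illustrates only that special case on synthetic graphs (the spectral-distance case discussed in the abstract is not attempted here, and the graph model is an arbitrary choice).

```python
import numpy as np

def frechet_mean_hamming(adjacency_list):
    """Sample Fréchet mean of unweighted graphs under the Hamming distance.
    The objective sum_k d_H(A, A_k) separates over entries, so each entry of the
    minimizer is simply the majority value across the sample."""
    stack = np.stack(adjacency_list)               # shape (num_graphs, n, n)
    return (stack.mean(axis=0) >= 0.5).astype(int)

# Toy sample: noisy copies of a common random "template" graph.
rng = np.random.default_rng(6)
n = 30
template = np.triu((rng.random((n, n)) < 0.2).astype(int), 1)
template = template + template.T                   # symmetric, no self-loops

graphs = []
for _ in range(15):
    flips = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
    flips = flips + flips.T
    graphs.append(template ^ flips)                # flip roughly 5% of the (non-)edges

mean_graph = frechet_mean_hamming(graphs)
print("edge disagreements with the template:", int(np.abs(mean_graph - template).sum()) // 2)
```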
Local Viterbi property in decoding
IF 1.6 | Q4 (Mathematics)
Information and Inference: A Journal of the IMA | Pub Date: 2023-03-20 | DOI: 10.1093/imaiai/iaad004
J. Lember
Abstract: This article studies the decoding problem (also known as the classification or segmentation problem) for pairwise Markov models (PMMs). A PMM is a process in which the observation process and the underlying state sequence form a two-dimensional Markov chain, a natural generalization of the hidden Markov model. The standard solutions to the decoding problem are the so-called Viterbi path, a sequence with maximum state-path probability given the observations, or the pointwise maximum a posteriori (PMAP) path, which maximizes the expected number of correctly classified entries. When the goal is to simultaneously maximize both criteria, the conditional probability (corresponding to the Viterbi path) and the pointwise conditional probability (corresponding to the PMAP path), they are combined into a single criterion via a regularization parameter $C$. The main objective of the article is to study the behaviour of the solution, called the hybrid path, as $C$ grows. Increasing $C$ increases the conditional probability of the hybrid path, and when $C$ is big enough, every hybrid path is a Viterbi path. We show that hybrid paths also approach the Viterbi path locally: we define $m$-locally Viterbi paths and show that the hybrid path is $m$-locally Viterbi whenever $C$ is big enough. This might all lead to the impression that when $C$ is relatively big, any hybrid path that is not yet Viterbi differs from the Viterbi path by only a few single entries. We argue that this intuition is wrong, because when a hybrid path is unique and $m$-locally Viterbi, different hybrid paths differ by at least $m$ entries. Thus, as $C$ increases, the different hybrid paths tend to differ from each other by larger and larger intervals. Hence the hybrid paths might offer a variety of rather different solutions to the decoding problem.
Citations: 1
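For reference, the Viterbi path that the article's hybrid criterion interpolates towards is computed by standard dynamic programming; the sketch below does this for an ordinary HMM in log-space. The pairwise Markov model and the hybrid $C$-regularized criterion from the paper are not implemented, and the toy parameters are arbitrary.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most probable state path of an HMM given observations, by dynamic programming.
    log_pi: (K,) initial log-probabilities; log_A: (K, K) transitions; log_B: (K, M) emissions."""
    K, T = len(log_pi), len(obs)
    score = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    score[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_A       # cand[i, j]: best score ending in i, then i -> j
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(K)] + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for t in range(T - 2, -1, -1):                 # backtrack along the stored argmax pointers
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy two-state, two-symbol HMM; all parameters are illustrative values.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
obs = np.array([0, 0, 1, 1, 1, 0, 1, 1])
print("Viterbi path:", viterbi(np.log(pi), np.log(A), np.log(B), obs))
```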