{"title":"The Large Deviation Principle for W-random spectral measures","authors":"Mahya Ghandehari , Georgi S. Medvedev","doi":"10.1016/j.acha.2025.101756","DOIUrl":"10.1016/j.acha.2025.101756","url":null,"abstract":"<div><div>The <em>W</em>-random graphs provide a flexible framework for modeling large random networks. Using the Large Deviation Principle (LDP) for <em>W</em>-random graphs from <span><span>[19]</span></span>, we prove the LDP for the corresponding class of random symmetric Hilbert-Schmidt integral operators. Our main result describes how the eigenvalues and the eigenspaces of the integral operator are affected by large deviations in the underlying random graphon. To prove the LDP, we demonstrate continuous dependence of the spectral measures associated with integral operators on the corresponding graphons and use the Contraction Principle. To illustrate our results, we obtain leading order asymptotics of the eigenvalues of small-world and bipartite random graphs conditioned on atypical edge counts. These examples suggest several representative scenarios of how the eigenvalues and the eigenspaces are affected by large deviations. We discuss the implications of these observations for bifurcation analysis of Dynamical Systems and Graph Signal Processing.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"77 ","pages":"Article 101756"},"PeriodicalIF":2.6,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143479228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient identification of wide shallow neural networks with biases","authors":"Massimo Fornasier , Timo Klock , Marco Mondelli , Michael Rauchensteiner","doi":"10.1016/j.acha.2025.101749","DOIUrl":"10.1016/j.acha.2025.101749","url":null,"abstract":"<div><div>The identification of the parameters of a neural network from finite samples of input-output pairs is often referred to as the <em>teacher-student model</em>, and this model has represented a popular framework for understanding training and generalization. Even if the problem is NP-complete in the worst case, a rapidly growing literature – after adding suitable distributional assumptions – has established finite sample identification of two-layer networks with a number of neurons <span><math><mi>m</mi><mo>=</mo><mi>O</mi><mo>(</mo><mi>D</mi><mo>)</mo></math></span>, <em>D</em> being the input dimension. For the range <span><math><mi>D</mi><mo><</mo><mi>m</mi><mo><</mo><msup><mrow><mi>D</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> the problem becomes harder, and truly little is known for networks parametrized by biases as well. This paper fills the gap by providing efficient algorithms and rigorous theoretical guarantees of finite sample identification for such wider shallow networks with biases. Our approach is based on a two-step pipeline: first, we recover the direction of the weights, by exploiting second order information; next, we identify the signs by suitable algebraic evaluations, and we recover the biases by empirical risk minimization via gradient descent. Numerical results demonstrate the effectiveness of our approach.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"77 ","pages":"Article 101749"},"PeriodicalIF":2.6,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kadec-type theorems for sampled group orbits","authors":"Ilya Krishtal, Brendan Miller","doi":"10.1016/j.acha.2025.101748","DOIUrl":"10.1016/j.acha.2025.101748","url":null,"abstract":"<div><div>We extend the classical Kadec <span><math><mfrac><mrow><mn>1</mn></mrow><mrow><mn>4</mn></mrow></mfrac></math></span> theorem for systems of exponential functions on an interval to frames and atomic decompositions formed by sampling an orbit of a vector under an isometric group representation.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101748"},"PeriodicalIF":2.6,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142990415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the non-frame property of Gabor systems with Hermite generators and the frame set conjecture","authors":"Andreas Horst , Jakob Lemvig , Allan Erlang Videbæk","doi":"10.1016/j.acha.2025.101747","DOIUrl":"10.1016/j.acha.2025.101747","url":null,"abstract":"<div><div>The frame set conjecture for Hermite functions formulated in <span><span>[13]</span></span> states that the Gabor frame set for these generators is the largest possible, that is, the time-frequency shifts of the Hermite functions associated with sampling rates <em>α</em> and modulation rates <em>β</em> that avoid all known obstructions lead to Gabor frames for <span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>(</mo><mi>R</mi><mo>)</mo></math></span>. By results in <span><span>[24]</span></span>, <span><span>[25]</span></span> and <span><span>[22]</span></span>, it is known that the conjecture is true for the Gaussian, the 0th order Hermite functions, and false for Hermite functions of order <span><math><mn>2</mn><mo>,</mo><mn>3</mn><mo>,</mo><mn>6</mn><mo>,</mo><mn>7</mn><mo>,</mo><mn>10</mn><mo>,</mo><mn>11</mn><mo>,</mo><mo>…</mo></math></span>, respectively. In this paper we disprove the remaining cases <em>except</em> for the 1st order Hermite function.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101747"},"PeriodicalIF":2.6,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142990416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How robust is randomized blind deconvolution via nuclear norm minimization against adversarial noise?","authors":"Julia Kostin , Felix Krahmer , Dominik Stöger","doi":"10.1016/j.acha.2024.101746","DOIUrl":"10.1016/j.acha.2024.101746","url":null,"abstract":"<div><div>In this paper, we study the problem of recovering two unknown signals from their convolution, which is commonly referred to as blind deconvolution. Reformulation of blind deconvolution as a low-rank recovery problem has led to multiple theoretical recovery guarantees in the past decade due to the success of the nuclear norm minimization heuristic. In particular, in the absence of noise, exact recovery has been established for sufficiently incoherent signals contained in lower-dimensional subspaces. However, if the convolution is corrupted by additive bounded noise, the stability of the recovery problem remains much less understood. In particular, existing reconstruction bounds involve large dimension factors and therefore fail to explain the empirical evidence for dimension-independent robustness of nuclear norm minimization. Recently, theoretical evidence has emerged for ill-posed behaviour of low-rank matrix recovery for sufficiently small noise levels. In this work, we develop improved recovery guarantees for blind deconvolution with adversarial noise which exhibit square-root scaling in the noise level. Hence, our results are consistent with existing counterexamples which speak against linear scaling in the noise level as demonstrated for related low-rank matrix recovery problems.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101746"},"PeriodicalIF":2.6,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142968152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal rates for functional linear regression with general regularization","authors":"Naveen Gupta , S. Sivananthan , Bharath K. Sriperumbudur","doi":"10.1016/j.acha.2024.101745","DOIUrl":"10.1016/j.acha.2024.101745","url":null,"abstract":"<div><div>Functional linear regression is one of the fundamental and well-studied methods in functional data analysis. In this work, we investigate the functional linear regression model within the context of reproducing kernel Hilbert space by employing general spectral regularization to approximate the slope function with certain smoothness assumptions. We establish optimal convergence rates for estimation and prediction errors associated with the proposed method under Hölder type source condition, which generalizes and sharpens all the known results in the literature.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101745"},"PeriodicalIF":2.6,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty principles, restriction, Bourgain's Λq theorem, and signal recovery","authors":"A. Iosevich , A. Mayeli","doi":"10.1016/j.acha.2024.101734","DOIUrl":"10.1016/j.acha.2024.101734","url":null,"abstract":"<div><div>Let <em>G</em> be a finite abelian group. Let <span><math><mi>f</mi><mo>:</mo><mi>G</mi><mo>→</mo><mi>C</mi></math></span> be a signal (i.e. function). The classical uncertainty principle asserts that the product of the size of the support of <em>f</em> and its Fourier transform <span><math><mover><mrow><mi>f</mi></mrow><mrow><mo>ˆ</mo></mrow></mover></math></span>, <span><math><mtext>supp</mtext><mo>(</mo><mi>f</mi><mo>)</mo></math></span> and <span><math><mtext>supp</mtext><mo>(</mo><mover><mrow><mi>f</mi></mrow><mrow><mo>ˆ</mo></mrow></mover><mo>)</mo></math></span> respectively, must satisfy the condition:<span><span><span><math><mo>|</mo><mtext>supp</mtext><mo>(</mo><mi>f</mi><mo>)</mo><mo>|</mo><mo>⋅</mo><mo>|</mo><mtext>supp</mtext><mo>(</mo><mover><mrow><mi>f</mi></mrow><mrow><mo>ˆ</mo></mrow></mover><mo>)</mo><mo>|</mo><mo>≥</mo><mo>|</mo><mi>G</mi><mo>|</mo><mo>.</mo></math></span></span></span></div><div>In the first part of this paper, we improve the uncertainty principle for signals with Fourier transform supported on generic sets. This improvement is achieved by employing <em>the restriction theory</em>, including Bourgain celebrate result on <span><math><msub><mrow><mi>Λ</mi></mrow><mrow><mi>q</mi></mrow></msub></math></span>-sets, and <em>the Salem set</em> mechanism from harmonic analysis. Then we investigate some applications of uncertainty principles that were developed in the first part of this paper, to the problem of unique recovery of finite sparse signals in the absence of some frequencies.</div><div>Donoho and Stark (<span><span>[14]</span></span>), and, independently, Matolcsi and Szucs (<span><span>[33]</span></span>) showed that a signal of length <em>N</em> can be recovered exactly, even if some of the frequencies are unobserved, provided that the product of the size of the number of non-zero entries of the signal and the number of missing frequencies is not too large, leveraging the classical uncertainty principle for vectors. Our results broaden the scope for a natural class of signals in higher-dimensional spaces. In the case when the signal is binary, we provide a very simple exact recovery mechanism through the DRA algorithm.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101734"},"PeriodicalIF":2.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tikhonov regularization for Gaussian empirical gain maximization in RKHS is consistent","authors":"Yunlong Feng , Qiang Wu","doi":"10.1016/j.acha.2024.101735","DOIUrl":"10.1016/j.acha.2024.101735","url":null,"abstract":"<div><div>Without imposing light-tailed noise assumptions, we prove that Tikhonov regularization for Gaussian Empirical Gain Maximization (EGM) in a reproducing kernel Hilbert space is consistent and further establish its fast exponential type convergence rates. In the literature, Gaussian EGM was proposed in various contexts to tackle robust estimation problems and has been applied extensively in a great variety of real-world applications. A reproducing kernel Hilbert space is frequently chosen as the hypothesis space, and Tikhonov regularization plays a crucial role in model selection. Although Gaussian EGM has been studied theoretically in a series of papers recently and has been well-understood, theoretical understanding of its Tikhonov regularized variants in RKHS is still limited. Several fundamental challenges remain, especially when light-tailed noise assumptions are absent. To fill the gap and address these challenges, we conduct the present study and make the following contributions. First, under weak moment conditions, we establish a new comparison theorem that enables the investigation of the asymptotic mean calibration properties of regularized Gaussian EGM. Second, under the same weak moment conditions, we show that regularized Gaussian EGM estimators are consistent and further establish their fast exponential-type convergence rates. Our study justifies its feasibility in tackling robust regression problems and explains its robustness from a theoretical viewpoint. Moreover, new technical tools including probabilistic initial upper bounds, confined effective hypothesis spaces, and novel comparison theorems are introduced and developed, which can faciliate the analysis of general regularized empirical gain maximization schemes that fall into the same vein as regularized Gaussian EGM.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101735"},"PeriodicalIF":2.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142823231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Injectivity of ReLU networks: Perspectives from statistical physics","authors":"Antoine Maillard , Afonso S. Bandeira , David Belius , Ivan Dokmanić , Shuta Nakajima","doi":"10.1016/j.acha.2024.101736","DOIUrl":"10.1016/j.acha.2024.101736","url":null,"abstract":"<div><div>When can the input of a ReLU neural network be inferred from its output? In other words, when is the network injective? We consider a single layer, <span><math><mi>x</mi><mo>↦</mo><mrow><mi>ReLU</mi></mrow><mo>(</mo><mi>W</mi><mi>x</mi><mo>)</mo></math></span>, with a random Gaussian <span><math><mi>m</mi><mo>×</mo><mi>n</mi></math></span> matrix <em>W</em>, in a high-dimensional setting where <span><math><mi>n</mi><mo>,</mo><mi>m</mi><mo>→</mo><mo>∞</mo></math></span>. Recent work connects this problem to spherical integral geometry giving rise to a conjectured sharp injectivity threshold for <span><math><mi>α</mi><mo>=</mo><mi>m</mi><mo>/</mo><mi>n</mi></math></span> by studying the expected Euler characteristic of a certain random set. We adopt a different perspective and show that injectivity is equivalent to a property of the ground state of the spherical perceptron, an important spin glass model in statistical physics. By leveraging the (non-rigorous) replica symmetry-breaking theory, we derive analytical equations for the threshold whose solution is at odds with that from the Euler characteristic. Furthermore, we use Gordon's min–max theorem to prove that a replica-symmetric upper bound refutes the Euler characteristic prediction. Along the way we aim to give a tutorial-style introduction to key ideas from statistical physics in an effort to make the exposition accessible to a broad audience. Our analysis establishes a connection between spin glasses and integral geometry but leaves open the problem of explaining the discrepancies.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"76 ","pages":"Article 101736"},"PeriodicalIF":2.6,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142790150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Group projected subspace pursuit for block sparse signal reconstruction: Convergence analysis and applications 1","authors":"Roy Y. He , Haixia Liu , Hao Liu","doi":"10.1016/j.acha.2024.101726","DOIUrl":"10.1016/j.acha.2024.101726","url":null,"abstract":"<div><div>In this paper, we present a convergence analysis of the Group Projected Subspace Pursuit (GPSP) algorithm proposed by He et al. <span><span>[26]</span></span> (Group Projected subspace pursuit for IDENTification of variable coefficient differential equations (GP-IDENT), <em>Journal of Computational Physics</em>, 494, 112526) and extend its application to general tasks of block sparse signal recovery. Given an observation <strong>y</strong> and sampling matrix <strong>A</strong>, we focus on minimizing the approximation error <span><math><msubsup><mrow><mo>‖</mo><mi>A</mi><mi>c</mi><mo>−</mo><mi>y</mi><mo>‖</mo></mrow><mrow><mn>2</mn></mrow><mrow><mn>2</mn></mrow></msubsup></math></span> with respect to the signal <strong>c</strong> with block sparsity constraints. We prove that when the sampling matrix <strong>A</strong> satisfies the Block Restricted Isometry Property (BRIP) with a sufficiently small Block Restricted Isometry Constant (BRIC), GPSP exactly recovers the true block sparse signals. When the observations are noisy, this convergence property of GPSP remains valid if the magnitude of the true signal is sufficiently large. GPSP selects the features by subspace projection criterion (SPC) for candidate inclusion and response magnitude criterion (RMC) for candidate exclusion. We compare these criteria with counterparts of other state-of-the-art greedy algorithms. Our theoretical analysis and numerical ablation studies reveal that SPC is critical to the superior performances of GPSP, and that RMC can enhance the robustness of feature identification when observations contain noises. We test and compare GPSP with other methods in diverse settings, including heterogeneous random block matrices, inexact observations, face recognition, and PDE identification. We find that GPSP outperforms the other algorithms in most cases for various levels of block sparsity and block sizes, justifying its effectiveness for general applications.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"75 ","pages":"Article 101726"},"PeriodicalIF":2.6,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142759319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}