{"title":"ANOVA-boosting for random Fourier features","authors":"Daniel Potts, Laura Weidensager","doi":"10.1016/j.acha.2025.101789","DOIUrl":"10.1016/j.acha.2025.101789","url":null,"abstract":"<div><div>We propose two algorithms for boosting random Fourier feature models for approximating high-dimensional functions. These methods utilize the classical and generalized analysis of variance (ANOVA) decomposition to learn low-order functions, where there are few interactions between the variables. Our algorithms are able to find an index set of important input variables and variable interactions reliably.</div><div>Furthermore, we generalize already existing random Fourier feature models to an ANOVA setting, where terms of different order can be used. Our algorithms have the advantage of being interpretable, meaning that the influence of every input variable is known in the learned model, even for dependent input variables. We provide theoretical as well as numerical results that our algorithms perform well for sensitivity analysis. The ANOVA-boosting step reduces the approximation error of existing methods significantly.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101789"},"PeriodicalIF":2.6,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144313768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New results on sparse representations in unions of orthonormal bases","authors":"Tao Zhang , Gennian Ge","doi":"10.1016/j.acha.2025.101786","DOIUrl":"10.1016/j.acha.2025.101786","url":null,"abstract":"<div><div>The problem of sparse representation has significant applications in signal processing. The spark of a dictionary plays a crucial role in the study of sparse representation. Donoho and Elad initially explored the spark, and they provided a general lower bound. When the dictionary is a union of several orthonormal bases, Gribonval and Nielsen presented an improved lower bound for spark. In this paper, we introduce a new construction of dictionary, achieving the spark bound given by Gribonval and Nielsen. More precisely, let <em>q</em> be a power of 2, we show that for any positive integer <em>t</em>, there exists a dictionary in <span><math><msup><mrow><mi>R</mi></mrow><mrow><msup><mrow><mi>q</mi></mrow><mrow><mn>2</mn><mi>t</mi></mrow></msup></mrow></msup></math></span>, which is a union of <span><math><mi>q</mi><mo>+</mo><mn>1</mn></math></span> orthonormal bases, such that the spark of the dictionary attains Gribonval-Nielsen's bound. Our result extends previously best known result from <span><math><mi>t</mi><mo>=</mo><mn>1</mn><mo>,</mo><mn>2</mn></math></span> to arbitrarily positive integer <em>t</em>, and our construction is technically different from previous ones. 
Their method is more combinatorial, while ours is algebraic, which is more general.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101786"},"PeriodicalIF":2.6,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144262223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalization analysis of an unfolding network for analysis-based compressed sensing","authors":"Vicky Kouni , Yannis Panagakis","doi":"10.1016/j.acha.2025.101787","DOIUrl":"10.1016/j.acha.2025.101787","url":null,"abstract":"<div><div>Unfolding networks have shown promising results in the Compressed Sensing (CS) field. Yet, the investigation of their generalization ability is still in its infancy. In this paper, we perform a generalization analysis of a state-of-the-art ADMM-based unfolding network, which jointly learns a decoder for CS and a sparsifying redundant analysis operator. To this end, we first impose a structural constraint on the learnable sparsifier, which parametrizes the network's hypothesis class. For the latter, we estimate its Rademacher complexity. With this estimate in hand, we deliver generalization error bounds – which scale like the square root of the number of layers – for the examined network. Finally, the validity of our theory is assessed and numerical comparisons to a state-of-the-art unfolding network are made, on synthetic and real-world datasets. Our experimental results demonstrate that our proposed framework complies with our theoretical findings and outperforms the baseline, consistently for all datasets.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101787"},"PeriodicalIF":2.6,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computing the proximal operator of the q-th power of the ℓ1,q-norm for group sparsity","authors":"Rongrong Lin , Shihai Chen , Han Feng , Yulan Liu","doi":"10.1016/j.acha.2025.101788","DOIUrl":"10.1016/j.acha.2025.101788","url":null,"abstract":"<div><div>In this note, we comprehensively characterize the proximal operator of the <em>q</em>-th power of the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn><mo>,</mo><mi>q</mi></mrow></msub></math></span>-norm (denoted by <span><math><msubsup><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn><mo>,</mo><mi>q</mi></mrow><mrow><mi>q</mi></mrow></msubsup></math></span>) with <span><math><mn>0</mn><mo><</mo><mi>q</mi><mo><</mo><mn>1</mn></math></span> by exploiting the well-known proximal operator of <span><math><mo>|</mo><mo>⋅</mo><msup><mrow><mo>|</mo></mrow><mrow><mi>q</mi></mrow></msup></math></span> on the real line. In particular, much more explicit characterizations can be obtained whenever <span><math><mi>q</mi><mo>=</mo><mn>1</mn><mo>/</mo><mn>2</mn></math></span> and <span><math><mi>q</mi><mo>=</mo><mn>2</mn><mo>/</mo><mn>3</mn></math></span> due to the existence of closed-form expressions for the proximal operators of <span><math><mo>|</mo><mo>⋅</mo><msup><mrow><mo>|</mo></mrow><mrow><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup></math></span> and <span><math><mo>|</mo><mo>⋅</mo><msup><mrow><mo>|</mo></mrow><mrow><mn>2</mn><mo>/</mo><mn>3</mn></mrow></msup></math></span>. 
Numerical experiments demonstrate potential advantages of the <span><math><msubsup><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn><mo>,</mo><mi>q</mi></mrow><mrow><mi>q</mi></mrow></msubsup></math></span> regularization in the inter-group and intra-group sparse vector recovery.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101788"},"PeriodicalIF":2.6,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144241941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified stochastic framework for neural network quantization and pruning","authors":"Haoyu Zhang , Rayan Saab","doi":"10.1016/j.acha.2025.101778","DOIUrl":"10.1016/j.acha.2025.101778","url":null,"abstract":"<div><div>Quantization and pruning are two essential techniques for compressing neural networks, yet they are often treated independently, with limited theoretical analysis connecting them. This paper introduces a unified framework for post-training quantization and pruning using stochastic path-following algorithms. Our approach builds on the Stochastic Path Following Quantization (SPFQ) method, extending its applicability to pruning and low-bit quantization, including challenging 1-bit regimes. By incorporating a scaling parameter and generalizing the stochastic operator, the proposed method achieves robust error correction and yields rigorous theoretical error bounds for both quantization and pruning as well as their combination.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101778"},"PeriodicalIF":2.6,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tighter generalization error bound for wide GCN based on loss landscape","authors":"Xianchen Zhou , Kun Hu , Hongxia Wang","doi":"10.1016/j.acha.2025.101777","DOIUrl":"10.1016/j.acha.2025.101777","url":null,"abstract":"<div><div>The generalization capability of Graph Convolutional Networks (GCNs) has been researched recently. The generalization error bound based on algorithmic stability is obtained for various structures of GCN. However, the generalization error bound computed by this method increases rapidly during the iteration since the algorithmic stability exponential depends on the number of iterations, which is not consistent with the performance of GCNs in practice. Based on the fact that the property of loss landscape, such as convex, exp-concave, or Polyak-Lojasiewicz* (PL*) leads to tighter stability and better generalization error bound, this paper focuses on the semi-supervised loss landscape of wide GCN. It shows that a wide GCN has a Hessian matrix with a small norm, which can lead to a positive definite training tangent kernel. Then GCN's loss can satisfy the PL* condition and lead to a tighter uniform stability independent of the iteration compared with previous work. Therefore, the generalization error bound in this paper depends on the graph filter's norm and layers, which is consistent with the experiments' results.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101777"},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144108039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An eigenfunction approach to conversion of the Laplace transform of point masses on the real line to the Fourier domain","authors":"Michael E. Mckenna , Hrushikesh N. Mhaskar , Richard G. Spencer","doi":"10.1016/j.acha.2025.101776","DOIUrl":"10.1016/j.acha.2025.101776","url":null,"abstract":"<div><div>Motivated by applications in magnetic resonance relaxometry, we consider the following problem: given samples of a function <span><math><mi>t</mi><mo>↦</mo><msubsup><mrow><mo>∑</mo></mrow><mrow><mi>k</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>K</mi></mrow></msubsup><msub><mrow><mi>A</mi></mrow><mrow><mi>k</mi></mrow></msub><mi>exp</mi><mo></mo><mo>(</mo><mo>−</mo><mi>t</mi><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>)</mo></math></span>, where <span><math><mi>K</mi><mo>≥</mo><mn>2</mn></math></span> is an integer, <span><math><msub><mrow><mi>A</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>∈</mo><mi>R</mi></math></span>, <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>></mo><mn>0</mn></math></span> for <span><math><mi>k</mi><mo>=</mo><mn>1</mn><mo>,</mo><mo>⋯</mo><mo>,</mo><mi>K</mi></math></span>, determine <em>K</em>, <span><math><msub><mrow><mi>A</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s and <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s. Unlike the case in which the <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s are purely imaginary, this problem is notoriously ill-posed. Our goal is to show that this problem can be transformed into an equivalent one in which the <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s are replaced by <span><math><mi>i</mi><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>. 
We show that this may be accomplished by approximation in terms of Hermite functions, and using the fact that these functions are eigenfunctions of the Fourier transform. We present a preliminary numerical exploration of parameter extraction from this formalism, including the effect of noise. The inherent ill-posedness of the original problem persists in the new domain, as reflected in the numerical results.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101776"},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144222723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framelet message passing","authors":"Xinliang Liu , Bingxin Zhou , Chutian Zhang , Yu Guang Wang","doi":"10.1016/j.acha.2025.101773","DOIUrl":"10.1016/j.acha.2025.101773","url":null,"abstract":"<div><div>Graph neural networks have achieved champions in wide applications. Neural message passing is a typical key module for feature propagation by aggregating neighboring features. In this work, we propose a new message passing based on multiscale framelet transforms, called Framelet Message Passing. Different from traditional spatial methods, it integrates framelet representation of neighbor nodes from multiple hops away in node message update. We also propose a continuous message passing using neural ODE solvers. Both discrete and continuous cases can provably mitigate oversmoothing and achieve superior performance. Numerical experiments on real graph datasets show that the continuous version of the framelet message passing significantly outperforms existing methods when learning heterogeneous graphs and achieves state-of-the-art performance on classic node classification tasks with low computational costs.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101773"},"PeriodicalIF":2.6,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144125322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An oracle gradient regularized Newton method for quadratic measurements regression","authors":"Jun Fan , Jie Sun , Ailing Yan , Shenglong Zhou","doi":"10.1016/j.acha.2025.101775","DOIUrl":"10.1016/j.acha.2025.101775","url":null,"abstract":"<div><div>Recovering an unknown signal from quadratic measurements has gained popularity due to its wide range of applications, including phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, we employ a least squares approach to reconstruct the signal and establish its non-asymptotic statistical properties. Our analysis shows that the estimator perfectly recovers the true signal in the noiseless case, while the error between the estimator and the true signal is bounded by <span><math><mi>O</mi><mo>(</mo><msqrt><mrow><mi>p</mi><mi>log</mi><mo></mo><mo>(</mo><mn>1</mn><mo>+</mo><mn>2</mn><mi>n</mi><mo>)</mo><mo>/</mo><mi>n</mi></mrow></msqrt><mo>)</mo></math></span> in the noisy case, where <em>n</em> is the number of measurements and <em>p</em> is the dimension of the signal. We then develop a two-phase algorithm, gradient regularized Newton method (GRNM), to solve the least squares problem. It is proven that the first phase terminates within finitely many steps, and the sequence generated in the second phase converges to a unique local minimum at a superlinear rate under certain mild conditions. Beyond these deterministic results, GRNM is capable of exactly reconstructing the true signal in the noiseless case and achieving the stated error rate with a high probability in the noisy case. 
Numerical experiments demonstrate that GRNM offers a high level of recovery capability and accuracy as well as fast computational speed.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101775"},"PeriodicalIF":2.6,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143935916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A parameter-free two-bit covariance estimator with improved operator norm error rate","authors":"Junren Chen , Michael K. Ng","doi":"10.1016/j.acha.2025.101774","DOIUrl":"10.1016/j.acha.2025.101774","url":null,"abstract":"<div><div>A covariance matrix estimator using two bits per entry was recently developed by Dirksen et al. (2022) <span><span>[11]</span></span>. The estimator achieves near minimax operator norm rate for general sub-Gaussian distributions, but also suffers from two downsides: theoretically, there is an essential gap on operator norm error between their estimator and sample covariance when the diagonal of the covariance matrix is dominated by only a few entries; practically, its performance heavily relies on the dithering scale, which needs to be tuned according to some unknown parameters. In this work, we propose a new 2-bit covariance matrix estimator that simultaneously addresses both issues. Unlike the sign quantizer associated with uniform dither in Dirksen et al., we adopt a triangular dither prior to a 2-bit quantizer inspired by the multi-bit uniform quantizer. By employing dithering scales varying across entries, our estimator enjoys an improved operator norm error rate that depends on the effective rank of the underlying covariance matrix rather than the ambient dimension, which is optimal up to logarithmic factors. Moreover, our proposed method eliminates the need of <em>any</em> tuning parameter, as the dithering scales are entirely determined by the data. While our estimator requires a pass of all unquantized samples to determine the dithering scales, it can be adapted to the online setting where the samples arise sequentially. 
Experimental results are provided to demonstrate the advantages of our estimators over the existing ones.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101774"},"PeriodicalIF":2.6,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143903641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}