{"title":"On the approximation of vector-valued functions by volume sampling","authors":"Daniel Kressner , Tingting Ni , André Uschmajew","doi":"10.1016/j.jco.2024.101887","DOIUrl":"10.1016/j.jco.2024.101887","url":null,"abstract":"<div><p>Given a Hilbert space <span><math><mi>H</mi></math></span> and a finite measure space Ω, the approximation of a vector-valued function <span><math><mi>f</mi><mo>:</mo><mi>Ω</mi><mo>→</mo><mi>H</mi></math></span> by a <em>k</em>-dimensional subspace <span><math><mi>U</mi><mo>⊂</mo><mi>H</mi></math></span> plays an important role in dimension reduction techniques, such as reduced basis methods for solving parameter-dependent partial differential equations. For functions in the Lebesgue–Bochner space <span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>(</mo><mi>Ω</mi><mo>;</mo><mi>H</mi><mo>)</mo></math></span>, the best possible subspace approximation error <span><math><msubsup><mrow><mi>d</mi></mrow><mrow><mi>k</mi></mrow><mrow><mo>(</mo><mn>2</mn><mo>)</mo></mrow></msubsup></math></span> is characterized by the singular values of <em>f</em>. However, for practical reasons, <span><math><mi>U</mi></math></span> is often restricted to be spanned by point samples of <em>f</em>. We show that this restriction only has a mild impact on the attainable error; there always exist <em>k</em> samples such that the resulting error is not larger than <span><math><msqrt><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msqrt><mo>⋅</mo><msubsup><mrow><mi>d</mi></mrow><mrow><mi>k</mi></mrow><mrow><mo>(</mo><mn>2</mn><mo>)</mo></mrow></msubsup></math></span>. Our work extends existing results by Binev et al. (2011) <span><span>[3]</span></span> on approximation in supremum norm and by Deshpande et al. (2006) <span><span>[8]</span></span> on column subset selection for matrices.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000645/pdfft?md5=810287a810b23405b1bc8161d82ba70e&pid=1-s2.0-S0885064X24000645-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141935983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High probability bounds on AdaGrad for constrained weakly convex optimization","authors":"Yusu Hong , Junhong Lin","doi":"10.1016/j.jco.2024.101889","DOIUrl":"10.1016/j.jco.2024.101889","url":null,"abstract":"<div><p>In this paper, we study the high probability convergence of AdaGrad-Norm for constrained, non-smooth, weakly convex optimization with bounded noise and sub-Gaussian noise cases. We also investigate a more general accelerated gradient descent (AGD) template (Ghadimi and Lan, 2016) encompassing the AdaGrad-Norm, the Nesterov's accelerated gradient descent, and the RSAG (Ghadimi and Lan, 2016) with different parameter choices. We provide a high probability convergence rate <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><mn>1</mn><mo>/</mo><msqrt><mrow><mi>T</mi></mrow></msqrt><mo>)</mo></math></span> without knowing the information of the weak convexity parameter and the gradient bound to tune the step-sizes.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000669/pdfft?md5=7c5c4999e38fd8c865761fe3213f35cf&pid=1-s2.0-S0885064X24000669-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141935984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No existence of a linear algorithm for the one-dimensional Fourier phase retrieval","authors":"Meng Huang , Zhiqiang Xu","doi":"10.1016/j.jco.2024.101886","DOIUrl":"10.1016/j.jco.2024.101886","url":null,"abstract":"<div><p>Fourier phase retrieval, which aims to reconstruct a signal from its Fourier magnitude, is of fundamental importance in fields of engineering and science. In this paper, we provide a theoretical understanding of algorithms for the one-dimensional Fourier phase retrieval problem. Specifically, we demonstrate that if an algorithm exists which can reconstruct an arbitrary signal <span><math><mi>x</mi><mo>∈</mo><msup><mrow><mi>C</mi></mrow><mrow><mi>N</mi></mrow></msup></math></span> in <span><math><mtext>Poly</mtext><mo>(</mo><mi>N</mi><mo>)</mo><mi>log</mi><mo></mo><mo>(</mo><mn>1</mn><mo>/</mo><mi>ϵ</mi><mo>)</mo></math></span> time to reach <em>ϵ</em>-precision from its magnitude of discrete Fourier transform and its initial value <span><math><mi>x</mi><mo>(</mo><mn>0</mn><mo>)</mo></math></span>, then <span><math><mi>P</mi><mo>=</mo><mrow><mi>NP</mi></mrow></math></span>. This partially elucidates the phenomenon that, despite the fact that almost all signals are uniquely determined by their Fourier magnitude and the absolute value of their initial value <span><math><mo>|</mo><mi>x</mi><mo>(</mo><mn>0</mn><mo>)</mo><mo>|</mo></math></span>, no algorithm with theoretical guarantees has been proposed in the last few decades. Our proofs employ the result in computational complexity theory that the Product Partition problem is NP-complete in the strong sense.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000633/pdfft?md5=306cc05c455de6efb9f908455c6f3128&pid=1-s2.0-S0885064X24000633-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141637397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpolation by decomposable univariate polynomials","authors":"Joachim von zur Gathen , Guillermo Matera","doi":"10.1016/j.jco.2024.101885","DOIUrl":"10.1016/j.jco.2024.101885","url":null,"abstract":"<div><p>The usual univariate interpolation problem of finding a monic polynomial <em>f</em> of degree <em>n</em> that interpolates <em>n</em> given values is well understood. This paper studies a variant where <em>f</em> is required to be composite, say, a composition of two polynomials of degrees <em>d</em> and <em>e</em>, respectively, with <span><math><mi>d</mi><mi>e</mi><mo>=</mo><mi>n</mi></math></span>, and with <span><math><mi>d</mi><mo>+</mo><mi>e</mi><mo>−</mo><mn>1</mn></math></span> given values. Some special cases are easy to solve, and for the general case, we construct a homotopy between it and a special case. We compute a <em>geometric solution</em> of the algebraic curve presenting this homotopy, and this also provides an answer to the interpolation task. The computing time is polynomial in the geometric data, like the degree, of this curve. A consequence is that for almost all inputs, a decomposable interpolation polynomial exists.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sharp lower bounds on the manifold widths of Sobolev and Besov spaces","authors":"Jonathan W. Siegel","doi":"10.1016/j.jco.2024.101884","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101884","url":null,"abstract":"<div><p>We study the manifold <em>n</em>-widths of Sobolev and Besov spaces with error measured in the <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span>-norm. The manifold widths measure how efficiently these spaces can be approximated by continuous non-linear parametric methods. Existing upper and lower bounds only match when the smoothness index <em>q</em> satisfies <span><math><mi>q</mi><mo>≤</mo><mi>p</mi></math></span> or <span><math><mn>1</mn><mo>≤</mo><mi>p</mi><mo>≤</mo><mn>2</mn></math></span>. We close this gap, obtaining sharp bounds for all <span><math><mn>1</mn><mo>≤</mo><mi>p</mi><mo>,</mo><mi>q</mi><mo>≤</mo><mo>∞</mo></math></span> for which a compact embedding holds. In the process, we determine the exact value of the manifold widths of finite dimensional <span><math><msubsup><mrow><mi>ℓ</mi></mrow><mrow><mi>q</mi></mrow><mrow><mi>M</mi></mrow></msubsup></math></span>-balls in the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span>-norm when <span><math><mi>p</mi><mo>≤</mo><mi>q</mi></math></span>. Although this result is not new, we provide a new proof and apply it to lower bounding the manifold widths of Sobolev and Besov spaces. Our results show that the Bernstein widths, which are typically used to lower bound the manifold widths, decay asymptotically faster than the manifold widths in many cases.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141485089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimal dispersion on the cube and the torus","authors":"A. Arman , A.E. Litvak","doi":"10.1016/j.jco.2024.101883","DOIUrl":"10.1016/j.jco.2024.101883","url":null,"abstract":"<div><p>We improve some upper bounds for minimal dispersion on the cube and torus. Our new ingredient is an improvement of a probabilistic lemma used to obtain upper bounds for dispersion in several previous works. Our new lemma combines a random and non-random choice of points in the cube. This leads to better upper bounds for the minimal dispersion.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000608/pdfft?md5=8545345a37c7c8d8bd458b82060fc777&pid=1-s2.0-S0885064X24000608-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141408812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Huber trace regression with low-rank matrix parameter via nonconvex regularization","authors":"Xiangyong Tan , Ling Peng , Heng Lian , Xiaohui Liu","doi":"10.1016/j.jco.2024.101871","DOIUrl":"10.1016/j.jco.2024.101871","url":null,"abstract":"<div><p>In this paper, we consider the adaptive Huber trace regression model with matrix covariates. A non-convex penalty function is employed to account for the low-rank structure of the unknown parameter. Under some mild conditions, we establish an upper bound for the statistical rate of convergence of the regularized matrix estimator. Theoretically, we can deal with heavy-tailed distributions with bounded <span><math><mo>(</mo><mn>1</mn><mo>+</mo><mi>δ</mi><mo>)</mo></math></span>-th moment for any <span><math><mi>δ</mi><mo>></mo><mn>0</mn></math></span>. Furthermore, we derive the effect of the adaptive parameter on the final estimator. Some simulations, as well as a real data example, are designed to show the finite sample performance of the proposed method.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141405892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kinetic Langevin MCMC sampling without gradient Lipschitz continuity - the strongly convex case","authors":"Tim Johnston , Iosif Lytras , Sotirios Sabanis","doi":"10.1016/j.jco.2024.101873","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101873","url":null,"abstract":"<div><p>In this article we consider sampling from log concave distributions in Hamiltonian setting, without assuming that the objective gradient is globally Lipschitz. We propose two algorithms based on monotone polygonal (tamed) Euler schemes, to sample from a target measure, and provide non-asymptotic 2-Wasserstein distance bounds between the law of the process of each algorithm and the target measure. Finally, we apply these results to bound the excess risk optimization error of the associated optimization problem.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000505/pdfft?md5=a3d2ab8e2d24a32d60460bf5751fc280&pid=1-s2.0-S0885064X24000505-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141423327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Randomized complexity of mean computation and the adaption problem","authors":"Stefan Heinrich","doi":"10.1016/j.jco.2024.101872","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101872","url":null,"abstract":"<div><p>Recently the adaption problem of Information-Based Complexity (IBC) for linear problems in the randomized setting was solved in Heinrich (2024) <span>[8]</span>. Several papers treating further aspects of this problem followed. However, all examples obtained so far were vector-valued. In this paper we settle the scalar-valued case. We study the complexity of mean computation in finite dimensional sequence spaces with mixed <span><math><msubsup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow><mrow><mi>N</mi></mrow></msubsup></math></span> norms. We determine the <em>n</em>-th minimal errors in the randomized adaptive and non-adaptive settings. It turns out that among the problems considered there are examples where adaptive and non-adaptive <em>n</em>-th minimal errors deviate by a power of <em>n</em>. The gap can be (up to log factors) of the order <span><math><msup><mrow><mi>n</mi></mrow><mrow><mn>1</mn><mo>/</mo><mn>4</mn></mrow></msup></math></span>. We also show how to turn such results into infinite dimensional examples with suitable deviation for all <em>n</em> simultaneously.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141428808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the complexity of strong approximation of stochastic differential equations with a non-Lipschitz drift coefficient","authors":"Thomas Müller-Gronbach , Larisa Yaroslavtseva","doi":"10.1016/j.jco.2024.101870","DOIUrl":"https://doi.org/10.1016/j.jco.2024.101870","url":null,"abstract":"<div><p>We survey recent developments in the field of complexity of pathwise approximation in <em>p</em>-th mean of the solution of a stochastic differential equation at the final time based on finitely many evaluations of the driving Brownian motion. First, we briefly review the case of equations with globally Lipschitz continuous coefficients, for which an error rate of at least 1/2 in terms of the number of evaluations of the driving Brownian motion is always guaranteed by using the equidistant Euler-Maruyama scheme. Then we illustrate that giving up the global Lipschitz continuity of the coefficients may lead to a non-polynomial decay of the error for the Euler-Maruyama scheme or even to an arbitrary slow decay of the smallest possible error that can be achieved on the basis of finitely many evaluations of the driving Brownian motion. Finally, we turn to recent positive results for equations with a drift coefficient that is not globally Lipschitz continuous. Here we focus on scalar equations with a Lipschitz continuous diffusion coefficient and a drift coefficient that satisfies piecewise smoothness assumptions or has fractional Sobolev regularity and we present corresponding complexity results.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885064X24000475/pdfft?md5=1abf95a86603ccdc1b342109b28265f5&pid=1-s2.0-S0885064X24000475-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}