Journal of Complexity — Latest Articles

Convergence of the Gauss-Newton method for convex composite optimization problems under majorant condition on Riemannian manifolds
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 80, Article 101788 · Pub Date: 2023-08-09 · DOI: 10.1016/j.jco.2023.101788
Authors: Qamrul Hasan Ansari, Moin Uddin, Jen-Chih Yao
Abstract: In this paper, we consider convex composite optimization problems on Riemannian manifolds and discuss the semi-local convergence of the Gauss-Newton method with a quasi-regular initial point under the majorant condition. As special cases, we also discuss the convergence of the sequence generated by the Gauss-Newton method under a Lipschitz-type condition or under the $\gamma$-condition.
Citations: 1
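The Riemannian version of the iteration requires retractions and a manifold-specific linearization; as an illustration only, here is a minimal Euclidean Gauss-Newton sketch for the special case $h = \|\cdot\|^2$ (nonlinear least squares), which is the classical iteration the paper generalizes. The example problem and all names are illustrative, not taken from the paper.

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Euclidean Gauss-Newton for min_x ||F(x)||^2.

    F: residual map R^n -> R^m, J: its Jacobian.
    The Riemannian variant studied in the paper replaces the additive
    update below with a retraction along a tangent-space direction.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, A = F(x), J(x)
        # Gauss-Newton step: solve the linearized least-squares problem
        step, *_ = np.linalg.lstsq(A, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: fit y = exp(a*t) at sample points (true a = 1.5)
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.5 * t)
F = lambda a: np.exp(a[0] * t) - y
J = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
print(gauss_newton(F, J, np.array([0.5])))  # ~[1.5]
```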
The minimal radius of Galerkin information for the problem of numerical differentiation
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 81, Article 101787 · Pub Date: 2023-08-05 · DOI: 10.1016/j.jco.2023.101787
Authors: S.G. Solodky, S.A. Stasyuk
Abstract: The problem of numerical differentiation for periodic functions with finite smoothness is investigated. For multivariate functions, different variants of the truncation method are constructed and their approximation properties are obtained. Based on these results, sharp bounds (in the power scale) on the minimal radius of Galerkin information for the problem under study are found.
Citations: 0
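As an illustration of the truncation idea only (not the paper's multivariate construction), here is a minimal sketch of spectral differentiation of a $2\pi$-periodic function with the Fourier series truncated at a cutoff frequency; the cutoff plays the role of the regularization parameter.

```python
import numpy as np

def truncated_fourier_derivative(f_samples, N_trunc):
    """Differentiate a 2*pi-periodic function from equispaced samples
    by truncating its Fourier series at |k| <= N_trunc -- a simple
    instance of the truncation-method idea: discard the high,
    noise-dominated frequencies before differentiating."""
    n = len(f_samples)
    c = np.fft.fft(f_samples)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies
    c[np.abs(k) > N_trunc] = 0.0      # truncation = regularization
    return np.real(np.fft.ifft(1j * k * c))

# Example: f(x) = sin(x) with small noise; f'(x) = cos(x)
rng = np.random.default_rng(0)
x = 2 * np.pi * np.arange(256) / 256
f = np.sin(x) + 1e-3 * rng.standard_normal(256)
df = truncated_fourier_derivative(f, N_trunc=8)
print(np.max(np.abs(df - np.cos(x))))  # small uniform error
```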
Sampling numbers of smoothness classes via ℓ1-minimization
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 79, Article 101786 · Pub Date: 2023-08-05 · DOI: 10.1016/j.jco.2023.101786
Authors: Thomas Jahn, Tino Ullrich, Felix Voigtlaender
Abstract: Using techniques developed recently in the field of compressed sensing, we prove new upper bounds for general (nonlinear) sampling numbers of (quasi-)Banach smoothness spaces in $L^2$. In particular, we show that in relevant cases such as mixed and isotropic weighted Wiener classes or Sobolev spaces with mixed smoothness, sampling numbers in $L^2$ can be upper bounded by best $n$-term trigonometric widths in $L^\infty$. We describe a recovery procedure from $m$ function values based on $\ell^1$-minimization (basis pursuit denoising). With this method, a significant gain in the rate of convergence compared to recently developed linear recovery methods is achieved. In this deterministic worst-case setting we see an additional speed-up of $m^{-1/2}$ (up to log factors) compared to linear methods in the case of weighted Wiener spaces. For their quasi-Banach counterparts even arbitrary polynomial speed-up is possible. Surprisingly, our approach allows us to recover mixed-smoothness Sobolev functions belonging to $S^r_p W(\mathbb{T}^d)$ on the $d$-torus with a logarithmically better rate of convergence than any linear method can achieve when $1 < p < 2$ and $d$ is large. This effect is not present for isotropic Sobolev spaces.
Citations: 0
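A minimal sketch of the $\ell^1$ workhorse behind basis pursuit denoising, here in its LASSO form solved by iterative soft thresholding (ISTA) on a generic random matrix; the paper's procedure applies this kind of minimization to function values in a trigonometric dictionary, which is not reproduced here.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iter=500):
    """Iterative soft-thresholding for the LASSO form of basis
    pursuit denoising: min_c 0.5*||A c - y||_2^2 + lam*||c||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = ||A||_2^2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = c - step * A.T @ (A @ c - y)  # gradient step on the quadratic
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return c

# Example: recover a 5-sparse coefficient vector from m = 40 measurements
rng = np.random.default_rng(1)
N, m = 200, 40
A = rng.standard_normal((m, N)) / np.sqrt(m)
c_true = np.zeros(N)
c_true[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
y = A @ c_true
c_hat = ista(A, y, lam=1e-3, n_iter=2000)
print(np.linalg.norm(c_hat - c_true))  # small
```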
Random-prime–fixed-vector randomised lattice-based algorithm for high-dimensional integration
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 79, Article 101785 · Pub Date: 2023-08-02 · DOI: 10.1016/j.jco.2023.101785
Authors: Frances Y. Kuo, Dirk Nuyens, Laurence Wilkes
Abstract: We show that a very simple randomised algorithm for numerical integration can produce a near optimal rate of convergence for integrals of functions in the $d$-dimensional weighted Korobov space. This algorithm uses a lattice rule with a fixed generating vector, and the only random element is the choice of the number of function evaluations. For a given computational budget $n$ of a maximum allowed number of function evaluations, we uniformly pick a prime $p$ in the range $n/2 < p \le n$. We show error bounds for the randomised error, which is defined as the worst case expected error, of the form $O(n^{-\alpha-1/2+\delta})$, with $\delta > 0$, for a Korobov space with smoothness $\alpha > 1/2$ and general weights. The implied constant in the bound is dimension-independent given the usual conditions on the weights. We present an algorithm that can construct suitable generating vectors offline ahead of time at cost $O(d n^4 / \ln n)$ when the weight parameters defining the Korobov spaces are so-called product weights. For this case, numerical experiments confirm our theory that the new randomised algorithm achieves the near optimal rate of the randomised error.
Citations: 0
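A minimal sketch of the randomised rule as described: a fixed generating vector, with the prime number of points drawn uniformly from $(n/2, n]$. The vector z below is a placeholder for illustration, not one produced by the paper's offline construction.

```python
import numpy as np

def random_prime_lattice_rule(f, z, n, rng):
    """Rank-1 lattice rule with a random prime number of points:
    draw a prime p uniformly from (n/2, n], then average f over the
    lattice points {k*z/p mod 1}, k = 0..p-1. The generating vector
    z is fixed; the prime p is the only random element."""
    def is_prime(q):
        return q > 1 and all(q % i for i in range(2, int(q ** 0.5) + 1))
    primes = [q for q in range(n // 2 + 1, n + 1) if is_prime(q)]
    p = int(rng.choice(primes))
    k = np.arange(p).reshape(-1, 1)                     # p x 1
    pts = (k * np.asarray(z).reshape(1, -1) / p) % 1.0  # p x d lattice points
    return np.mean(f(pts)), p

# Example: d = 3, with a hypothetical (not optimized) generating vector
rng = np.random.default_rng(2)
f = lambda x: np.prod(1 + (x - 0.5), axis=1)  # exact integral over [0,1]^3 is 1
Q, p = random_prime_lattice_rule(f, z=[1, 182667, 469891], n=10**4, rng=rng)
print(p, Q)  # Q close to 1
```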
Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 77, Article 101746 · Pub Date: 2023-08-01 · DOI: 10.1016/j.jco.2023.101746
Authors: Philipp Grohs, Shokhrukh Ibragimov, Arnulf Jentzen, Sarah Koppensteiner
Abstract: Artificial neural networks (ANNs) have become a very powerful tool in the approximation of high-dimensional functions. In particular, deep ANNs, consisting of a large number of hidden layers, have been used very successfully in a series of practically relevant computational problems involving high-dimensional input data, ranging from classification tasks in supervised learning to optimal decision problems in reinforcement learning. There are also a number of mathematical results in the scientific literature which study the approximation capacities of ANNs in the context of high-dimensional target functions. In particular, a series of such results shows that sufficiently deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes, in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension $d \in \mathbb{N}$ of the target functions under consideration. In the proofs of several such high-dimensional approximation results it is crucial that the involved ANNs are sufficiently deep and consist of a sufficiently large number of hidden layers, growing with the dimension of the considered target functions. The topic of this work is to look in more detail at the depth of the ANNs involved in the approximation of high-dimensional target functions. In particular, the main result of this work proves that there exists a concretely specified sequence of functions which can be approximated without the curse of dimensionality by sufficiently deep ANNs, but which cannot be approximated without the curse of dimensionality if the involved ANNs are shallow or not deep enough.
Citations: 3
On Huber's contaminated model
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 77, Article 101745 · Pub Date: 2023-08-01 · DOI: 10.1016/j.jco.2023.101745
Authors: Weiyan Mu, Shifeng Xiong
Abstract: Huber's contaminated model is a basic model for data with outliers. This paper aims at addressing several fundamental problems about this model. We first study its identifiability properties. Several theorems are presented to determine whether the model is identifiable in various situations. Based on these results, we discuss the problem of estimating the parameters from observations drawn from Huber's contaminated model. A definition of estimation consistency is introduced to handle the general case where the model may be unidentifiable. This consistency is a strong robustness property. After showing that existing estimators cannot be consistent in this sense, we propose a new estimator that possesses the consistency property under mild conditions. Its adaptive version, which can simultaneously possess this consistency property and optimal asymptotic efficiency, is also provided. Numerical examples show that our estimators have better overall performance than existing estimators no matter how many outliers are in the data.
Citations: 0
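For illustration, a minimal simulation of Huber's contaminated model $(1-\varepsilon)F_\theta + \varepsilon G$, showing why a non-robust estimator fails; the median used here is a standard robust baseline, not the estimator proposed in the paper.

```python
import numpy as np

# Huber's contaminated model: each observation is drawn from the mixture
#   (1 - eps) * F_theta + eps * G,
# where F_theta is the parametric model (here N(theta, 1)) and G is an
# arbitrary contamination distribution (here a far-away Gaussian).
rng = np.random.default_rng(3)
n, eps, theta = 1000, 0.1, 2.0
outlier = rng.random(n) < eps
x = np.where(outlier,
             rng.normal(50.0, 1.0, n),    # contamination G
             rng.normal(theta, 1.0, n))   # model F_theta

# The sample mean is dragged far from theta by the 10% outliers;
# the median stays near theta = 2.
print(np.mean(x), np.median(x))
```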
A continuous characterization of PSPACE using polynomial ordinary differential equations
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 77, Article 101755 · Pub Date: 2023-08-01 · DOI: 10.1016/j.jco.2023.101755
Authors: Olivier Bournez, Riccardo Gozzi, Daniel S. Graça, Amaury Pouly
Abstract: In this paper we provide a characterization of the complexity class PSPACE by using a purely continuous model defined with polynomial ordinary differential equations.
Citations: 2
Dmitriy Bilyk and Feng Dai are the winners of the 2023 Joseph F. Traub Prize for Achievement in Information-Based Complexity
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 77, Article 101756 · Pub Date: 2023-08-01 · DOI: 10.1016/j.jco.2023.101756
Author: Erich Novak
Citations: 0
Rates of approximation by ReLU shallow neural networks
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 79, Article 101784 · Pub Date: 2023-07-31 · DOI: 10.1016/j.jco.2023.101784
Authors: Tong Mao, Ding-Xuan Zhou
Abstract: Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Hölder spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms. Although the topic has been well investigated in the setting of deep neural networks with many layers of hidden neurons, it is still open for shallow networks having only one hidden layer. In this paper, we provide rates of uniform approximation by these networks. We show that ReLU shallow neural networks with $m$ hidden neurons can uniformly approximate functions from the Hölder space $W^r_\infty([-1,1]^d)$ with rates $O\big((\log m)^{\frac{1}{2}+d}\, m^{-\frac{r}{d}\cdot\frac{d+2}{d+4}}\big)$ when $r < d/2 + 2$. Such rates are very close to the optimal one $O(m^{-\frac{r}{d}})$, in the sense that $\frac{d+2}{d+4}$ is close to 1 when the dimension $d$ is large.
Citations: 0
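A minimal sketch of a one-hidden-layer ReLU network fit by random features and least squares, purely to make the object concrete; the rates in the paper come from explicit constructions, not from this fitting procedure.

```python
import numpy as np

def fit_shallow_relu(x, y, m, rng):
    """One-hidden-layer ReLU network sum_i a_i * relu(w_i*x + b_i):
    random inner weights, outer weights fit by least squares
    (a random-feature sketch, not the paper's construction)."""
    w = rng.standard_normal(m)
    b = rng.uniform(-1.0, 1.0, m)
    phi = np.maximum(np.outer(x, w) + b, 0.0)   # n x m ReLU features
    a, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return lambda t: np.maximum(np.outer(t, w) + b, 0.0) @ a

# Example: approximate f(x) = |x| on [-1, 1] with m = 50 hidden neurons
rng = np.random.default_rng(4)
x = np.linspace(-1.0, 1.0, 400)
net = fit_shallow_relu(x, np.abs(x), m=50, rng=rng)
print(np.max(np.abs(net(x) - np.abs(x))))  # uniform error on the grid
```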
Approximating smooth and sparse functions by deep neural networks: Optimal approximation rates and saturation
IF 1.7 · CAS Q2 (Mathematics)
Journal of Complexity, Volume 79, Article 101783 · Pub Date: 2023-07-27 · DOI: 10.1016/j.jco.2023.101783
Author: Xia Liu
Abstract: Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers using a sigmoidal activation function to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets with controllable magnitude of free parameters can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures the approximation rate stops improving for functions of very high smoothness.
Citations: 3