{"title":"Near-optimal deep neural network approximation for Korobov functions with respect to Lp and H1 norms","authors":"","doi":"10.1016/j.neunet.2024.106702","DOIUrl":null,"url":null,"abstract":"<div><p>This paper derives the optimal rate of approximation for Korobov functions with deep neural networks in the high dimensional hypercube with respect to <span><math><msup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msup></math></span>-norms and <span><math><msup><mrow><mi>H</mi></mrow><mrow><mn>1</mn></mrow></msup></math></span>-norm. Our approximation bounds are non-asymptotic in both the width and depth of the networks. The obtained approximation rates demonstrate a remarkable <em>super-convergence</em> feature, improving the existing convergence rates of neural networks that are continuous function approximators. Finally, using a VC-dimension argument, we show that the established rates are near-optimal.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024006269","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
This paper derives the optimal rate of approximation for Korobov functions by deep neural networks on the high-dimensional hypercube with respect to $L^p$-norms and the $H^1$-norm. Our approximation bounds are non-asymptotic in both the width and the depth of the networks. The obtained approximation rates exhibit a remarkable super-convergence feature, improving on the existing convergence rates of neural networks viewed as continuous function approximators. Finally, using a VC-dimension argument, we show that the established rates are near-optimal.
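For context, the following is a sketch of the standard definitions behind these statements. The order-2 Korobov space with zero boundary values given below is the one commonly used in this line of work on sparse-grid and neural-network approximation; the paper's precise smoothness order and boundary conditions are an assumption here, not quoted from the abstract:

\[
X^{2,p}\big([0,1]^d\big)
  = \Big\{ f \in L^p([0,1]^d) \;:\; f|_{\partial [0,1]^d} = 0,\;
      D^{\mathbf{k}} f \in L^p([0,1]^d) \ \text{for all}\ \mathbf{k} \in \mathbb{N}_0^d,\ \|\mathbf{k}\|_{\infty} \le 2 \Big\},
\]

where $D^{\mathbf{k}} f = \partial^{k_1 + \cdots + k_d} f / \partial x_1^{k_1} \cdots \partial x_d^{k_d}$ denotes the mixed partial derivative. The approximation error is then measured in the usual norms:

\[
\|f\|_{L^p}^p = \int_{[0,1]^d} |f(x)|^p \, dx,
\qquad
\|f\|_{H^1}^2 = \|f\|_{L^2}^2 + \sum_{i=1}^{d} \Big\| \frac{\partial f}{\partial x_i} \Big\|_{L^2}^2 .
\]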
About the journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.