Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality

IF 1.8 | CAS Region 2 (Mathematics) | Q1 MATHEMATICS
Philipp Grohs, Shokhrukh Ibragimov, Arnulf Jentzen, Sarah Koppensteiner
{"title":"人工神经网络近似的下界:浅层神经网络无法克服维数诅咒的证明","authors":"Philipp Grohs ,&nbsp;Shokhrukh Ibragimov ,&nbsp;Arnulf Jentzen ,&nbsp;Sarah Koppensteiner","doi":"10.1016/j.jco.2023.101746","DOIUrl":null,"url":null,"abstract":"<div><p><span>Artificial neural networks<span> (ANNs) have become a very powerful tool in the approximation of high-dimensional functions. Especially, deep ANNs, consisting of a large number of hidden layers, have been very successfully used in a series of practical relevant computational problems involving high-dimensional input data ranging from classification tasks in supervised learning to optimal decision problems in reinforcement learning. There are also a number of mathematical results in the scientific literature which study the approximation capacities of ANNs in the context of high-dimensional target functions. In particular, there are a series of mathematical results in the scientific literature which show that sufficiently deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension </span></span><span><math><mi>d</mi><mo>∈</mo><mi>N</mi></math></span> of the target functions under considerations. In the proofs of several of such high-dimensional approximation results it is crucial that the involved ANNs are sufficiently deep and consist a sufficiently large number of hidden layers which grows in the dimension of the considered target functions. It is the topic of this work to look a bit more detailed to the deepness of the involved ANNs in the approximation of high-dimensional target functions. In particular, the main result of this work proves that there exists a concretely specified sequence of functions which can be approximated without the curse of dimensionality by sufficiently deep ANNs but which cannot be approximated without the curse of dimensionality if the involved ANNs are shallow or not deep enough.</p></div>","PeriodicalId":50227,"journal":{"name":"Journal of Complexity","volume":"77 ","pages":"Article 101746"},"PeriodicalIF":1.8000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality\",\"authors\":\"Philipp Grohs ,&nbsp;Shokhrukh Ibragimov ,&nbsp;Arnulf Jentzen ,&nbsp;Sarah Koppensteiner\",\"doi\":\"10.1016/j.jco.2023.101746\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span>Artificial neural networks<span> (ANNs) have become a very powerful tool in the approximation of high-dimensional functions. Especially, deep ANNs, consisting of a large number of hidden layers, have been very successfully used in a series of practical relevant computational problems involving high-dimensional input data ranging from classification tasks in supervised learning to optimal decision problems in reinforcement learning. There are also a number of mathematical results in the scientific literature which study the approximation capacities of ANNs in the context of high-dimensional target functions. 
In particular, there are a series of mathematical results in the scientific literature which show that sufficiently deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension </span></span><span><math><mi>d</mi><mo>∈</mo><mi>N</mi></math></span> of the target functions under considerations. In the proofs of several of such high-dimensional approximation results it is crucial that the involved ANNs are sufficiently deep and consist a sufficiently large number of hidden layers which grows in the dimension of the considered target functions. It is the topic of this work to look a bit more detailed to the deepness of the involved ANNs in the approximation of high-dimensional target functions. In particular, the main result of this work proves that there exists a concretely specified sequence of functions which can be approximated without the curse of dimensionality by sufficiently deep ANNs but which cannot be approximated without the curse of dimensionality if the involved ANNs are shallow or not deep enough.</p></div>\",\"PeriodicalId\":50227,\"journal\":{\"name\":\"Journal of Complexity\",\"volume\":\"77 \",\"pages\":\"Article 101746\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Complexity\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0885064X23000158\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Complexity","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885064X23000158","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS","Score":null,"Total":0}
Citations: 3

Abstract


Artificial neural networks (ANNs) have become a very powerful tool for the approximation of high-dimensional functions. In particular, deep ANNs, consisting of a large number of hidden layers, have been used very successfully in a range of practically relevant computational problems involving high-dimensional input data, from classification tasks in supervised learning to optimal decision problems in reinforcement learning. There are also a number of mathematical results in the scientific literature which study the approximation capacities of ANNs for high-dimensional target functions. In particular, a series of mathematical results show that sufficiently deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes, in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension d ∈ ℕ of the target functions under consideration. In the proofs of several such high-dimensional approximation results, it is crucial that the involved ANNs are sufficiently deep and consist of a sufficiently large number of hidden layers, a number which grows with the dimension of the considered target functions. The topic of this work is to examine in more detail the depth of the ANNs involved in the approximation of high-dimensional target functions. In particular, the main result of this work proves that there exists a concretely specified sequence of functions which can be approximated without the curse of dimensionality by sufficiently deep ANNs, but which cannot be approximated without the curse of dimensionality if the involved ANNs are shallow or not deep enough.
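To make the central notion precise, the following LaTeX sketch records the formalization of "overcoming the curse of dimensionality" that is standard in this line of work; the symbols c, κ, Φ_{d,ε}, 𝒫, and ℛ are generic notation introduced here for illustration and are not taken from the paper itself.

% Standard formalization (sketch): a family of functions f_d : [0,1]^d -> R,
% indexed by d in N, is approximable by ANNs *without* the curse of
% dimensionality if there exist constants c, kappa in (0, infinity) such that
% for every d in N and every accuracy eps in (0,1] there is an ANN Phi_{d,eps} with
\[
  \mathcal{P}\bigl(\Phi_{d,\varepsilon}\bigr) \;\le\; c\, d^{\kappa}\, \varepsilon^{-\kappa}
  \qquad\text{and}\qquad
  \sup_{x \in [0,1]^{d}} \Bigl| f_{d}(x) - \bigl(\mathcal{R}(\Phi_{d,\varepsilon})\bigr)(x) \Bigr| \;\le\; \varepsilon .
\]
% Here P(Phi) counts the parameters (weights and biases) of Phi and R(Phi)
% denotes the function realized by Phi. Conversely, the curse of dimensionality
% is present for a restricted class of architectures (e.g., ANNs whose depth is
% fixed or grows too slowly) if every family of networks from that class meeting
% the accuracy requirement has a parameter count growing faster than any
% polynomial in d.

In this terminology, the main result exhibits a single sequence of functions for which the displayed polynomial bound is achievable when the depth is allowed to grow, while ANNs that are shallow or not deep enough necessarily incur the curse of dimensionality.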

Source journal
Journal of Complexity (Engineering & Technology: Computer Science, Theory & Methods)
CiteScore: 3.10
Self-citation rate: 17.60%
Articles per year: 57
Review time: >12 weeks
Aims and scope: The multidisciplinary Journal of Complexity publishes original research papers that contain substantial mathematical results on complexity as broadly conceived. Outstanding review papers will also be published. In the area of computational complexity, the focus is on complexity over the reals, with the emphasis on lower bounds and optimal algorithms. The Journal of Complexity also publishes articles that provide major new algorithms or make important progress on upper bounds. Other models of computation, such as the Turing machine model, are also of interest. Computational complexity results in a wide variety of areas are solicited. Areas include:
• Approximation theory
• Biomedical computing
• Compressed computing and sensing
• Computational finance
• Computational number theory
• Computational stochastics
• Control theory
• Cryptography
• Design of experiments
• Differential equations
• Discrete problems
• Distributed and parallel computation
• High and infinite-dimensional problems
• Information-based complexity
• Inverse and ill-posed problems
• Machine learning
• Markov chain Monte Carlo
• Monte Carlo and quasi-Monte Carlo
• Multivariate integration and approximation
• Noisy data
• Nonlinear and algebraic equations
• Numerical analysis
• Operator equations
• Optimization
• Quantum computing
• Scientific computation
• Tractability of multivariate problems
• Vision and image understanding