{"title":"Rates of Approximation by ReLU Shallow Neural Networks","authors":"Tong Mao, Ding-Xuan Zhou","doi":"10.48550/arXiv.2307.12461","DOIUrl":null,"url":null,"abstract":"Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from H\\\"older spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms. Although the topic has been well investigated in the setting of deep neural networks with many layers of hidden neurons, it is still open for shallow networks having only one hidden layer. In this paper, we provide rates of uniform approximation by these networks. We show that ReLU shallow neural networks with $m$ hidden neurons can uniformly approximate functions from the H\\\"older space $W_\\infty^r([-1, 1]^d)$ with rates $O((\\log m)^{\\frac{1}{2} +d}m^{-\\frac{r}{d}\\frac{d+2}{d+4}})$ when $r<d/2 +2$. Such rates are very close to the optimal one $O(m^{-\\frac{r}{d}})$ in the sense that $\\frac{d+2}{d+4}$ is close to $1$, when the dimension $d$ is large.","PeriodicalId":15442,"journal":{"name":"Journal of complex networks","volume":"8 3 1","pages":"101784"},"PeriodicalIF":2.2000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of complex networks","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.48550/arXiv.2307.12461","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 4
Abstract
Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Hölder spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms. Although the topic has been well investigated in the setting of deep neural networks with many layers of hidden neurons, it is still open for shallow networks having only one hidden layer. In this paper, we provide rates of uniform approximation by these networks. We show that ReLU shallow neural networks with $m$ hidden neurons can uniformly approximate functions from the Hölder space $W_\infty^r([-1, 1]^d)$ with rates $O((\log m)^{\frac{1}{2}+d}\, m^{-\frac{r}{d}\frac{d+2}{d+4}})$ when $r < d/2 + 2$. Such rates are very close to the optimal one $O(m^{-\frac{r}{d}})$ in the sense that $\frac{d+2}{d+4}$ is close to $1$ when the dimension $d$ is large.
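To make the statement concrete, the networks in question have the standard one-hidden-layer (shallow) form, and the rate can be instantiated for particular values of $d$ and $r$; the choice $d = 8$, $r = 2$ below is an illustrative assumption, not an example taken from the paper.

$$ f_m(x) \;=\; \sum_{k=1}^{m} c_k\, \mathrm{ReLU}(a_k \cdot x + b_k), \qquad \mathrm{ReLU}(t) = \max\{t, 0\}, \quad a_k \in \mathbb{R}^d,\ b_k, c_k \in \mathbb{R}. $$

With the assumed values $d = 8$ and $r = 2$ (so that $r < d/2 + 2 = 6$), the exponent of $m$ in the stated rate is
$$ \frac{r}{d}\cdot\frac{d+2}{d+4} \;=\; \frac{2}{8}\cdot\frac{10}{12} \;=\; \frac{5}{24} \approx 0.208, $$
compared with the optimal exponent $\frac{r}{d} = 0.25$; the gap closes as $d$ grows, since $\frac{d+2}{d+4} \to 1$.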
Journal Introduction:
Journal of Complex Networks publishes original articles and reviews that make a significant contribution to the analysis and understanding of complex networks and their applications in diverse fields. Complex networks are loosely defined as networks with nontrivial topology and dynamics, which appear as the skeletons of complex systems in the real world. The journal covers everything from the basic mathematical, physical and computational principles needed for studying complex networks to their applications leading to predictive models in molecular, biological, ecological, informational, engineering, social, technological and other systems. It includes, but is not limited to, the following topics:

- Mathematical and numerical analysis of networks
- Network theory and computer sciences
- Structural analysis of networks
- Dynamics on networks
- Physical models on networks
- Networks and epidemiology
- Social, socio-economic and political networks
- Ecological networks
- Technological and infrastructural networks
- Brain and tissue networks
- Biological and molecular networks
- Spatial networks
- Techno-social networks, i.e. online social networks, social networking sites, social media
- Other applications of networks
- Evolving networks
- Multilayer networks
- Game theory on networks
- Biomedicine related networks
- Animal social networks
- Climate networks
- Cognitive, language and informational networks