{"title":"Approximation error of single hidden layer neural networks with fixed weights","authors":"Vugar E. Ismailov","doi":"10.1016/j.ipl.2023.106467","DOIUrl":null,"url":null,"abstract":"<div><p><span><span>Neural networks with finitely many fixed weights have the universal </span>approximation property under certain conditions on compact subsets of the </span><em>d</em>-dimensional Euclidean space, where approximation process is considered. Such conditions were delineated in our paper <span>[26]</span><span>. But for many compact sets it is impossible to approximate multivariate functions with arbitrary precision and the question on estimation or efficient computation of approximation error arises. This paper provides an explicit formula for the approximation error of single hidden layer neural networks with two fixed weights.</span></p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"185 ","pages":"Article 106467"},"PeriodicalIF":0.7000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020019023001102","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
Neural networks with finitely many fixed weights have the universal approximation property under certain conditions on the compact subsets of d-dimensional Euclidean space on which the approximation process is considered. Such conditions were delineated in our paper [26]. However, for many compact sets it is impossible to approximate multivariate functions with arbitrary precision, and the question of estimating or efficiently computing the approximation error arises. This paper provides an explicit formula for the approximation error of single hidden layer neural networks with two fixed weights.
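The abstract does not reproduce the error formula itself, but the model class it refers to can be illustrated with a short sketch. Below is a minimal NumPy illustration (not the paper's construction) of a single hidden layer network in which every hidden unit uses one of two fixed weight vectors w1, w2, and only the thresholds and outer coefficients remain free. The target function, the choice of directions and thresholds, the tanh activation, and the least-squares fit are all assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only) of a single hidden layer network whose
# hidden units all use one of two fixed weight vectors w1, w2; only the
# thresholds and outer coefficients vary.
import numpy as np

def fixed_weight_network(x, w1, w2, thetas1, thetas2, c1, c2, sigma=np.tanh):
    """Evaluate sum_i c1[i]*sigma(w1.x - thetas1[i]) + sum_j c2[j]*sigma(w2.x - thetas2[j])."""
    t1 = sigma(x @ w1[:, None] - thetas1[None, :])   # (n_points, n_units1)
    t2 = sigma(x @ w2[:, None] - thetas2[None, :])   # (n_points, n_units2)
    return t1 @ c1 + t2 @ c2

# Illustrative target: f(x, y) = sin(x*y) sampled on [-1, 1]^2
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(400, 2))
f = np.sin(x[:, 0] * x[:, 1])

w1 = np.array([1.0, 0.0])         # the two fixed weight (direction) vectors
w2 = np.array([0.0, 1.0])
thetas1 = np.linspace(-1, 1, 10)  # fixed thresholds, chosen for illustration
thetas2 = np.linspace(-1, 1, 10)

# Fit only the outer coefficients by least squares; the weights stay fixed.
A = np.hstack([np.tanh(x @ w1[:, None] - thetas1[None, :]),
               np.tanh(x @ w2[:, None] - thetas2[None, :])])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
c1, c2 = coef[:10], coef[10:]

approx = fixed_weight_network(x, w1, w2, thetas1, thetas2, c1, c2)
print("max abs error on sample:", np.abs(approx - f).max())
```

Whatever residual error such a fit leaves cannot be driven to zero on arbitrary compact sets; quantifying that limit for two fixed weights is what the paper's explicit formula addresses.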
Journal description
Information Processing Letters invites submission of original research articles that focus on fundamental aspects of information processing and computing. This naturally includes work in the broadly understood field of theoretical computer science, although papers in all areas of scientific inquiry will be given consideration, provided that they describe research contributions credibly motivated by applications to computing and involve rigorous methodology. High quality experimental papers that address topics of sufficiently broad interest may also be considered.
Since its inception in 1971, Information Processing Letters has served as a forum for timely dissemination of short, concise and focused research contributions. Continuing with this tradition, and to expedite the reviewing process, manuscripts are generally limited in length to nine pages when they appear in print.