Neural networks for function approximation
H. Mhaskar, L. Khachikyan
Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing, 1995-08-31
DOI: 10.1109/NNSP.1995.514875
We describe certain results of Mhaskar concerning the approximation capabilities of neural networks with one hidden layer. In particular, these results demonstrate the construction of neural networks, evaluating either a squashing function or a radial basis function, that achieve optimal approximation rates for functions in Sobolev spaces. We also report on the application of some of these ideas to the construction of general-purpose networks for the prediction of time series when the number of independent variables is known in advance, as with the Mackey-Glass series or the flour data.
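The setting above can be illustrated with a minimal sketch: a single hidden layer of sigmoidal ("squashing") units approximating a smooth target function. Note that this is a generic random-features least-squares fit for illustration only, not Mhaskar's explicit construction; the unit count, weight scale, and target function are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Target: a smooth function sampled on [-1, 1].
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(np.pi * x)

# One hidden layer of n sigmoidal units. Hidden weights and biases are
# drawn at random; only the output-layer coefficients are fit by least
# squares (a random-features sketch, not the paper's construction).
n = 30
w = rng.normal(scale=5.0, size=n)
b = rng.normal(scale=5.0, size=n)
H = sigmoid(np.outer(x, w) + b)            # (200, n) hidden activations
c, *_ = np.linalg.lstsq(H, f, rcond=None)  # output-layer weights

approx = H @ c
err = np.max(np.abs(approx - f))           # uniform error on the grid
```

Even this crude scheme drives the uniform error down quickly as n grows; the results summarized in the abstract concern how fast such errors can decay, in the optimal sense, when the target lies in a Sobolev space.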