{"title":"多隐层前馈神经网络自适应回归估计的收敛速率","authors":"M. Kohler, A. Krzyżak","doi":"10.1109/ISIT.2005.1523580","DOIUrl":null,"url":null,"abstract":"We present a general bound on the expected L2 error of adaptive least squares estimates. By applying it to multiple hidden layer feedforward neural network regression function estimates we are able to obtain optimal (up to log factor) rates of convergence for Lipschitz classes and fast rates of convergence for some classes of regression functions such as additive functions","PeriodicalId":166130,"journal":{"name":"Proceedings. International Symposium on Information Theory, 2005. ISIT 2005.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Rates of convergence for adaptive regression estimates with multiple hidden layer feedforward neural networks\",\"authors\":\"M. Kohler, A. Krzyżak\",\"doi\":\"10.1109/ISIT.2005.1523580\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a general bound on the expected L2 error of adaptive least squares estimates. By applying it to multiple hidden layer feedforward neural network regression function estimates we are able to obtain optimal (up to log factor) rates of convergence for Lipschitz classes and fast rates of convergence for some classes of regression functions such as additive functions\",\"PeriodicalId\":166130,\"journal\":{\"name\":\"Proceedings. International Symposium on Information Theory, 2005. ISIT 2005.\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. International Symposium on Information Theory, 2005. ISIT 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISIT.2005.1523580\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. International Symposium on Information Theory, 2005. ISIT 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.2005.1523580","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Rates of convergence for adaptive regression estimates with multiple hidden layer feedforward neural networks
We present a general bound on the expected L2 error of adaptive least squares estimates. By applying it to regression function estimates based on multiple hidden layer feedforward neural networks, we obtain optimal (up to a logarithmic factor) rates of convergence for Lipschitz classes and fast rates of convergence for certain classes of regression functions, such as additive functions.
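For context, here is a minimal sketch of the quantities involved, written in standard nonparametric-regression notation rather than the paper's own; the constant c, the log exponent \kappa, and the smoothness parameter p below are generic placeholders, and the rates are the classical minimax rates from the literature rather than figures quoted from the paper. The expected L2 error of an estimate m_n of the regression function m(x) = E[Y | X = x] under the design distribution \mu is

\[
  \mathbf{E}\int \lvert m_n(x) - m(x)\rvert^2 \,\mu(dx).
\]

"Optimal up to a logarithmic factor for Lipschitz classes" then corresponds to a bound of the form

\[
  \mathbf{E}\int \lvert m_n(x) - m(x)\rvert^2 \,\mu(dx)
  \;\le\; c\,(\log n)^{\kappa}\, n^{-2p/(2p+d)}
  \qquad (p = 1 \text{ for Lipschitz classes on } \mathbb{R}^d),
\]

while for additive regression functions m(x) = m_1(x_1) + \dots + m_d(x_d), the attainable rate improves to roughly n^{-2p/(2p+1)}, which no longer deteriorates with the dimension d.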