...and the perceptron is better than its reputation after all!
A. Faessler
1st International Symposium on Neuro-Fuzzy Systems, AT '96. Conference Report, 29 August 1996
DOI: 10.1109/ISNFS.1996.603831
A large class of functions of one or more variables can be approximated by a (generally multilayer) feed-forward network in which only the weights of the last layer need to be trained. All other weights can be chosen in advance, depending only on the desired accuracy with which the training examples are to be approximated, and independently of the examples themselves. Thus only a perceptron remains to be trained.
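The abstract's construction is not spelled out here, but the general idea — fix the hidden-layer weights without looking at the data, then train only the linear output layer — can be sketched as follows. This is a minimal illustration, assuming randomly chosen hidden weights and a least-squares fit for the output layer; the paper itself selects the fixed weights according to the desired accuracy rather than at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training examples: approximate f(x) = sin(2*pi*x) on [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0])

# Hidden layer: weights fixed independently of the examples.
# (Here they are random; the paper chooses them from the target accuracy.)
n_hidden = 50
W = rng.normal(size=(1, n_hidden)) * 10.0
b = rng.uniform(-10.0, 10.0, size=n_hidden)
H = np.tanh(X @ W + b)  # fixed hidden activations, never retrained

# Only the last layer is trained: a linear least-squares fit,
# i.e. the single remaining "perceptron" of the abstract.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

# Maximum error on the training points.
err = np.max(np.abs(H @ w_out - y))
```

Because the hidden activations are fixed, the whole training problem collapses to linear regression — no back-propagation through the network is needed, which is the practical appeal of the approach.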