{"title":"人工神经网络分类训练算法性能比较","authors":"F. D. Baptista, Sandy Rodrigues, F. Morgado‐Dias","doi":"10.1109/WISP.2013.6657493","DOIUrl":null,"url":null,"abstract":"The Artificial Neural Network research community has been actively working since the beginning of the 80s. Since then many existing algorithm were adapted, many new algorithms were created and many times the set of algorithms was revisited and reinvented. As a result an enormous set of algorithms exists and, even for the experienced user it is not easy to choose the best algorithm for a given task or dataset, even though many of the algorithms are available in implementations of existing tools. In this work we have chosen a set of algorithms which are tested with a few datasets and tested several times for different initial sets of weights and different numbers of hidden neurons while keeping one hidden layer for all the Feedforward Artificial Neural Networks.","PeriodicalId":350883,"journal":{"name":"2013 IEEE 8th International Symposium on Intelligent Signal Processing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"Performance comparison of ANN training algorithms for classification\",\"authors\":\"F. D. Baptista, Sandy Rodrigues, F. Morgado‐Dias\",\"doi\":\"10.1109/WISP.2013.6657493\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Artificial Neural Network research community has been actively working since the beginning of the 80s. Since then many existing algorithm were adapted, many new algorithms were created and many times the set of algorithms was revisited and reinvented. As a result an enormous set of algorithms exists and, even for the experienced user it is not easy to choose the best algorithm for a given task or dataset, even though many of the algorithms are available in implementations of existing tools. In this work we have chosen a set of algorithms which are tested with a few datasets and tested several times for different initial sets of weights and different numbers of hidden neurons while keeping one hidden layer for all the Feedforward Artificial Neural Networks.\",\"PeriodicalId\":350883,\"journal\":{\"name\":\"2013 IEEE 8th International Symposium on Intelligent Signal Processing\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 IEEE 8th International Symposium on Intelligent Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WISP.2013.6657493\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE 8th International Symposium on Intelligent Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WISP.2013.6657493","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance comparison of ANN training algorithms for classification
The Artificial Neural Network research community has been active since the beginning of the 1980s. Since then, many existing algorithms have been adapted, many new algorithms have been created, and the set of algorithms has been revisited and reinvented many times. As a result, an enormous number of algorithms exists and, even for an experienced user, it is not easy to choose the best one for a given task or dataset, even though many of the algorithms are available in existing tools. In this work we select a set of training algorithms and test them on a few datasets, repeating each test several times with different initial sets of weights and different numbers of hidden neurons, while keeping a single hidden layer for all the Feedforward Artificial Neural Networks.
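The experimental protocol outlined above can be illustrated with a minimal sketch: train single-hidden-layer feedforward networks with several training algorithms, repeating each run for several weight initialisations and hidden-layer sizes, and average the classification accuracy. The sketch below uses scikit-learn's MLPClassifier, its built-in solvers, and the Iris dataset purely as illustrative stand-ins; the paper's own algorithms, datasets, and tooling are not reproduced here.

```python
# Sketch of the comparison protocol (assumed stand-ins, not the paper's setup):
# single-hidden-layer feedforward nets, several training algorithms, several
# weight initialisations and hidden-layer sizes, accuracy averaged per setting.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

solvers = ["lbfgs", "sgd", "adam"]   # stand-ins for the compared training algorithms
hidden_sizes = [2, 5, 10, 20]        # different numbers of hidden neurons
seeds = range(5)                     # different initial sets of weights

results = {}
for solver in solvers:
    for n_hidden in hidden_sizes:
        scores = []
        for seed in seeds:
            net = MLPClassifier(hidden_layer_sizes=(n_hidden,),  # one hidden layer only
                                solver=solver, max_iter=2000, random_state=seed)
            net.fit(X_train, y_train)
            scores.append(net.score(X_test, y_test))
        results[(solver, n_hidden)] = sum(scores) / len(scores)

for (solver, n_hidden), acc in sorted(results.items()):
    print(f"{solver:6s}  hidden={n_hidden:2d}  mean accuracy={acc:.3f}")
```

Averaging over several random initialisations, as done here, separates the effect of the training algorithm from the luck of a particular starting point, which is the comparison the abstract describes.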