J. Torres-Sospedra, M. Fernández-Redondo, C. Hernández-Espinosa
"A research on combination methods for ensembles of multilayer feedforward"
DOI: 10.1109/IJCNN.2005.1556011
Published in: Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.
Publication date: 2005-12-27
Citations: 16
Abstract
As shown in the bibliography, training an ensemble of networks is an effective way to improve performance relative to a single network. The two key factors in designing an ensemble are how to train the individual networks and how to combine their different outputs into a single output class. In this paper, we focus on the combination methods. We study the performance of fourteen different combination methods for ensembles of the types "simple ensemble" and "decorrelated". For the "simple ensemble" with a small number of networks, the Zimmermann method achieves the best performance. When the number of networks is between 9 and 20, the weighted average is the best alternative. Finally, for the "decorrelated" ensemble, the best performing method is averaging, over a wide range of ensemble sizes.
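The two combination methods the abstract highlights, simple averaging and the weighted average, can be illustrated with a minimal sketch. This is not the paper's code; the network outputs and weights below are made-up values, and per-network weights would in practice be derived from validation performance:

```python
# Illustrative sketch: combining the class-probability outputs of an
# ensemble of networks by simple averaging and by weighted averaging.
import numpy as np

def average_combine(outputs):
    """Simple average: mean of the per-network output vectors."""
    return np.mean(outputs, axis=0)

def weighted_average_combine(outputs, weights):
    """Weighted average: each network's output scaled by its weight."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return np.tensordot(w, outputs, axes=1)

# Outputs of 3 hypothetical networks for one input, over 3 classes.
outputs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
])

avg = average_combine(outputs)                       # [0.5, 0.3333, 0.1667]
wavg = weighted_average_combine(outputs, [2, 1, 1])  # [0.55, 0.3, 0.15]
predicted_class = int(np.argmax(avg))                # class 0
```

With equal weights the weighted average reduces to the simple average; the abstract's finding is that tuning these weights pays off for simple ensembles of 9 to 20 networks.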