{"title":"Benchmarking of update learning strategies on digit classifier systems","authors":"D. Barbuzzi, D. Impedovo, G. Pirlo","doi":"10.1109/ICFHR.2012.186","DOIUrl":null,"url":null,"abstract":"Three different strategies in order to re-train classifiers, when new labeled data become available, are presented in a multi-expert scenario. The first method is the use of the entire new dataset. The second one is related to the consideration that each single classifier is able to select new samples starting from those on which it performs a missclassification. Finally, by inspecting the multi expert system behavior, a sample misclassified by an expert, is used to update that classifier only if it produces a miss-classification by the ensemble of classifiers. This paper provides a comparison of three approaches under different conditions on two state of the art classifiers (SVM and Naive Bayes) by taking into account four different combination techniques. Experiments have been performed by considering the CEDAR (handwritten digit) database. It is shown how results depend by the amount of the new training samples, as well as by the specific combination decision schema and by classifiers in the ensemble.","PeriodicalId":291062,"journal":{"name":"2012 International Conference on Frontiers in Handwriting Recognition","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 International Conference on Frontiers in Handwriting Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICFHR.2012.186","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
Three different strategies for re-training classifiers when new labeled data become available are presented in a multi-expert scenario. The first method uses the entire new dataset. The second lets each single classifier select, from the new data, the samples that it misclassifies. Finally, by inspecting the behavior of the multi-expert system, a sample misclassified by an expert is used to update that classifier only if it is also misclassified by the ensemble of classifiers. This paper compares the three approaches under different conditions on two state-of-the-art classifiers (SVM and Naive Bayes), taking into account four different combination techniques. Experiments have been performed on the CEDAR (handwritten digit) database. It is shown how the results depend on the amount of new training samples, as well as on the specific combination decision scheme and on the classifiers in the ensemble.
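The following is a minimal sketch, not the authors' exact protocol, of how the three update strategies described in the abstract could be implemented. It assumes scikit-learn SVM and Naive Bayes experts, a plain majority vote standing in for the paper's four combination techniques, random data standing in for the CEDAR digits, and hypothetical helper names (`majority_vote`, `retrain`).

```python
# Sketch of the three feedback-based update strategies from the abstract.
# Assumptions: majority vote as the combination rule, random data in place
# of CEDAR, scikit-learn SVC and GaussianNB as the two experts.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC


def majority_vote(experts, X):
    """Combine expert decisions by majority vote (one of many possible rules)."""
    votes = np.stack([e.predict(X) for e in experts])  # (n_experts, n_samples)
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)


def retrain(experts, X_old, y_old, X_new, y_new, strategy):
    """Re-train each expert, selecting new labeled samples per strategy."""
    combined = majority_vote(experts, X_new)
    for expert in experts:
        pred = expert.predict(X_new)
        if strategy == "all":                # strategy 1: entire new dataset
            mask = np.ones(len(y_new), dtype=bool)
        elif strategy == "own_errors":       # strategy 2: samples this expert misclassifies
            mask = pred != y_new
        elif strategy == "ensemble_errors":  # strategy 3: samples misclassified by the
            mask = (pred != y_new) & (combined != y_new)  # expert AND by the ensemble
        else:
            raise ValueError(strategy)
        expert.fit(np.vstack([X_old, X_new[mask]]),
                   np.concatenate([y_old, y_new[mask]]))
    return experts


# Hypothetical usage with random features standing in for handwritten digits.
rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(200, 64)), rng.integers(0, 10, 200)
X_new, y_new = rng.normal(size=(100, 64)), rng.integers(0, 10, 100)
experts = [SVC(kernel="rbf").fit(X_old, y_old), GaussianNB().fit(X_old, y_old)]
experts = retrain(experts, X_old, y_old, X_new, y_new, strategy="ensemble_errors")
```

The combination rule is the main free parameter here: swapping `majority_vote` for another decision scheme changes which new samples strategy 3 selects, which is consistent with the abstract's observation that results depend on the combination decision scheme.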