Stamatis Karlos, Nikos Fazakis, Konstantinos Kaleris, V. G. Kanas, S. Kotsiantis
An incremental self-trained ensemble algorithm
2018 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), 2018-05-25
DOI: 10.1109/EAIS.2018.8397180
Incremental learning has boosted the speed of data mining algorithms while sacrificing little, and sometimes no, predictive accuracy. The computational resources saved allow such algorithms to be combined efficiently with iterative procedures that improve the learned hypothesis using the vast amounts of available unlabeled data, in contrast to the purely supervised scenario, where this information is discarded because no mechanism exists to exploit it. The scope of this work is to examine the ability of a learning scheme, based on an incrementally updated ensemble algorithm, to perform classification tasks under a shortage of labeled data. Comparisons against 30 state-of-the-art semi-supervised methods over 50 publicly available datasets are provided, supporting our claims about the learning quality of the proposed algorithm.