{"title":"Performance Augmentation of Base Classifiers Using Adaptive Boosting Framework for Medical Datasets","authors":"Durr e Nayab, Rehan Ullah Khan, A. M. Qamar","doi":"10.1155/2023/5542049","DOIUrl":null,"url":null,"abstract":"This paper investigates the performance enhancement of base classifiers within the AdaBoost framework applied to medical datasets. Adaptive boosting (AdaBoost), being an instance of boosting, combines other classifiers to enhance their performance. We conducted a comprehensive experiment to assess the efficacy of twelve base classifiers with the AdaBoost framework, namely, Bayes network, decision stump, ZeroR, decision tree, Naïve Bayes, J-48, voted perceptron, random forest, bagging, random tree, stacking, and AdaBoost itself. The experiments are carried out on five datasets from the medical domain based on various types of cancers, i.e., global cancer map (GCM), lymphoma-I, lymphoma-II, leukaemia, and embryonal tumours. The evaluation focuses on the accuracy, precision, and efficiency of the base classifiers in the AdaBoost framework. The results show that the performance of Naïve Bayes, Bayes network, and voted perceptron is highly improved compared to the rest of the base classifiers, attaining accuracies as high as 94.74%, 97.78%, and 97.78%, respectively. The results also show that in most cases, the base classifiers perform better with AdaBoost compared to their performance, i.e., for voted perceptron, the accuracy is improved up to 13.34%.For bagging, it is improved by up to 7%. This research aims to identify such base classifiers with optimal boosting capabilities within the AdaBoost framework for medical datasets. The significance of these results is that they provide insight into the performance of the base classifiers when used in the boosting framework to enhance the classification performance of classifiers in scenarios where individual classifiers do not perform up to the mark.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":"50 48","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Computational Intelligence and Soft Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2023/5542049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This paper investigates the performance enhancement of base classifiers within the AdaBoost framework applied to medical datasets. Adaptive boosting (AdaBoost), an instance of the boosting paradigm, combines base classifiers to enhance their performance. We conducted a comprehensive experiment to assess the efficacy of twelve base classifiers within the AdaBoost framework, namely, Bayes network, decision stump, ZeroR, decision tree, Naïve Bayes, J-48, voted perceptron, random forest, bagging, random tree, stacking, and AdaBoost itself. The experiments are carried out on five datasets from the medical domain covering various types of cancers, i.e., global cancer map (GCM), lymphoma-I, lymphoma-II, leukaemia, and embryonal tumours. The evaluation focuses on the accuracy, precision, and efficiency of the base classifiers in the AdaBoost framework. The results show that Naïve Bayes, Bayes network, and voted perceptron improve the most among the base classifiers, attaining accuracies as high as 94.74%, 97.78%, and 97.78%, respectively. The results also show that in most cases the base classifiers perform better with AdaBoost than on their own; for example, the accuracy of the voted perceptron improves by up to 13.34%, and that of bagging by up to 7%. This research aims to identify base classifiers with optimal boosting capabilities within the AdaBoost framework for medical datasets. The significance of these results is that they provide insight into how base classifiers behave when used in the boosting framework to enhance classification performance in scenarios where individual classifiers do not perform adequately on their own.
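As a concrete illustration of the setup described in the abstract, the sketch below wraps a Naïve Bayes base classifier in AdaBoost and compares its accuracy with and without boosting. This is a minimal sketch, not the authors' pipeline: the paper's experiments appear to use Weka-style classifiers and the five cancer datasets, so the scikit-learn estimators and the synthetic high-dimensional dataset here are stand-in assumptions for illustration only.

```python
# Hedged sketch: boosting a base classifier with AdaBoost and comparing it
# against the unboosted baseline. Synthetic data stands in for the paper's
# gene-expression-style medical datasets (many features, few samples).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Stand-in data: high-dimensional binary classification problem.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Baseline: the base classifier on its own.
base = GaussianNB()
base.fit(X_train, y_train)
acc_base = accuracy_score(y_test, base.predict(X_test))

# The same classifier wrapped in AdaBoost: at each boosting round the base
# learner is re-fit on reweighted samples, emphasising previous mistakes.
boosted = AdaBoostClassifier(GaussianNB(), n_estimators=50, random_state=0)
boosted.fit(X_train, y_train)
acc_boosted = accuracy_score(y_test, boosted.predict(X_test))

print(f"Naive Bayes alone:         {acc_base:.4f}")
print(f"Naive Bayes with AdaBoost: {acc_boosted:.4f}")
```

Swapping `GaussianNB()` for another estimator that supports sample weights (for example a depth-1 decision tree, i.e., a decision stump, which is AdaBoost's conventional default) reproduces the kind of base-classifier comparison the paper reports.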
Journal overview:
Applied Computational Intelligence and Soft Computing focuses on the disciplines of computer science, engineering, and mathematics. The scope of the journal covers applications across the natural and social sciences that employ the technologies of computational intelligence and soft computing. Although computational intelligence and soft computing are established fields, their new applications are still developing and can be regarded as an emerging area, which is the focus of this journal.