Fake News Detection: Exploring the Efficiency of Soft and Hard Voting Ensemble
Arifur Rahman, Sakib Zaman, Shahriar Parvej, Pintu Chandra Shill, Md. Shahidul Salim, Dola Das
Procedia Computer Science, Volume 252, 2025, Pages 748-757
DOI: 10.1016/j.procs.2025.01.035
Available at: https://www.sciencedirect.com/science/article/pii/S1877050925000353
Abstract
Fake news dissemination is a critical problem, jeopardizing the reliability of information and eroding public confidence. Machine learning offers a way to distinguish real from misleading news. In this study, we applied a range of classifiers, including KNN, Logistic Regression, Decision Tree, LightGBM, XGBoost, Gradient Boosting, Random Forest, and Naive Bayes, to an open-source dataset. TF-IDF (term frequency-inverse document frequency) and Count vectorization were used for text feature extraction, and soft and hard voting ensembles were built from the best three and best five models to boost performance. Additionally, we employed BERT (Bidirectional Encoder Representations from Transformers) and CNN (Convolutional Neural Network) models with several optimizers (Adam, RMSprop, SGD, AdaDelta, and AdaGrad) to further improve classification. The BERT model optimized with Adam achieved the highest accuracy of 99.93%, with perfect precision (100%) and high recall (99.84%). Furthermore, using Count vectorization for feature extraction, the soft voting ensemble of the top five models (LGBM, XGB, RF, DT, LR) achieved the best performance among all ensemble models: 99.87% accuracy, with precision, recall, F1-score, MAE, MSE, RMSE, RAE, and RRSE of 99.83%, 99.88%, 99.86%, 0.13%, 0.13%, 3.66%, 0.27%, and 7.80%, respectively. Exhaustive experimentation confirms the applicability and efficiency of the recommended models in identifying fake news.
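To illustrate the kind of pipeline the abstract describes, below is a minimal sketch (not the authors' code) of a soft and hard voting ensemble over the five named classifiers (LightGBM, XGBoost, Random Forest, Decision Tree, Logistic Regression) with Count vectorization. The toy corpus, train/test split, and default hyperparameters are placeholder assumptions; the paper's actual dataset, preprocessing, and tuning are not reproduced here.

```python
# Sketch of a soft/hard voting ensemble for fake news classification.
# Assumes scikit-learn, lightgbm, and xgboost are installed; data is a toy placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

# Placeholder corpus: in the paper an open-source fake news dataset is used instead.
texts = [
    "government announces new tax policy for 2025",
    "celebrity secretly replaced by clone, insiders claim",
    "central bank raises interest rates by quarter point",
    "miracle cure eliminates all diseases overnight",
    "city council approves budget for road repairs",
    "aliens endorse presidential candidate, sources say",
    "university publishes study on renewable energy storage",
    "drinking bleach boosts immunity, viral post claims",
]
labels = [0, 1, 0, 1, 0, 1, 0, 1]  # 0 = real, 1 = fake

# Count vectorization turns each article into a bag-of-words feature vector.
X = CountVectorizer(stop_words="english").fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=42, stratify=labels
)

estimators = [
    ("lgbm", LGBMClassifier()),
    ("xgb", XGBClassifier()),
    ("rf", RandomForestClassifier()),
    ("dt", DecisionTreeClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
]

# Soft voting averages predicted class probabilities across models;
# hard voting takes a majority vote over predicted labels.
for voting in ("soft", "hard"):
    ensemble = VotingClassifier(estimators=estimators, voting=voting)
    ensemble.fit(X_train, y_train)
    y_pred = ensemble.predict(X_test)
    print(f"{voting} voting  accuracy={accuracy_score(y_test, y_pred):.4f}  "
          f"f1={f1_score(y_test, y_pred, zero_division=0):.4f}")
```

Swapping CountVectorizer for TfidfVectorizer gives the TF-IDF variant of the same pipeline; the paper reports results for both feature representations.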