Classification of Tweets Related to Natural Disasters Using Machine Learning Algorithms

Orlando Iparraguirre-Villanueva, Melquiades Melgarejo-Graciano, Gloria Castro-Leon, Sandro Olaya-Cotera, John Ruiz-Alvarado, Andrés Epifanía-Huerta, M. Cabanillas-Carbonell, Joselyn Zapata-Paulini

Int. J. Interact. Mob. Technol., pp. 144-162, published 2023-08-01.
DOI: https://doi.org/10.3991/ijim.v17i14.39907
Citations: 0
Abstract
Identifying and classifying text extracted from social networks by traditional methods is very complex. In recent years, computer science has advanced rapidly, significantly aiding the identification and classification of text extracted from social networks, specifically Twitter. This work aims to identify, classify, and analyze tweets related to real natural disasters, collected via the hashtag #NaturalDisasters, using machine learning (ML) algorithms: Bernoulli Naive Bayes (BNB), Multinomial Naive Bayes (MNB), Logistic Regression (LR), K-Nearest Neighbors (KNN), Decision Tree (DT), and Random Forest (RF). First, tweets related to natural disasters were identified, creating a training dataset of 122k geolocated tweets. Second, the data were cleaned using stemming and lemmatization techniques. Third, exploratory data analysis (EDA) was performed to gain an initial understanding of the data. Fourth, the BNB, MNB, LR, KNN, DT, and RF models were trained and tested using tools and libraries suited to this type of task. The trained models performed well: the BNB, MNB, and LR models each achieved 87% accuracy, while the KNN, DT, and RF models achieved 82%, 75%, and 86%, respectively. Moreover, the BNB, MNB, and LR models also led on the remaining metrics, such as processing time, test accuracy, precision, and F1 score, demonstrating that, for this context and with the trained dataset, they are the best text classifiers.
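The pipeline described in the abstract (clean tweets, vectorize, then train and compare the Naive Bayes and Logistic Regression variants) can be sketched roughly as below. This is a hypothetical illustration, not the authors' code: the paper does not specify its tooling, the 122k-tweet dataset is not available here, so a tiny toy corpus and a minimal regex-based cleaner stand in, and stemming/lemmatization and the other three models are omitted for brevity.

```python
# Hypothetical sketch of the described pipeline using scikit-learn.
# Toy data replaces the paper's 122k geolocated tweets; accuracies here
# are on training data and are NOT comparable to the paper's results.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def clean(tweet: str) -> str:
    """Minimal cleaning: strip URLs, @mentions, and '#' marks, lowercase."""
    tweet = re.sub(r"http\S+|@\w+|#", "", tweet)
    return tweet.lower().strip()

# Toy stand-in corpus: 1 = about a real natural disaster, 0 = not.
tweets = [
    "Massive earthquake hits the coast, thousands evacuated #NaturalDisasters",
    "Flood warning issued after heavy rains destroy bridges",
    "Wildfire spreads across the valley, homes lost",
    "Hurricane makes landfall, power outages reported",
    "My presentation today was a total disaster lol",
    "This traffic jam is a catastrophe, late again",
    "That movie was an absolute disaster of a plot",
    "My diet has been a disaster after the holidays",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Bag-of-words features; the paper does not state its vectorization scheme.
vec = CountVectorizer()
X = vec.fit_transform(clean(t) for t in tweets)

models = {
    "BNB": BernoulliNB(),
    "MNB": MultinomialNB(),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X, labels)
    acc = accuracy_score(labels, model.predict(X))
    print(f"{name}: train accuracy = {acc:.2f}")
```

In a real replication, the cleaning step would also apply stemming or lemmatization (e.g. via NLTK or spaCy), and evaluation would use a held-out test split rather than training accuracy.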