{"title":"Imbalanced data classification using improved synthetic minority over-sampling technique","authors":"Yamijala Anusha, R. Visalakshi, Konda Srinivas","doi":"10.3233/mgs-230007","DOIUrl":null,"url":null,"abstract":"In data mining, deep learning and machine learning models face class imbalance problems, which result in a lower detection rate for minority class samples. An improved Synthetic Minority Over-sampling Technique (SMOTE) is introduced for effective imbalanced data classification. After collecting the raw data from PIMA, Yeast, E.coli, and Breast cancer Wisconsin databases, the pre-processing is performed using min-max normalization, cleaning, integration, and data transformation techniques to achieve data with better uniqueness, consistency, completeness and validity. An improved SMOTE algorithm is applied to the pre-processed data for proper data distribution, and then the properly distributed data is fed to the machine learning classifiers: Support Vector Machine (SVM), Random Forest, and Decision Tree for data classification. Experimental examination confirmed that the improved SMOTE algorithm with random forest attained significant classification results with Area under Curve (AUC) of 94.30%, 91%, 96.40%, and 99.40% on the PIMA, Yeast, E.coli, and Breast cancer Wisconsin databases.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.6000,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multiagent and Grid Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/mgs-230007","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Abstract
In data mining, deep learning and machine learning models face class imbalance problems, which result in a lower detection rate for minority class samples. An improved Synthetic Minority Over-sampling Technique (SMOTE) is introduced for effective imbalanced data classification. After collecting the raw data from the PIMA, Yeast, E.coli, and Breast Cancer Wisconsin databases, pre-processing is performed using min-max normalization, cleaning, integration, and data transformation techniques to obtain data with better uniqueness, consistency, completeness, and validity. The improved SMOTE algorithm is applied to the pre-processed data to balance the class distribution, and the balanced data is then fed to the machine learning classifiers: Support Vector Machine (SVM), Random Forest, and Decision Tree. Experimental evaluation confirmed that the improved SMOTE algorithm with Random Forest attained strong classification results, with Area Under the Curve (AUC) values of 94.30%, 91%, 96.40%, and 99.40% on the PIMA, Yeast, E.coli, and Breast Cancer Wisconsin databases, respectively.
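To make the described pipeline concrete, the following is a minimal sketch of the normalize / over-sample / classify / evaluate workflow. Since the abstract does not detail the modifications in the improved SMOTE, the standard SMOTE from the imbalanced-learn library is used here as a stand-in; the Breast Cancer Wisconsin data is loaded via scikit-learn for convenience, and the split ratio and hyper-parameters are illustrative assumptions rather than the authors' settings.

```python
# Sketch of the abstract's pipeline: min-max normalization, SMOTE
# over-sampling, Random Forest classification, AUC evaluation.
# NOTE: standard SMOTE stands in for the paper's improved SMOTE.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE

# Breast Cancer Wisconsin data (one of the four benchmark sets named above)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)  # assumed split

# Pre-processing: min-max normalization to [0, 1]
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Over-sample the minority class so the training set is balanced
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Classification with a Random Forest, evaluated by AUC on the test set
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_res, y_res)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.4f}")
```

The same skeleton applies to the other three datasets and classifiers: only the data-loading step and the estimator (SVM or Decision Tree) change, while the normalization and over-sampling stages stay identical.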