{"title":"An Active Under-Sampling Approach for Imbalanced Data Classification","authors":"Zeping Yang, Daqi Gao","doi":"10.1109/ISCID.2012.219","DOIUrl":null,"url":null,"abstract":"An active under-sampling approach is proposed for handling the imbalanced problem in this paper. Traditional classifiers usually assume that training examples are evenly distributed among different classes, so they are often biased to the majority class and tend to ignore the minority class. in this case, it is important to select the suitable training dataset for learning from imbalanced data. the samples of the majority class which are far away from the decision boundary should be got rid of the training dataset automatically in our algorithm, and this process doesn't change the density distribution of the whole training dataset. as a result, the ratio of majority class is decreased significantly, and the final balance training dataset is more suitable for the traditional classification algorithms. Compared with other under-sampling methods, our approach can effectively improve the classification accuracy of minority classes while maintaining the overall classification performance by the experimental results.","PeriodicalId":246432,"journal":{"name":"2012 Fifth International Symposium on Computational Intelligence and Design","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Fifth International Symposium on Computational Intelligence and Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCID.2012.219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15
Abstract
An active under-sampling approach is proposed in this paper for handling the imbalanced data problem. Traditional classifiers usually assume that training examples are evenly distributed among different classes, so they are often biased toward the majority class and tend to ignore the minority class. In this case, it is important to select a suitable training dataset for learning from imbalanced data. In our algorithm, the samples of the majority class that are far away from the decision boundary are automatically removed from the training dataset, and this process does not change the density distribution of the whole training dataset. As a result, the proportion of the majority class is decreased significantly, and the final balanced training dataset is more suitable for traditional classification algorithms. The experimental results show that, compared with other under-sampling methods, our approach can effectively improve the classification accuracy on minority classes while maintaining the overall classification performance.
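
The following is a minimal, hypothetical sketch of the general idea described in the abstract, not the authors' exact algorithm: a linear SVM is assumed as a rough estimate of the decision boundary, and the majority-class samples farthest from that boundary are discarded until the two classes are balanced. The function name `undersample_far_majority` and the use of scikit-learn are illustrative assumptions.

```python
# Hypothetical sketch of boundary-based under-sampling (not the paper's exact method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def undersample_far_majority(X, y, majority_label=0):
    """Drop the majority-class samples farthest from an estimated decision boundary."""
    clf = SVC(kernel="linear").fit(X, y)           # rough linear boundary estimate
    distances = np.abs(clf.decision_function(X))   # unsigned distance of each sample to the boundary

    maj_idx = np.where(y == majority_label)[0]
    min_idx = np.where(y != majority_label)[0]

    # Keep only the majority samples closest to the boundary,
    # as many as there are minority samples, so the result is balanced.
    keep_maj = maj_idx[np.argsort(distances[maj_idx])[: len(min_idx)]]
    keep = np.concatenate([keep_maj, min_idx])
    return X[keep], y[keep]

# Usage on a synthetic imbalanced dataset (roughly 90% majority / 10% minority).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_bal, y_bal = undersample_far_majority(X, y, majority_label=0)
print(np.bincount(y), "->", np.bincount(y_bal))
```

In this sketch the class ratio after under-sampling is forced to 1:1; the paper's approach additionally aims to preserve the density distribution of the retained majority samples, which a simple distance cut-off like this does not guarantee.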