Clustering Center Optimization Under-Sampling Method for Unbalanced Data

Haitao Li, Mingjie Zhuang
DOI: 10.17706/jsw.15.3.74-85
Journal: e Informatica Softw. Eng. J.
Published: 2020-05-01 (Journal Article)
Citations: 1

Abstract

When the number of samples in one class is significantly larger or smaller than in another, learning algorithms for classification generalize poorly on the affected class; this is known as the imbalanced data problem. This paper proposes an under-sampling method based on optimized cluster center selection (BCUSM). First, the cluster center selection of the K-means clustering algorithm is optimized: the initial cluster centers are obtained by calculation rather than by random selection. The optimized algorithm is called OICSK-means. It is then used to cluster the negative (majority-class) samples, with the number of clusters set equal to the number of positive (minority-class) samples. From each cluster, the sample most similar to the cluster center under cosine similarity is selected as a negative training sample, and a new training set is formed together with the positive samples. Finally, the classifier is trained on this new training set. Data sets were selected from the UCI repository of the University of California, Irvine, a support vector machine (SVM) classifier was used for experimental simulation, and the classification performance of the method was compared with four other methods, including the synthetic minority oversampling technique (SMOTE). The experimental results demonstrate that BCUSM is effective across the different data sets in the experiment, which indicates that the BCUSM under-sampling method is more universal than random under-sampling (RUS); they also show that RUS easily loses important sample information when the training data has few feature attributes, resulting in poor classification. In addition, the SVM's classification performance on the balanced data set is significantly better than direct SVM classification of the original data set, which shows that the SVM is very sensitive to unbalanced data. When no processing is performed on the original training set, the SVM's classification accuracy for the positive class is greatly reduced; this also shows that the SVM performs better when the data set is balanced.
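The abstract states only that the initial cluster centers are "obtained by calculation" rather than drawn at random; the concrete OICSK-means rule is not reproduced here. The sketch below is an illustration only: it seeds K-means deterministically with a common heuristic (start from the sample nearest the data mean, then repeatedly add the sample farthest from the centers chosen so far), which is an assumption, not the paper's formula, and then runs plain Lloyd iterations.

```python
import numpy as np

def deterministic_init(X, k):
    """Hypothetical non-random seeding in the spirit of OICSK-means:
    first center = sample closest to the overall mean (a density proxy),
    remaining centers = samples farthest from all centers chosen so far."""
    centers = [X[np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
    for _ in range(1, k):
        # distance of every sample to its nearest already-chosen center
        d = np.min(np.linalg.norm(X[:, None, :] - np.array(centers)[None, :, :],
                                  axis=2), axis=1)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Plain Lloyd iterations seeded with the computed (non-random) centers."""
    C = deterministic_init(X, k)
    for _ in range(iters):
        # assign each sample to its nearest center
        labels = np.argmin(np.linalg.norm(X[:, None, :] - C[None, :, :],
                                          axis=2), axis=1)
        # recompute centers; keep the old center if a cluster went empty
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                      else C[j] for j in range(k)])
    return C, labels
```

Because the seeding is deterministic, repeated runs on the same data produce the same clusters, which is the practical benefit the abstract claims over random initialization.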
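The selection step described in the abstract, keeping from each cluster of negative samples the one with the highest cosine similarity to its cluster center and merging the result with the positive samples, can be sketched as follows. The function name and argument layout are ours, and the clustering of the negatives (into as many clusters as there are positive samples) is assumed to have been done already, e.g. by OICSK-means.

```python
import numpy as np

def bcusm_undersample(X_pos, X_neg, centers, labels):
    """Sketch of the BCUSM selection step: from each cluster of negatives,
    keep the single sample most cosine-similar to its center, then merge
    with the positives to form a balanced training set (names are ours)."""
    keep = []
    for j, c in enumerate(centers):
        members = X_neg[labels == j]
        if len(members) == 0:
            continue  # skip empty clusters
        # cosine similarity of each member to the cluster center
        sims = (members @ c) / (np.linalg.norm(members, axis=1)
                                * np.linalg.norm(c) + 1e-12)
        keep.append(members[np.argmax(sims)])
    X_neg_kept = np.array(keep)
    X = np.vstack([X_pos, X_neg_kept])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg_kept))])
    return X, y
```

The balanced (X, y) pair is then what the abstract feeds to the SVM classifier; any classifier accepting a feature matrix and label vector would slot in at that point.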