Qianqian Qiu, Min Li, Sijie Shen, Shaobo Deng, Sujie Guan
2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT), 2022-12-09. DOI: 10.1109/ACAIT56212.2022.10137909
An Attribute Contribution-Based K-Nearest Neighbor Classifier
The k-nearest neighbor algorithm (KNN) is one of the most representative classification methods in data mining. However, when the traditional Euclidean distance formula is used to compute nearest-neighbor distances, KNN ignores the relationships between attributes in the feature space. To address this issue, a covariance matrix is used to calculate the attribute contribution of the samples, and an attribute contribution-based k-nearest neighbor classifier (ACWKNN) is proposed in this paper. Comparative experiments on UCI standard datasets show that the proposed method outperforms other KNN algorithms.
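The abstract does not give ACWKNN's exact weighting formula, but the general idea of weighting the Euclidean distance by covariance-derived attribute contributions can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the choice of taking each attribute's contribution as its normalized sum of absolute covariances, and the function names `attribute_weights` and `weighted_knn_predict`, are assumptions for illustration only.

```python
import numpy as np

def attribute_weights(X):
    """Derive per-attribute weights from the covariance matrix of X.

    Illustrative assumption (not the paper's formula): each attribute's
    contribution is the sum of the absolute covariances it shares with
    every attribute, normalized so the weights sum to 1.
    """
    cov = np.cov(X, rowvar=False)          # d x d covariance matrix
    contrib = np.abs(cov).sum(axis=0)      # one contribution score per attribute
    return contrib / contrib.sum()

def weighted_knn_predict(X_train, y_train, x, k, w):
    """Classify x by majority vote among its k nearest training samples
    under an attribute-weighted Euclidean distance."""
    d = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1))
    nn = np.argsort(d)[:k]                 # indices of the k nearest neighbors
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]       # majority class among neighbors
```

Compared with plain Euclidean distance, attributes with larger (hypothetically defined) contribution scores dominate the distance, so neighbors are selected with the attribute relationships taken into account.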