Improved KNN Algorithm Based on Probability and Adaptive K Value

Yulong Ling, Xiao Zhang, Yong Zhang
DOI: 10.1145/3456172.3456201
Published in: Proceedings of the 2021 7th International Conference on Computing and Data Engineering, 2021-01-15
Citations: 1

Abstract

As one of the most classical supervised learning algorithms, KNN is not only easy to understand but also solves classification problems well. Nevertheless, the KNN algorithm has a serious drawback: the voting rule used to predict the category of an unlabeled sample is too simple, and it does not take into account how close the per-class sample counts among the k nearest neighbors are to one another. To address this, the paper proposes a novel decision strategy based on probability and an iterative k value to improve the KNN algorithm. By repeatedly adjusting k until the probability of the largest class in the k-neighborhood reaches a specified threshold, the decision becomes sufficiently persuasive. Experimental results on several UCI public data sets show that, compared with the standard KNN algorithm and the distance-weighted KNN algorithm, the improved algorithm raises classification accuracy while reducing sensitivity to the hyperparameter k to a certain extent.
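The abstract's decision strategy can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: the function name, the starting k, the growth step, the cap `k_max`, and the threshold value are all assumptions made here for the sake of the example.

```python
import numpy as np

def adaptive_knn_predict(X_train, y_train, x, k0=3, k_max=31, threshold=0.7):
    """Sketch of an adaptive-k KNN decision: grow k until the majority
    class's probability in the k-neighborhood reaches `threshold`.

    Hypothetical parameter choices (k0, k_max, threshold) are not from
    the paper; they are placeholders for illustration.
    """
    # Euclidean distance from the query point to every training sample.
    dists = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(dists)          # neighbor indices, nearest first
    k = k0
    while True:
        neighbors = y_train[order[:k]]
        labels, counts = np.unique(neighbors, return_counts=True)
        best = np.argmax(counts)
        prob = counts[best] / k        # probability of the largest class
        # Stop once the majority class is persuasive enough, or k is exhausted.
        if prob >= threshold or k >= min(k_max, len(y_train)):
            return labels[best]
        k += 2                         # keep k odd to reduce voting ties
```

For example, on two well-separated clusters a query near the first cluster already satisfies the threshold at the initial k, so the loop exits immediately; only ambiguous neighborhoods trigger the k-growing behavior that the paper credits with reducing sensitivity to the choice of k.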