Modified Euclidean-Canberra blend distance metric for kNN classifier

Pub Date: 2023-05-15 | DOI: 10.3233/idt-220233
Gaurav Sandhu, Amandeep Singh, Puneet Singh Lamba, Deepali Virmani, Gopal Chaudhary
{"title":"Modified Euclidean-Canberra blend distance metric for kNN classifier","authors":"Gaurav Sandhu, Amandeep Singh, Puneet Singh Lamba, Deepali Virmani, Gopal Chaudhary","doi":"10.3233/idt-220233","DOIUrl":null,"url":null,"abstract":"In today’s world different data sets are available on which regression or classification algorithms of machine learning are applied. One of the classification algorithms is k-nearest neighbor (kNN) which computes distance amongst various rows in a dataset. The performance of kNN is evaluated based on K-value and distance metric used, where K is the total count of neighboring elements. Many different distance metrics have been used by researchers in literature, one of them is Canberra distance metric. In this paper the performance of kNN based on Canberra distance metric is measured on different datasets, further the proposed Canberra distance metric, namely, Modified Euclidean-Canberra Blend Distance (MECBD) metric has been applied to the kNN algorithm which led to improvement of class prediction efficiency on the same datasets measured in terms of accuracy, precision, recall, F1-score for different values of k. Further, this study depicts that MECBD metric use led to improvement in accuracy value 80.4% to 90.3%, 80.6% to 85.4% and 70.0% to 77.0% for various data sets used. Also, implementation of ROC curves and auc for k= 5 is done to show the improvement is kNN model prediction which showed increase in auc values for different data sets, for instance increase in auc values from 0.873 to 0.958 for Spine (2 Classes) dataset, 0.857 to 0.940, 0.983 to 0.983 (no change), 0.910 to 0.957 for DH, SL and NO class for Spine (3 Classes) data set and 0.651 to 0.742 for Haberman’s data set.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/idt-220233","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Many datasets are available today to which machine-learning regression or classification algorithms can be applied. One such classification algorithm is k-nearest neighbors (kNN), which computes distances between the rows of a dataset. The performance of kNN depends on the value of k, the number of neighbors considered, and on the distance metric used. Many distance metrics have been studied in the literature; one of them is the Canberra distance. In this paper, the performance of kNN with the Canberra distance metric is first measured on several datasets; the proposed modification of the Canberra metric, the Modified Euclidean-Canberra Blend Distance (MECBD), is then applied to the kNN algorithm, improving class-prediction performance on the same datasets as measured by accuracy, precision, recall, and F1-score for different values of k. Use of the MECBD metric improved accuracy from 80.4% to 90.3%, from 80.6% to 85.4%, and from 70.0% to 77.0% on the datasets used. ROC curves and AUC values were also computed for k = 5 to show the improvement in kNN model prediction, with AUC increasing across the datasets: from 0.873 to 0.958 for the Spine (2 classes) dataset; from 0.857 to 0.940, 0.983 to 0.983 (no change), and 0.910 to 0.957 for the DH, SL, and NO classes of the Spine (3 classes) dataset; and from 0.651 to 0.742 for Haberman's dataset.
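For context, the two ingredients of the proposed metric are standard: the Euclidean distance is sqrt(Σ_i (x_i − y_i)²) and the Canberra distance is Σ_i |x_i − y_i| / (|x_i| + |y_i|). The sketch below shows how such metrics plug into a kNN classifier via scikit-learn's callable-metric interface. Since the abstract does not state the exact MECBD formula, the equal-weight blend, the alpha parameter, and the stand-in dataset are illustrative assumptions, not the paper's definition.

```python
# A minimal sketch, not the paper's method: the abstract does not give the
# exact MECBD formula, so the blend below (weighted sum of Euclidean and
# Canberra distances) and the stand-in dataset are assumptions for illustration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def canberra(x, y):
    # Standard Canberra distance: sum over i of |x_i - y_i| / (|x_i| + |y_i|),
    # with zero-denominator terms contributing 0 by convention.
    num = np.abs(x - y)
    den = np.abs(x) + np.abs(y)
    mask = den > 0
    return np.sum(num[mask] / den[mask])

def euclidean_canberra_blend(x, y, alpha=0.5):
    # Hypothetical blend: alpha * Euclidean + (1 - alpha) * Canberra.
    # The paper's MECBD is a *modified* blend whose exact form is defined in
    # the full text; alpha = 0.5 here is an assumption for illustration.
    euclidean = np.sqrt(np.sum((x - y) ** 2))
    return alpha * euclidean + (1.0 - alpha) * canberra(x, y)

# Stand-in binary dataset (the paper uses the Spine and Haberman's datasets,
# which are not bundled with scikit-learn).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, metric in [("canberra", canberra), ("blend", euclidean_canberra_blend)]:
    # A callable metric requires brute-force neighbor search.
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric, algorithm="brute")
    knn.fit(X_train, y_train)
    print(name, accuracy_score(y_test, knn.predict(X_test)))
```

Comparing the plain Canberra metric against a blended metric in this way mirrors the paper's evaluation protocol (same datasets, same k, different metric), which is what makes the accuracy and AUC comparisons in the abstract meaningful.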