Kashvi Taunk, Sanjukta De, S. Verma, A. Swetapadma
2019 International Conference on Intelligent Computing and Control Systems (ICCS), May 2019. DOI: 10.1109/ICCS45141.2019.9065747
A Brief Review of Nearest Neighbor Algorithm for Learning and Classification
The k-Nearest Neighbor (kNN) algorithm is a simple yet effective machine learning method. It works for both classification and regression, though it is more widely used for classification. kNN organizes the training data into coherent subsets and classifies a newly presented input based on its similarity to previously stored examples: the input is assigned to the class that is most common among its k nearest neighbors. Though kNN is effective, it has several weaknesses. This paper surveys the kNN method and the modified versions proposed in prior research; these variants address the weaknesses of kNN and yield more efficient methods.
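The classification rule described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses Euclidean distance and a plain majority vote, with toy data and the helper name `knn_classify` chosen here for exposition.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training examples, using Euclidean distance."""
    # Sort training examples by their distance to the query point
    # and keep the k closest ones.
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    # Assign the query to the class most common among those neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: (feature vector, class label)
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.0), "B"), ((4.2, 3.9), "B")]

print(knn_classify(train, (1.1, 1.0), k=3))  # a point near the "A" cluster
```

This brute-force version scans every training point per query, which is the main efficiency weakness the surveyed variants try to remove (e.g. via data structures or prototype reduction); the choice of k and of the distance metric also strongly affects accuracy.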