Kernel ridge regression for supervised classification using Tensor Voting

Authors: Mandar Kulkarni, A. Mani, S. Venkatesan
Venue: 2014 Annual IEEE India Conference (INDICON)
Published: December 2014
DOI: 10.1109/INDICON.2014.7030387
Citations: 0
Abstract
In this paper, we propose classifiers based on the Tensor Voting (TV) framework for supervised binary and multiclass problems. Traditional classification approaches classify a test sample or point based on its proximity to the classes of a training set, where proximity is generally taken as some variant of the Euclidean distance in the original space or in some transformed higher-dimensional space. However, we may need more intrinsic and computable features of samples than Euclidean position alone: classes or patterns usually live on generalized manifolds (which may also change over time), so classification requires an easily parametrizable and computable, yet powerful, notion of "distance" for test samples that may be close to a class manifold in position as well as in orientation or curvature. Our classification approach takes a step in this direction: it infers a "local orientation or structure" by computing an already known tensor representation of the training set and performs classification based on the usual distance as well as the estimated (or given) local orientation. We describe a novel eager classification scheme whose central idea is to carry out a kernel ridge regression on the TV output during the training phase itself, so that only kernel evaluations are needed during the test phase. This achieves a significant speed-up in testing time on benchmark datasets compared to the lazy-learning classifier, without much compromise in classification accuracy. The only variable parameter in our approach is the scale of the voting. Our experiments on benchmark datasets demonstrate the efficacy of the proposed approach.
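The abstract's central idea, that fitting a kernel ridge regression during training reduces the test phase to kernel evaluations, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the TV-derived training targets are abstracted as a generic label vector `y`, and the RBF kernel, the function names, and the regularization parameter `lam` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, scale=1.0):
    # Pairwise squared Euclidean distances, mapped through a Gaussian;
    # `scale` plays the role of the single free scale parameter.
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * scale**2))

def krr_fit(X, y, scale=1.0, lam=1e-3):
    # "Eager" training phase: solve (K + lam*I) alpha = y once,
    # so all expensive work happens before any test sample arrives.
    K = rbf_kernel(X, X, scale)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, scale=1.0):
    # Test phase: only kernel evaluations against stored coefficients,
    # i.e. a single matrix-vector product per batch of test points.
    return rbf_kernel(X_test, X_train, scale) @ alpha
```

For binary classification, the sign of the regressed value can serve as the predicted label, which is one common way to turn the ridge-regression output into a class decision.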