Kernel ridge regression for supervised classification using Tensor Voting

Mandar Kulkarni, A. Mani, S. Venkatesan
{"title":"Kernel ridge regression for supervised classification using Tensor Voting","authors":"Mandar Kulkarni, A. Mani, S. Venkatesan","doi":"10.1109/INDICON.2014.7030387","DOIUrl":null,"url":null,"abstract":"In this paper, we propose classifiers based on Tensor Voting (TV) framework for supervised binary and multiclass problems. Traditional classification approaches classify a test sample or point based on its proximity to classes of a training set, where proximity is generally taken as some variant of the Euclidean distance in the original or some transformed higher dimensional space. However, we may need to have more intrinsic and computable features of samples than just Euclidean position since classes or patterns usually live on generalized manifolds (that might change in time as well) and thus need an easily parametrizable and computable but powerful notion of “distance” for classification of test samples that may be closer to a class manifolds by distance as well as by orientation or curvature. Our classification approach takes a step in this direction and infers a “local orientation or structure” by computing an already known tensor representation of the training set and performs classification based on the usual distance as well as the estimated (or given) local orientation. In our paper, we describe a novel Eager classification scheme where the central idea is that a kernel ridge regression is carried out on the TV output during the training phase itself so that only evaluation using the kernel is needed during the test phase, achieving a significant speed-up in testing time on benchmark datasets when compared to the lazy learning classifier without much compromise in classification accuracy. The only variable parameter in our approach is the scale of the voting. 
Our experiments on benchmark datasets demonstrate the efficacy of the proposed approach.","PeriodicalId":409794,"journal":{"name":"2014 Annual IEEE India Conference (INDICON)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 Annual IEEE India Conference (INDICON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDICON.2014.7030387","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, we propose classifiers based on the Tensor Voting (TV) framework for supervised binary and multiclass problems. Traditional classification approaches classify a test sample based on its proximity to the classes of a training set, where proximity is usually some variant of Euclidean distance in the original space or in a transformed higher-dimensional space. However, we may need more intrinsic, computable features of samples than Euclidean position alone: classes or patterns usually live on generalized manifolds (which may also change over time), so classification calls for an easily parametrizable yet powerful notion of "distance" under which a test sample may be close to a class manifold not only in position but also in orientation or curvature. Our approach takes a step in this direction: it infers a local orientation, or structure, by computing a known tensor representation of the training set, and classifies using both the usual distance and the estimated (or given) local orientation. We describe a novel eager classification scheme whose central idea is to carry out a kernel ridge regression on the TV output during the training phase itself, so that only a kernel evaluation is needed at test time. This achieves a significant speed-up in testing time on benchmark datasets compared to the lazy-learning classifier, without much compromise in classification accuracy. The only free parameter in our approach is the scale of the voting. Our experiments on benchmark datasets demonstrate the efficacy of the proposed approach.
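The abstract's two ingredients can be sketched in code. The following is a minimal illustration, not the paper's implementation: `structure_tensors` is a simplified second-order vote (full tensor voting also propagates stick votes and decomposes the accumulated tensor into stick/ball saliencies), and `krr_fit`/`krr_predict` show the eager split the abstract describes, where the linear system is solved once at training time and test time reduces to a kernel evaluation. All function names and the choice of an RBF kernel are illustrative assumptions.

```python
import numpy as np

def structure_tensors(X, scale=1.0):
    """Crude second-order vote: each neighbour contributes the outer
    product of its unit offset direction, decayed by a Gaussian of the
    distance. The dominant eigenvector of T[i] approximates the local
    orientation at X[i]; `scale` plays the role of the voting scale."""
    n, d = X.shape
    T = np.zeros((n, d, d))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v = X[j] - X[i]
            r = np.linalg.norm(v)
            u = v / r
            T[i] += np.exp(-(r / scale) ** 2) * np.outer(u, u)
    return T

def rbf_kernel(A, B, scale=1.0):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * scale ** 2))

def krr_fit(X, y, scale=1.0, lam=1e-3):
    # The "eager" step: solve (K + lam*I) alpha = y once during training.
    K = rbf_kernel(X, X, scale)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, scale=1.0):
    # Test time is a single kernel evaluation followed by a dot product.
    return rbf_kernel(X_test, X_train, scale) @ alpha
```

For binary classification one would encode labels as ±1 (or regress on TV-derived saliencies), then take the sign of `krr_predict` at test time; the multiclass case can be handled with one-vs-rest regressions.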