Succinct Matrix Approximation and Efficient k-NN Classification

Rong Liu, Yong Shi
{"title":"简洁矩阵逼近与高效k-NN分类","authors":"Rong Liu, Yong Shi","doi":"10.1109/ICDM.2007.41","DOIUrl":null,"url":null,"abstract":"This work reveals that instead of the polynomial bounds in previous literatures there exists a sharper bound of exponential form for the L2 norm of an arbitrary shaped random matrix. Based on the newly elaborated bound, a nonuniform sampling method is presented to succinctly approximate a matrix with a sparse binary one, and thus relieves the computation loads of k-NN classifier in both time and storage. The method is also pass-efficient because sampling and quantizing are combined together in a single step and the whole process can be completed within one pass over the input matrix. In the evaluations on compression ratio and reconstruction error, the sampling method exhibits impressive capability in providing succinct and tight approximations for the input matrices. The most significant finding in the classification experiment is that the k-NN classifier based on the approximation can even outperform the standard one. This provides another strong evidence for the claim that our method is especially capable in capturing intrinsic characteristics.","PeriodicalId":233758,"journal":{"name":"Seventh IEEE International Conference on Data Mining (ICDM 2007)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Succinct Matrix Approximation and Efficient k-NN Classification\",\"authors\":\"Rong Liu, Yong Shi\",\"doi\":\"10.1109/ICDM.2007.41\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work reveals that instead of the polynomial bounds in previous literatures there exists a sharper bound of exponential form for the L2 norm of an arbitrary shaped random matrix. Based on the newly elaborated bound, a nonuniform sampling method is presented to succinctly approximate a matrix with a sparse binary one, and thus relieves the computation loads of k-NN classifier in both time and storage. The method is also pass-efficient because sampling and quantizing are combined together in a single step and the whole process can be completed within one pass over the input matrix. In the evaluations on compression ratio and reconstruction error, the sampling method exhibits impressive capability in providing succinct and tight approximations for the input matrices. The most significant finding in the classification experiment is that the k-NN classifier based on the approximation can even outperform the standard one. 
This provides another strong evidence for the claim that our method is especially capable in capturing intrinsic characteristics.\",\"PeriodicalId\":233758,\"journal\":{\"name\":\"Seventh IEEE International Conference on Data Mining (ICDM 2007)\",\"volume\":\"150 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Seventh IEEE International Conference on Data Mining (ICDM 2007)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM.2007.41\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seventh IEEE International Conference on Data Mining (ICDM 2007)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM.2007.41","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This work reveals that, instead of the polynomial bounds in the previous literature, there exists a sharper bound of exponential form for the L2 norm of an arbitrarily shaped random matrix. Based on the newly elaborated bound, a nonuniform sampling method is presented to succinctly approximate a matrix with a sparse binary one, relieving the computation load of the k-NN classifier in both time and storage. The method is also pass-efficient: sampling and quantizing are combined in a single step, so the whole process can be completed within one pass over the input matrix. In the evaluations of compression ratio and reconstruction error, the sampling method exhibits an impressive capability to provide succinct and tight approximations of the input matrices. The most significant finding in the classification experiment is that the k-NN classifier based on the approximation can even outperform the standard one. This provides further strong evidence for the claim that our method is especially capable of capturing intrinsic characteristics.
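The abstract does not spell out the paper's exact nonuniform sampling distribution or quantization rule, so the sketch below only illustrates the general idea it describes: a single pass over the input matrix in which each entry is kept with a probability that depends on its magnitude and, when kept, is immediately quantized to a binary (sign-valued) level. It uses the classic Achlioptas/McSherry-style sign-quantized sparsification as a stand-in; the function name sparse_sign_approximation, its parameters, and the k-NN usage are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import sparse


def sparse_sign_approximation(A, s=None, rng=None):
    """One-pass, entry-wise sampling + quantization (illustrative sketch only).

    Each entry A[i, j] is kept with probability |A[i, j]| / s; if kept it is
    quantized to sign(A[i, j]) * s, otherwise it becomes 0.  Every output entry
    equals the input entry in expectation, so the sparse {-s, 0, +s}-valued
    matrix is an unbiased approximation of A.  This is a generic
    Achlioptas/McSherry-style scheme used only to illustrate combining
    sampling and quantizing in one pass; the paper's own nonuniform
    distribution may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = np.asarray(A, dtype=float)
    if s is None:
        s = np.abs(A).max()                  # ensures all keep-probabilities <= 1
    keep = rng.random(A.shape) < np.abs(A) / s
    return sparse.csr_matrix(np.where(keep, np.sign(A) * s, 0.0))


# Hypothetical usage: approximate the training matrix once, then run a
# brute-force k-NN classifier on the sparse representation.
if __name__ == "__main__":
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    y = (X[:, 0] > 0).astype(int)

    X_sparse = sparse_sign_approximation(X, rng=rng)
    clf = KNeighborsClassifier(n_neighbors=5, algorithm="brute",
                               metric="euclidean").fit(X_sparse, y)
    print("training accuracy on the approximation:", clf.score(X_sparse, y))
```

Keeping an entry with probability proportional to its magnitude and rescaling the retained value to ±s makes the approximation unbiased while spending most of the sampling budget on the large entries that dominate the L2 norm, which is how a sparse binary matrix of this form can stay close to the original in spectral norm.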