Random projections versus random selection of features for classification of high dimensional data

Sachin Mylavarapu, A. Kabán
{"title":"高维数据分类的随机投影与随机特征选择","authors":"Sachin Mylavarapu, A. Kabán","doi":"10.1109/UKCI.2013.6651321","DOIUrl":null,"url":null,"abstract":"Random projections and random subspace methods are very simple and computationally efficient techniques to reduce dimensionality for learning from high dimensional data. Since high dimensional data tends to be prevalent in many domains, such techniques are the subject of much recent interest. Random projections (RP) are motivated by their proven ability to preserve inter-point distances. By contrary, the random selection of features (RF) appears to be a heuristic, which nevertheless exhibits good performance in previous studies. In this paper we conduct a thorough empirical comparison between these two approaches in a variety of data sets with different characteristics. We also extend our study to multi-class problems. We find that RP tends to perform better than RF in terms of the classification accuracy in small sample settings, although RF is surprisingly good as well in many cases.","PeriodicalId":106191,"journal":{"name":"2013 13th UK Workshop on Computational Intelligence (UKCI)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Random projections versus random selection of features for classification of high dimensional data\",\"authors\":\"Sachin Mylavarapu, A. Kabán\",\"doi\":\"10.1109/UKCI.2013.6651321\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Random projections and random subspace methods are very simple and computationally efficient techniques to reduce dimensionality for learning from high dimensional data. Since high dimensional data tends to be prevalent in many domains, such techniques are the subject of much recent interest. Random projections (RP) are motivated by their proven ability to preserve inter-point distances. By contrary, the random selection of features (RF) appears to be a heuristic, which nevertheless exhibits good performance in previous studies. In this paper we conduct a thorough empirical comparison between these two approaches in a variety of data sets with different characteristics. We also extend our study to multi-class problems. 
We find that RP tends to perform better than RF in terms of the classification accuracy in small sample settings, although RF is surprisingly good as well in many cases.\",\"PeriodicalId\":106191,\"journal\":{\"name\":\"2013 13th UK Workshop on Computational Intelligence (UKCI)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 13th UK Workshop on Computational Intelligence (UKCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/UKCI.2013.6651321\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 13th UK Workshop on Computational Intelligence (UKCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UKCI.2013.6651321","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11

Abstract

Random projections and random subspace methods are simple and computationally efficient techniques for reducing dimensionality when learning from high dimensional data. Since high dimensional data is prevalent in many domains, such techniques have been the subject of much recent interest. Random projections (RP) are motivated by their proven ability to preserve inter-point distances. By contrast, the random selection of features (RF) appears to be a heuristic, which nevertheless exhibits good performance in previous studies. In this paper we conduct a thorough empirical comparison between these two approaches on a variety of data sets with different characteristics. We also extend our study to multi-class problems. We find that RP tends to perform better than RF in terms of classification accuracy in small sample settings, although RF is surprisingly good as well in many cases.
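The abstract describes the two reduction schemes only at a high level. The following is a minimal, self-contained Python sketch of what the comparison looks like in practice, assuming a Gaussian projection matrix for RP, uniform sampling of features without replacement for RF, synthetic two-class data, and a plain 1-nearest-neighbour classifier. None of these choices (data, classifier, projection distribution, dimensions) are taken from the paper itself; they are stand-ins to illustrate the idea.

```python
# Illustrative sketch only; not the authors' experimental protocol.
# Compares Gaussian random projection (RP) with random feature selection (RF)
# as dimensionality reduction steps before a simple 1-nearest-neighbour classifier.
import numpy as np

rng = np.random.default_rng(0)

def make_projection(d, k, rng):
    """Gaussian random projection matrix with entries ~ N(0, 1/k), so that
    inter-point distances are approximately preserved (Johnson-Lindenstrauss)."""
    return rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))

def random_feature_subset(d, k, rng):
    """Indices of a uniformly random subset of k of the d original features."""
    return rng.choice(d, size=k, replace=False)

def knn1_accuracy(X_train, y_train, X_test, y_test):
    """Accuracy of a plain 1-nearest-neighbour classifier (Euclidean distance)."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    pred = y_train[d2.argmin(axis=1)]
    return (pred == y_test).mean()

# Synthetic small-sample, high-dimensional data: two Gaussian classes in d = 1000,
# whose means differ only in the first 20 features.
d, n_train, n_test, k = 1000, 40, 200, 50
mu = np.zeros(d)
mu[:20] = 1.5

def sample(n):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d)) + y[:, None] * mu
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# RP: the same projection matrix must be applied to training and test data.
R = make_projection(d, k, rng)
acc_rp = knn1_accuracy(X_tr @ R, y_tr, X_te @ R, y_te)

# RF: the same random feature subset must be used for training and test data.
idx = random_feature_subset(d, k, rng)
acc_rf = knn1_accuracy(X_tr[:, idx], y_tr, X_te[:, idx], y_te)

print(f"RP accuracy: {acc_rp:.3f}  RF accuracy: {acc_rf:.3f}")
```

Because the projection matrix mixes all d coordinates whereas feature selection keeps only k of them, RP can retain some signal even when the random subset misses most informative features; this is one intuition behind the small-sample behaviour reported in the abstract.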