Feature selection embedded subspace clustering with low-rank and locality constraints

Cong-Zhe You, Xiaojun Wu
{"title":"Feature selection embedded subspace clustering with low-rank and locality constraints","authors":"Cong-Zhe You, Xiaojun Wu","doi":"10.1109/ISC2.2018.8656922","DOIUrl":null,"url":null,"abstract":"Subspace clustering analysis has good performance for the clustering problem of high dimensional data. In recent years, subspace clustering analysis algorithms based on representation has also been widely concerned. But when the feature dimension is too high, it not only increases the time complexity of the operation, but also reduces the performance of the algorithm, so it is an important research topic how to use the less feature dimension to carry on the subspace clustering analysis. In this paper, the feature selection method is added to the algorithm framework of low rank representation, and the two are fused into a new single model, and a new subspace clustering algorithm is proposed by using local constraint conditions. The algorithm selects a small number of related feature dimensions to represent low rank data. This not only reduces the complexity of the algorithm, but also helps to accurately reveal the relationship between the data, because the relationship between the data is not influenced by the unrelated feature dimension through the selection of the features. In addition, locality constraints are used in the learning process. Therefore, the learning process of feature and subspace clustering promotes each other and leads to powerful data representation. 
A large number of experiments on the real datasets have also proved the effectiveness of this method","PeriodicalId":344652,"journal":{"name":"2018 IEEE International Smart Cities Conference (ISC2)","volume":"288 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Smart Cities Conference (ISC2)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISC2.2018.8656922","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Subspace clustering performs well on the problem of clustering high-dimensional data, and in recent years representation-based subspace clustering algorithms have attracted wide attention. However, when the feature dimension is very high, it not only increases the time complexity of the computation but can also degrade clustering performance, so how to perform subspace clustering with fewer feature dimensions is an important research topic. In this paper, feature selection is embedded into the low-rank representation framework, fusing the two into a single new model, and a new subspace clustering algorithm is proposed by further imposing locality constraints. The algorithm selects a small number of relevant feature dimensions with which to compute the low-rank representation. This not only reduces the complexity of the algorithm but also helps reveal the relationships among the data more accurately, because feature selection prevents irrelevant feature dimensions from distorting those relationships. In addition, locality constraints are used during learning. As a result, the feature learning and subspace clustering processes promote each other and lead to a powerful data representation. Extensive experiments on real datasets demonstrate the effectiveness of the method.
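To make the low-rank representation (LRR) backbone that the abstract builds on concrete, here is a minimal sketch of the basic noiseless LRR step only. This is not the paper's fused model (which additionally embeds feature selection and locality constraints); it uses the known closed-form solution of min ||Z||_* s.t. X = XZ, namely the shape interaction matrix Z* = VVᵀ from the skinny SVD of X, and the function name `lrr_affinity` is illustrative:

```python
import numpy as np

def lrr_affinity(X, rank=None):
    """Affinity matrix from the noiseless low-rank representation (LRR).

    Solves  min_Z ||Z||_*  s.t.  X = X Z  in closed form: if X = U S V^T
    is the skinny SVD, the minimizer is the shape interaction matrix
    Z* = V V^T, which is block-diagonal when the columns of X lie in a
    union of independent subspaces.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if rank is None:
        # Estimate the numerical rank from the singular-value decay.
        rank = int(np.sum(s > 1e-10 * s[0]))
    V = Vt[:rank].T              # n x r matrix of right singular vectors
    Z = V @ V.T                  # low-rank representation coefficients
    W = np.abs(Z) + np.abs(Z.T)  # symmetrized affinity for spectral clustering
    return W
```

The resulting affinity `W` would then be fed to a standard spectral clustering step (e.g., any implementation accepting a precomputed affinity matrix) to obtain the final cluster labels, which is the usual last stage of representation-based subspace clustering.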