Kernel Subspace Clustering based on Block Diagonal Representation and Sparse Constraints

Lili Fan, Gui-Fu Lu, Ganyi Tang, Yong Wang
DOI: 10.1109/DSA56465.2022.00055
Published in: 2022 9th International Conference on Dependable Systems and Their Applications (DSA), August 2022

Abstract

Subspace clustering is an effective method for high-dimensional data clustering. Under the assumption that the data are globally linear, it reconstructs each sample as a linear combination of the other samples via data self-representation, and achieves good results. However, the structure of real data is usually nonlinear, so subspace clustering algorithms designed under the linear-subspace assumption often fail to achieve satisfactory results on nonlinear data. To handle nonlinear data better, we use a kernel function to introduce a block diagonal structure and a sparse prior into the kernel feature space, and propose a kernel subspace clustering method based on block diagonal representation and sparse constraints (KSCBS). First, we perform subspace learning by combining block diagonal representation with sparse constraints, so that the learned coefficient matrix maintains a block diagonal structure and better reveals the true attributes of the data. Second, we use the kernel trick to map the nonlinear original data into an appropriate high-dimensional feature space, where the data can be treated as linear, thereby resolving the nonlinearity of the subspace data. Finally, we solve the objective function with an alternating minimization algorithm. Compared with other state-of-the-art linear and nonlinear subspace algorithms, our algorithm achieves better clustering performance on several common data sets.
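To make the kernel self-representation idea concrete, the following is a minimal sketch of sparse self-expressive learning in a kernel feature space: given a Gram matrix K, it minimizes the feature-space reconstruction error plus an L1 penalty via proximal gradient (ISTA) steps, then symmetrizes the coefficient matrix into an affinity for spectral clustering. This is an illustration under simplified assumptions, not the paper's KSCBS algorithm: the block diagonal regularizer and the exact alternating minimization scheme are omitted, and the function name and parameters are hypothetical.

```python
import numpy as np

def kernel_self_expressive(K, lam=0.1, n_iter=200):
    """Sketch: solve min_C 0.5*tr((I-C)^T K (I-C)) + lam*||C||_1, diag(C)=0,
    by ISTA. K is an (n, n) PSD kernel Gram matrix; the objective equals
    0.5*||Phi(X) - Phi(X)C||_F^2 expressed purely through K (kernel trick).
    Sparse-only illustration; KSCBS adds a block diagonal regularizer."""
    n = K.shape[0]
    C = np.zeros((n, n))
    # 1/Lipschitz constant of the gradient K(C - I), i.e. largest eigenvalue of K
    step = 1.0 / (np.linalg.eigvalsh(K)[-1] + 1e-8)
    for _ in range(n_iter):
        grad = K @ (C - np.eye(n))                   # gradient of the smooth term
        C = C - step * grad                          # gradient step
        C = np.sign(C) * np.maximum(np.abs(C) - step * lam, 0.0)  # soft-threshold (L1 prox)
        np.fill_diagonal(C, 0.0)                     # forbid trivial self-representation
    return C

# Toy demo: two well-separated point clouds, RBF kernel
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(3.0, 0.1, (5, 2))])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2)                                      # RBF Gram matrix
C = kernel_self_expressive(K)
W = 0.5 * (np.abs(C) + np.abs(C).T)                  # symmetric affinity matrix
```

The affinity W would then be fed to spectral clustering; in a good solution, nonzero coefficients concentrate within each cluster's block, which is exactly the block diagonal structure the paper's regularizer enforces explicitly.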