Robust Self-Tuning Sparse Subspace Clustering

Guangtao Wang, Jiayu Zhou, Jingjie Ni, Tingjin Luo, Wei Long, Hai Zhen, G. Cong, Jieping Ye
2017 IEEE International Conference on Data Mining Workshops (ICDMW), November 2017. DOI: 10.1109/ICDMW.2017.117
Citations: 0

Abstract

Sparse subspace clustering (SSC) is an effective approach to clustering high-dimensional data. However, adaptively selecting the number of clusters/eigenvectors for different data sets, especially when the data are corrupted by noise, is a major challenge in SSC and an open problem in the field of data mining. In this paper, exploiting the fact that eigenvectors are robust to noise, we develop a self-adaptive search method that selects the cluster number for SSC using the cluster-separation information carried by the eigenvectors. Our method solves the problem by identifying the cluster centers over the eigenvectors. We first design a new density-based metric, called the centrality coefficient gap, to measure this separation information, and estimate the cluster centers by maximizing the gap. Once the cluster centers are found, it is straightforward to group each remaining point into the cluster that contains its nearest higher-density neighbor. This yields a new clustering algorithm that eliminates the final, randomly initialized k-means stage of traditional SSC. We theoretically verify the correctness of the proposed method on noise-free data. Extensive experiments on synthetic and real-world data corrupted by noise demonstrate the robustness and effectiveness of the proposed method compared to well-established competitors.
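The center-identification and assignment steps described above follow a density-peaks pattern: pick cluster centers as high-density points that are far from any denser point, then hand each remaining point to the cluster of its nearest higher-density neighbor. The paper's centrality coefficient gap metric is not specified in this abstract, so the sketch below substitutes a standard density-peaks heuristic (Gaussian local density ρ times distance-to-denser-point δ, with centers maximizing γ = ρ·δ); the function name, the cutoff parameter `dc`, and the γ-based center selection are all illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def density_peaks_cluster(Y, k, dc=None):
    """Cluster the rows of Y (e.g. a spectral embedding built from
    eigenvectors) by picking k density peaks as cluster centers and
    assigning every other point to the cluster of its nearest
    higher-density neighbor. NOTE: illustrative stand-in for the
    paper's centrality-coefficient-gap criterion."""
    n = Y.shape[0]
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)  # pairwise distances
    if dc is None:
        dc = np.percentile(D[D > 0], 2)       # common cutoff-distance heuristic
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0  # Gaussian local density

    order = np.argsort(-rho)                  # indices in decreasing density
    delta = np.zeros(n)                       # distance to nearest denser point
    nn_higher = np.full(n, -1)                # index of that denser neighbor
    delta[order[0]] = D[order[0]].max()       # convention for the global peak
    for i in range(1, n):
        p = order[i]
        prev = order[:i]                      # all points denser than p
        j = prev[np.argmin(D[p, prev])]
        nn_higher[p] = j
        delta[p] = D[p, j]

    # centers: the k points maximizing gamma = rho * delta
    centers = np.argsort(-(rho * delta))[:k]
    labels = np.full(n, -1)
    labels[centers] = np.arange(k)

    # assign remaining points in decreasing density order, so each point's
    # denser neighbor is already labeled when we reach it
    for p in order:
        if labels[p] == -1:
            j = nn_higher[p]
            labels[p] = labels[j] if j >= 0 else labels[centers[np.argmin(D[p, centers])]]
    return labels
```

Because points are labeled in decreasing density order, no k-means-style random initialization is needed, which mirrors the abstract's claim that the final randomly initialized k-means stage of traditional SSC can be eliminated.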