Joint Low-rank and Orthogonal Deep Multi-view Subspace Clustering based on Local Fusion

Guixiang Wang, Hongwei Yin, Wenjun Hu, Y. Liu, Ruiqin Wang
{"title":"Joint Low-rank and Orthogonal Deep Multi-view Subspace Clustering based on Local Fusion","authors":"Guixiang Wang, Hongwei Yin, Wenjun Hu, Y. Liu, Ruiqin Wang","doi":"10.1109/ICDMW58026.2022.00017","DOIUrl":null,"url":null,"abstract":"In recent years, a number of multi-view clustering methods have been proposed through a global fusion paradigm. These methods take the entire sample space as the fusion object, where the global complementarity between views is explored and exploited to improve the clustering performance. However, local structures with strong or weak clustering capacity could coexist in each view. The traditional global fusion paradigm ignores the differences in clustering capacity of local structures, which makes it impossible to explore and exploit local complementarity between views. In this paper, a novel deep multi view subspace clustering method based on local fusion is proposed to solve this problem. First, a low rank self-expression layer is inserted into the deep autoencoder to eliminate the influence of noises when obtaining local cluster structure. Then, the fusion object is refined from the entire sample space to the local cluster structure, where a self-weighted strategy is designed to assign contribution weight according to the clustering capacity of the local cluster structure. Meanwhile, we joint orthogonal constraint to enhance the discriminative of local cluster structure that is more suitable for downstream clustering task. Experiments on several real-world datasets show that the proposed method achieves better clustering performance than most traditional multi-view clustering methods based on global fusion.","PeriodicalId":146687,"journal":{"name":"2022 IEEE International Conference on Data Mining Workshops (ICDMW)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Data Mining Workshops (ICDMW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDMW58026.2022.00017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In recent years, a number of multi-view clustering methods have been proposed under a global fusion paradigm. These methods take the entire sample space as the fusion object, where the global complementarity between views is explored and exploited to improve clustering performance. However, local structures with strong or weak clustering capacity can coexist within each view. The traditional global fusion paradigm ignores these differences in the clustering capacity of local structures, which makes it impossible to explore and exploit local complementarity between views. In this paper, a novel deep multi-view subspace clustering method based on local fusion is proposed to solve this problem. First, a low-rank self-expression layer is inserted into the deep autoencoder to eliminate the influence of noise when obtaining the local cluster structure. Then, the fusion object is refined from the entire sample space to the local cluster structure, where a self-weighted strategy is designed to assign contribution weights according to the clustering capacity of each local cluster structure. Meanwhile, we jointly impose an orthogonal constraint to enhance the discriminability of the local cluster structure, making it more suitable for the downstream clustering task. Experiments on several real-world datasets show that the proposed method achieves better clustering performance than most traditional multi-view clustering methods based on global fusion.
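To make the architectural idea in the abstract concrete, the sketch below shows a minimal single-view self-expressive autoencoder with a nuclear-norm (low-rank) penalty on the coefficient matrix and an orthogonality regularizer on the latent features. It is not the authors' implementation: the class and parameter names (SelfExpressiveAE, lambda_lr, lambda_orth) and the specific form of the orthogonality term are assumptions for illustration only.

```python
# Minimal sketch of a self-expressive autoencoder with low-rank and
# orthogonality regularizers (illustrative; not the paper's code).
import torch
import torch.nn as nn

class SelfExpressiveAE(nn.Module):
    def __init__(self, dim_in, dim_hidden, n_samples):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(dim_hidden, dim_in))
        # Self-expression coefficients: each sample is expressed as a
        # combination of the other samples' latent codes.
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, x):
        z = self.encoder(x)                            # latent codes Z, shape (N, d)
        c = self.C - torch.diag(torch.diag(self.C))    # zero the diagonal: no self-representation
        z_se = c @ z                                   # self-expressed latent, Z ~ C Z
        x_rec = self.decoder(z_se)                     # reconstruct the input from C Z
        return z, z_se, x_rec, c

def loss_fn(x, z, z_se, x_rec, c, lambda_se=1.0, lambda_lr=1.0, lambda_orth=0.1):
    rec = ((x - x_rec) ** 2).mean()                    # autoencoder reconstruction error
    se = ((z - z_se) ** 2).mean()                      # self-expression residual ||Z - CZ||^2
    low_rank = torch.linalg.matrix_norm(c, ord='nuc')  # nuclear norm as a convex surrogate for rank
    # One possible reading of the orthogonal constraint (an assumption here):
    # push the normalized latent features toward Z^T Z = I.
    zn = z / (z.norm(dim=0, keepdim=True) + 1e-8)
    orth = ((zn.T @ zn - torch.eye(z.shape[1])) ** 2).mean()
    return rec + lambda_se * se + lambda_lr * low_rank + lambda_orth * orth
```

In the paper's multi-view setting there would be one such self-expressive branch per view, with the view-specific coefficient matrices fused at the level of local cluster structures through the self-weighting strategy; that local fusion step is not reproduced in this sketch.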