Cluster-graph convolution networks for robust multi-view clustering

IF 7.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Wei Zheng, Xiao-Yuan Jing, Wei Liu, Fei Wu, Changhui Hu, Bo Du
Journal: Knowledge-Based Systems, Volume 327, Article 114163
DOI: 10.1016/j.knosys.2025.114163
Published: 2025-07-25
URL: https://www.sciencedirect.com/science/article/pii/S0950705125012043
Citations: 0

Abstract

Existing deep contrastive representation learning methods for unlabeled multi-view data have shown impressive performance by shrinking the cross-view discrepancy. However, most of these methods focus primarily on extracting common semantics from multiple views, which is only one of the factors affecting the performance of unsupervised multi-view representation learning. Two additional factors are often overlooked: i) how to improve the discriminative ability of the final representations — existing unsupervised approaches typically perform worse on clustering as the number of categories increases; and ii) how to balance the contributions of multiple views (especially in data with more than two views). We observe that the quality of the learned representation is also influenced by particular views, i.e., model precision may decrease when certain views are involved in training. To address these factors, we propose a novel contrastive learning-based method for unlabeled multi-view data, called Cluster-Graph Convolution networks for Robust Multi-view Clustering (CGC-RMC). Specifically, we design a specialized spatial-based cluster-graph convolution and a new adaptive sample-weighting strategy within a contrastive learning framework to tackle the above two factors. Additionally, the proposed method adopts a communication fusion module to mitigate the influence of view-private information in the final view representations. Extensive experiments demonstrate that the proposed method outperforms eleven competitive unsupervised representation learning methods on six multi-view datasets, as measured by the performance of the learned representations on the clustering task.
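The two core ingredients named in the abstract — a spatial-based graph convolution and an adaptive sample-weighting rule — can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the normalization scheme, the exponential-of-negative-distance weighting, and all function names here are illustrative assumptions, shown only to make the general mechanics concrete (spatial graph convolution aggregates each node's neighborhood; sample weighting down-weights samples whose view representation strays from a consensus).

```python
import numpy as np

def spatial_graph_conv(H, A, W):
    """One spatial graph-convolution step (generic sketch, not CGC-RMC itself):
    aggregate neighbor features via a row-normalized adjacency with self-loops,
    then apply a linear transform W and a ReLU.
    H: (n, d) node features; A: (n, n) adjacency; W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)    # node degrees
    A_norm = A_hat / deg                      # row-normalize (mean aggregation)
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

def adaptive_sample_weights(view_reps, consensus):
    """Hypothetical adaptive weighting rule: samples whose view representation
    lies far from the consensus get exponentially smaller (normalized) weight."""
    dists = np.linalg.norm(view_reps - consensus, axis=1)
    w = np.exp(-dists)
    return w / w.sum()

# Toy usage: 3 nodes on a path graph, 2-d features, identity transform.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
out = spatial_graph_conv(H, A, np.eye(2))
weights = adaptive_sample_weights(H, consensus=np.array([0.5, 0.5]))
```

In this sketch, samples near the consensus vector receive larger weights, which mirrors the abstract's intent of suppressing views or samples that would otherwise degrade the learned representation.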
Source journal: Knowledge-Based Systems (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Aims and scope: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.