PGFormer: A Prototype-Graph Transformer for Incomplete Multiview Clustering.

Impact factor: 8.9 · CAS Tier 1, Computer Science · JCR Q1, Computer Science, Artificial Intelligence
Yiming Du, Yao Wang, Ziyu Wang, Rui Ning, Lusi Li
{"title":"PGFormer: A Prototype-Graph Transformer for Incomplete Multiview Clustering.","authors":"Yiming Du,Yao Wang,Ziyu Wang,Rui Ning,Lusi Li","doi":"10.1109/tnnls.2025.3617888","DOIUrl":null,"url":null,"abstract":"Incomplete multiview clustering (IMVC) faces significant challenges due to missing data and inherent view discrepancies. While deep neural networks offer powerful representation learning capabilities for IMVC, existing methods often overlook view diversity and force representations across views to be identical, leading to 1) biased representations with distorted topologies and 2) inaccurate imputation for missing data, ultimately degrading clustering performance. To address these issues, we propose prototype-graph transformer (PGFormer), a novel IMVC framework that integrates prototype assignments, rather than direct representations, to enhance clustering performance. PGFormer leverages view-specific encoders to extract features from available samples in each view, employs a PGFormer designed to refine node embeddings, and reconstructs available samples using these refined embeddings. For each view, PGFormer utilizes a graph convolutional network (GCN) to model node-to-node topologies and generate semantic prototypes from the node embeddings. These view-specific prototypes and embeddings are then refined through dual attention mechanisms: prototype-to-prototype (P2P) self-attention and prototype-to-node (P2N) cross-attention, enabling a thorough exploration of multilevel topological relationships within each view. To address missing data, the cross-prototype imputation (CPI) module leverages the weighted prototype assignments from different views to impute missing samples using refined intraview prototypes. Building on this, the cross-view alignment module calibrates prototype assignments to ensure consistent predictions across views. Extensive experiments demonstrate that PGFormer can achieve superior performance compared with the baselines.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"20 1","pages":""},"PeriodicalIF":8.9000,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3617888","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Incomplete multiview clustering (IMVC) faces significant challenges due to missing data and inherent view discrepancies. While deep neural networks offer powerful representation learning capabilities for IMVC, existing methods often overlook view diversity and force representations across views to be identical, leading to 1) biased representations with distorted topologies and 2) inaccurate imputation for missing data, ultimately degrading clustering performance. To address these issues, we propose prototype-graph transformer (PGFormer), a novel IMVC framework that integrates prototype assignments, rather than direct representations, to enhance clustering performance. PGFormer leverages view-specific encoders to extract features from available samples in each view, employs a PGFormer designed to refine node embeddings, and reconstructs available samples using these refined embeddings. For each view, PGFormer utilizes a graph convolutional network (GCN) to model node-to-node topologies and generate semantic prototypes from the node embeddings. These view-specific prototypes and embeddings are then refined through dual attention mechanisms: prototype-to-prototype (P2P) self-attention and prototype-to-node (P2N) cross-attention, enabling a thorough exploration of multilevel topological relationships within each view. To address missing data, the cross-prototype imputation (CPI) module leverages the weighted prototype assignments from different views to impute missing samples using refined intraview prototypes. Building on this, the cross-view alignment module calibrates prototype assignments to ensure consistent predictions across views. Extensive experiments demonstrate that PGFormer can achieve superior performance compared with the baselines.
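The dual attention step described in the abstract (P2P self-attention over the per-view prototypes, followed by P2N cross-attention between prototypes and node embeddings) can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the module name DualPrototypeAttention, the choice of node embeddings as queries in the P2N pass, and the residual/LayerNorm wiring are all assumptions made for the example.

```python
# Hypothetical sketch of the dual attention described in the abstract:
# P2P self-attention refines the prototypes, then P2N cross-attention
# refines the node embeddings against those prototypes.
import torch
import torch.nn as nn


class DualPrototypeAttention(nn.Module):
    """Refine per-view prototypes and node embeddings with two attention passes."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # P2P: prototypes attend to one another (self-attention).
        self.p2p = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # P2N: node embeddings attend to the refined prototypes (cross-attention).
        self.p2n = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_p = nn.LayerNorm(dim)
        self.norm_n = nn.LayerNorm(dim)

    def forward(self, prototypes: torch.Tensor, nodes: torch.Tensor):
        # prototypes: (B, K, D) semantic prototypes derived from GCN node embeddings
        # nodes:      (B, N, D) node embeddings of the available samples in one view
        refined_p, _ = self.p2p(prototypes, prototypes, prototypes)
        refined_p = self.norm_p(prototypes + refined_p)        # residual (assumed)
        refined_n, _ = self.p2n(nodes, refined_p, refined_p)   # query=nodes, key/value=prototypes (assumed)
        refined_n = self.norm_n(nodes + refined_n)             # residual (assumed)
        return refined_p, refined_n


if __name__ == "__main__":
    view_dim, n_prototypes, n_samples = 64, 10, 32   # illustrative sizes
    block = DualPrototypeAttention(view_dim)
    protos = torch.randn(1, n_prototypes, view_dim)
    embeds = torch.randn(1, n_samples, view_dim)
    p, n = block(protos, embeds)
    print(p.shape, n.shape)  # torch.Size([1, 10, 64]) torch.Size([1, 32, 64])
```

In the full framework, the refined prototypes would additionally feed the cross-prototype imputation (CPI) module, which reconstructs missing samples from weighted combinations of intraview prototypes; the sketch above covers only the attention step.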
Source journal: IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
About the journal: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.