Jintang Bian;Yixiang Lin;Xiaohua Xie;Chang-Dong Wang;Lingxiao Yang;Jian-Huang Lai;Feiping Nie
DOI: 10.1109/TNNLS.2025.3552969
Journal: IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 10422-10436
Published: 2025-04-11 (Journal Article; JCR Q1, Computer Science, Artificial Intelligence)
Code: https://github.com/bianjt-morning/MCMC
Multilevel Contrastive Multiview Clustering With Dual Self-Supervised Learning
Multiview clustering (MVC) aims to integrate multiple related but different views of data to achieve more accurate clustering performance. Contrastive learning has found many applications in MVC due to its success in unsupervised visual representation learning. However, existing contrastive-learning-based MVC methods overlook the potential of high-similarity nearest neighbors as positive pairs. In addition, these methods do not capture the multilevel (i.e., cluster-, instance-, and prototype-level) representational structure that naturally exists in multiview datasets. These limitations can further hinder the structural compactness of the learned multiview representations. To address these issues, we propose a novel end-to-end deep MVC method called multilevel contrastive MVC (MCMC) with dual self-supervised learning (DSL). Specifically, we first treat the nearest neighbors of an object in the latent subspace as positive pairs for the multiview contrastive loss, which improves the compactness of the representation at the instance level. Second, we perform multilevel contrastive learning (MCL) on clusters, instances, and prototypes to capture the multilevel representational structure underlying the multiview data in the latent space. In addition, we learn consistent cluster assignments for MVC by adopting a DSL method that associates the structural representations at different levels. Evaluation experiments show that MCMC achieves intracluster compactness, intercluster separability, and higher clustering accuracy (ACC). Our code is available at https://github.com/bianjt-morning/MCMC.
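The instance-level idea in the abstract (treating a sample's nearest neighbor in the latent space as an additional positive pair in a cross-view contrastive loss) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the single-neighbor choice, and the temperature value are assumptions made for illustration only.

```python
import numpy as np

def nn_contrastive_loss(z1, z2, tau=0.5):
    """Toy instance-level contrastive loss over two views.

    Each sample i in view 1 has two positives in view 2: its own
    counterpart z2[i], and the counterpart of its nearest neighbor
    in view 1's latent space. This is a simplified sketch of the
    nearest-neighbor-as-positive idea, not the MCMC method itself.
    """
    # L2-normalize embeddings so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]

    # Cross-view similarity matrix, temperature-scaled
    sim = z1 @ z2.T / tau

    # Nearest neighbor of each z1[i] among the other samples of view 1
    self_sim = z1 @ z1.T
    np.fill_diagonal(self_sim, -np.inf)  # exclude the sample itself
    nn_idx = self_sim.argmax(axis=1)

    # Row-wise log-softmax over the cross-view similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Average the log-likelihood of the two positives per sample
    idx = np.arange(n)
    loss = -(log_prob[idx, idx] + log_prob[idx, nn_idx]) / 2
    return loss.mean()
```

In a full method, both positives would typically be mined per training step from the encoder's latent subspace, and the loss would be symmetrized over the two views; the sketch keeps one direction for brevity.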
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.