TCH: A novel multi-view dimensionality reduction method based on triple contrastive heads

IF 6.0 | CAS Region 1 (Computer Science) | JCR Q1, Computer Science, Artificial Intelligence
Hongjie Zhang, Ruojin Zhou, Siyu Zhao, Ling Jing, Yingyi Chen
{"title":"TCH: A novel multi-view dimensionality reduction method based on triple contrastive heads","authors":"Hongjie Zhang ,&nbsp;Ruojin Zhou ,&nbsp;Siyu Zhao ,&nbsp;Ling Jing ,&nbsp;Yingyi Chen","doi":"10.1016/j.neunet.2025.107459","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-view dimensionality reduction (MvDR) is a potent approach for addressing the high-dimensional challenges in multi-view data. Recently, contrastive learning (CL) has gained considerable attention due to its superior performance. However, most CL-based methods focus on promoting consistency between any two cross views from the perspective of subspace samples, which extract features containing redundant information and fail to capture view-specific discriminative information. In this study, we propose feature- and recovery-level contrastive losses to eliminate redundant information and capture view-specific discriminative information, respectively. Based on this, we construct a novel MvDR method based on triple contrastive heads (TCH). This method combines sample-, feature-, and recovery-level contrastive losses to extract sufficient yet minimal subspace discriminative information in accordance with the information bottleneck principle. Furthermore, the relationship between TCH and mutual information is revealed, which provides the theoretical support for the outstanding performance of our method. Our experiments on five real-world datasets show that the proposed method outperforms existing methods.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107459"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003387","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multi-view dimensionality reduction (MvDR) is a potent approach to the high-dimensionality challenges posed by multi-view data. Recently, contrastive learning (CL) has gained considerable attention due to its superior performance. However, most CL-based methods focus on promoting consistency between pairs of views from the perspective of subspace samples; as a result, they extract features that contain redundant information and fail to capture view-specific discriminative information. In this study, we propose feature- and recovery-level contrastive losses to eliminate redundant information and capture view-specific discriminative information, respectively. Building on these, we construct a novel MvDR method based on triple contrastive heads (TCH), which combines sample-, feature-, and recovery-level contrastive losses to extract sufficient yet minimal subspace discriminative information, in accordance with the information bottleneck principle. Furthermore, we reveal the relationship between TCH and mutual information, which provides theoretical support for the strong performance of our method. Experiments on five real-world datasets show that the proposed method outperforms existing methods.
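
The abstract describes TCH only at a high level, so the exact loss formulations are not reproduced here. As a point of reference, the sketch below shows how generic sample-level (InfoNCE-style) and feature-level (cross-correlation, decorrelation-style) contrastive heads between two view embeddings are commonly implemented in PyTorch. All function names, hyperparameters, and the omission of the recovery-level head are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: generic two-view sample- and feature-level
# contrastive heads. TCH's actual losses (including its recovery-level head)
# are not specified in the abstract; names and defaults below are assumptions.
import torch
import torch.nn.functional as F


def sample_level_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss: matching subspace samples across the two views
    are pulled together, all other samples in the batch are pushed apart."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                  # (N, N) cross-view similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def feature_level_loss(z1, z2, off_diag_weight=5e-3):
    """Feature-decorrelation loss (Barlow Twins-style): aligns corresponding
    feature dimensions across views while suppressing redundancy between
    different dimensions."""
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / n                               # (d, d) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + off_diag_weight * off_diag


# Example: combine the two heads for a mini-batch of two-view embeddings.
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
loss = sample_level_loss(z1, z2) + feature_level_loss(z1, z2)
```

On the mutual-information connection mentioned in the abstract, a standard result (not necessarily the specific relationship derived in the paper) is that the InfoNCE loss lower-bounds mutual information: for a batch of N paired embeddings, I(z1; z2) >= log N - L_InfoNCE.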
Source journal

Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published: 425
Review time: 67 days

Aims and scope: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.