Multiplex Graph Contrastive Learning with Soft Negatives

Zhenhao Zhao, Minhong Zhu, Chen Wang, Sijia Wang, Jiqiang Zhang, Li Chen, Weiran Cai
{"title":"利用软底片进行多重图对比学习","authors":"Zhenhao Zhao, Minhong Zhu, Chen Wang, Sijia Wang, Jiqiang Zhang, Li Chen, Weiran Cai","doi":"arxiv-2409.08010","DOIUrl":null,"url":null,"abstract":"Graph Contrastive Learning (GCL) seeks to learn nodal or graph\nrepresentations that contain maximal consistent information from\ngraph-structured data. While node-level contrasting modes are dominating, some\nefforts commence to explore consistency across different scales. Yet, they tend\nto lose consistent information and be contaminated by disturbing features.\nHere, we introduce MUX-GCL, a novel cross-scale contrastive learning paradigm\nthat utilizes multiplex representations as effective patches. While this\nlearning mode minimizes contaminating noises, a commensurate contrasting\nstrategy using positional affinities further avoids information loss by\ncorrecting false negative pairs across scales. Extensive downstream experiments\ndemonstrate that MUX-GCL yields multiple state-of-the-art results on public\ndatasets. Our theoretical analysis further guarantees the new objective\nfunction as a stricter lower bound of mutual information of raw input features\nand output embeddings, which rationalizes this paradigm. Code is available at\nhttps://github.com/MUX-GCL/Code.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multiplex Graph Contrastive Learning with Soft Negatives\",\"authors\":\"Zhenhao Zhao, Minhong Zhu, Chen Wang, Sijia Wang, Jiqiang Zhang, Li Chen, Weiran Cai\",\"doi\":\"arxiv-2409.08010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph Contrastive Learning (GCL) seeks to learn nodal or graph\\nrepresentations that contain maximal consistent information from\\ngraph-structured data. While node-level contrasting modes are dominating, some\\nefforts commence to explore consistency across different scales. Yet, they tend\\nto lose consistent information and be contaminated by disturbing features.\\nHere, we introduce MUX-GCL, a novel cross-scale contrastive learning paradigm\\nthat utilizes multiplex representations as effective patches. While this\\nlearning mode minimizes contaminating noises, a commensurate contrasting\\nstrategy using positional affinities further avoids information loss by\\ncorrecting false negative pairs across scales. Extensive downstream experiments\\ndemonstrate that MUX-GCL yields multiple state-of-the-art results on public\\ndatasets. Our theoretical analysis further guarantees the new objective\\nfunction as a stricter lower bound of mutual information of raw input features\\nand output embeddings, which rationalizes this paradigm. 
Code is available at\\nhttps://github.com/MUX-GCL/Code.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Graph Contrastive Learning (GCL) seeks to learn nodal or graph representations that contain maximal consistent information from graph-structured data. While node-level contrasting modes are dominating, some efforts commence to explore consistency across different scales. Yet, they tend to lose consistent information and be contaminated by disturbing features. Here, we introduce MUX-GCL, a novel cross-scale contrastive learning paradigm that utilizes multiplex representations as effective patches. While this learning mode minimizes contaminating noises, a commensurate contrasting strategy using positional affinities further avoids information loss by correcting false negative pairs across scales. Extensive downstream experiments demonstrate that MUX-GCL yields multiple state-of-the-art results on public datasets. Our theoretical analysis further guarantees the new objective function as a stricter lower bound of mutual information of raw input features and output embeddings, which rationalizes this paradigm. Code is available at https://github.com/MUX-GCL/Code.
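To make the soft-negative idea in the abstract concrete, below is a minimal, hypothetical sketch in Python/PyTorch: an InfoNCE-style loss in which candidate negatives are down-weighted by a positional affinity score, so that likely false negatives are only softly repelled rather than fully pushed apart. This is not the authors' implementation (see https://github.com/MUX-GCL/Code for that); the affinity matrix, the (1 - affinity) weighting rule, and the temperature are assumptions made purely for illustration.

# Hypothetical sketch of a soft-negative contrastive loss; not the MUX-GCL code.
import torch
import torch.nn.functional as F


def soft_negative_infonce(z1: torch.Tensor,
                          z2: torch.Tensor,
                          affinity: torch.Tensor,
                          tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss over two views z1, z2 of shape (N, d).

    affinity: (N, N) matrix in [0, 1]; affinity[i, j] estimates how likely
    nodes i and j share semantics (e.g. derived from graph proximity).
    High-affinity pairs are treated as soft negatives: their contribution
    to the denominator is scaled down instead of being fully repelled.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Cross-view similarity logits and their exponentials.
    sim = z1 @ z2.t() / tau                  # (N, N)
    exp_sim = torch.exp(sim)

    # Down-weight each candidate negative by (1 - affinity); the diagonal
    # (the true positive pair) always keeps weight 1.
    weights = 1.0 - affinity
    weights.fill_diagonal_(1.0)

    pos = exp_sim.diagonal()                 # positives: z1_i vs. z2_i
    denom = (weights * exp_sim).sum(dim=1)   # soft-weighted denominator
    return -torch.log(pos / denom).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 8, 16
    z1, z2 = torch.randn(n, d), torch.randn(n, d)
    affinity = torch.rand(n, n)              # placeholder affinities
    print(soft_negative_infonce(z1, z2, affinity).item())

With the affinity matrix set to all zeros, this sketch reduces to a standard cross-view InfoNCE loss, which is known to yield a lower bound on the mutual information between the contrasted representations; the abstract's claim concerns a stricter bound of this kind for the paper's specific objective.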