Multiplex Graph Contrastive Learning with Soft Negatives
Zhenhao Zhao, Minhong Zhu, Chen Wang, Sijia Wang, Jiqiang Zhang, Li Chen, Weiran Cai
{"title":"利用软底片进行多重图对比学习","authors":"Zhenhao Zhao, Minhong Zhu, Chen Wang, Sijia Wang, Jiqiang Zhang, Li Chen, Weiran Cai","doi":"arxiv-2409.08010","DOIUrl":null,"url":null,"abstract":"Graph Contrastive Learning (GCL) seeks to learn nodal or graph\nrepresentations that contain maximal consistent information from\ngraph-structured data. While node-level contrasting modes are dominating, some\nefforts commence to explore consistency across different scales. Yet, they tend\nto lose consistent information and be contaminated by disturbing features.\nHere, we introduce MUX-GCL, a novel cross-scale contrastive learning paradigm\nthat utilizes multiplex representations as effective patches. While this\nlearning mode minimizes contaminating noises, a commensurate contrasting\nstrategy using positional affinities further avoids information loss by\ncorrecting false negative pairs across scales. Extensive downstream experiments\ndemonstrate that MUX-GCL yields multiple state-of-the-art results on public\ndatasets. Our theoretical analysis further guarantees the new objective\nfunction as a stricter lower bound of mutual information of raw input features\nand output embeddings, which rationalizes this paradigm. Code is available at\nhttps://github.com/MUX-GCL/Code.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multiplex Graph Contrastive Learning with Soft Negatives\",\"authors\":\"Zhenhao Zhao, Minhong Zhu, Chen Wang, Sijia Wang, Jiqiang Zhang, Li Chen, Weiran Cai\",\"doi\":\"arxiv-2409.08010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph Contrastive Learning (GCL) seeks to learn nodal or graph\\nrepresentations that contain maximal consistent information from\\ngraph-structured data. While node-level contrasting modes are dominating, some\\nefforts commence to explore consistency across different scales. Yet, they tend\\nto lose consistent information and be contaminated by disturbing features.\\nHere, we introduce MUX-GCL, a novel cross-scale contrastive learning paradigm\\nthat utilizes multiplex representations as effective patches. While this\\nlearning mode minimizes contaminating noises, a commensurate contrasting\\nstrategy using positional affinities further avoids information loss by\\ncorrecting false negative pairs across scales. Extensive downstream experiments\\ndemonstrate that MUX-GCL yields multiple state-of-the-art results on public\\ndatasets. Our theoretical analysis further guarantees the new objective\\nfunction as a stricter lower bound of mutual information of raw input features\\nand output embeddings, which rationalizes this paradigm. 
Code is available at\\nhttps://github.com/MUX-GCL/Code.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multiplex Graph Contrastive Learning with Soft Negatives
Graph Contrastive Learning (GCL) seeks to learn node- or graph-level representations that retain maximal consistent information from graph-structured data. While node-level contrasting modes dominate, some efforts have begun to explore consistency across different scales. Yet these approaches tend to lose consistent information and to be contaminated by disturbing features. Here we introduce MUX-GCL, a novel cross-scale contrastive learning paradigm that uses multiplex representations as effective patches. While this learning mode minimizes contaminating noise, a commensurate contrasting strategy based on positional affinities further avoids information loss by correcting false-negative pairs across scales. Extensive downstream experiments demonstrate that MUX-GCL achieves multiple state-of-the-art results on public datasets. Our theoretical analysis further guarantees that the new objective function is a stricter lower bound on the mutual information between raw input features and output embeddings, which justifies this paradigm. Code is available at https://github.com/MUX-GCL/Code.
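
To make the soft-negative idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style objective in which negative pairs are down-weighted by a positional-affinity score, so that likely false negatives are softly discounted rather than fully contrasted. The function name soft_negative_infonce, the affinity matrix, and the exact weighting scheme are illustrative assumptions, not the authors' implementation; the paper's actual cross-scale objective is in the linked repository.

    # Minimal sketch (assumed, not the authors' code): contrastive loss with
    # soft negatives. Each non-positive pair is weighted by (1 - affinity),
    # so pairs judged likely to be true positives (false negatives) are
    # softly removed from the denominator instead of being fully contrasted.
    import torch
    import torch.nn.functional as F

    def soft_negative_infonce(z1: torch.Tensor,
                              z2: torch.Tensor,
                              affinity: torch.Tensor,
                              tau: float = 0.5) -> torch.Tensor:
        """InfoNCE-style loss over two views z1, z2 of shape (N, d).

        affinity[i, j] in [0, 1] estimates how likely node i (view 1) and
        node j (view 2) form a true positive pair, e.g. derived from
        positional information; diagonal entries are the anchored positives.
        """
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        sim = torch.exp(z1 @ z2.t() / tau)   # (N, N) pairwise similarities

        pos = sim.diagonal()                 # positive-pair terms
        weights = 1.0 - affinity             # soft-negative weights
        weights.fill_diagonal_(1.0)          # keep the positive term intact
        denom = (weights * sim).sum(dim=1)

        return -torch.log(pos / denom).mean()

    if __name__ == "__main__":
        N, d = 8, 16
        z1, z2 = torch.randn(N, d), torch.randn(N, d)
        # Placeholder affinities; the paper derives these from positional
        # affinities across scales (uniform random values are a stand-in).
        aff = torch.rand(N, N)
        print(soft_negative_infonce(z1, z2, aff).item())

With affinity set to all zeros, the loss reduces to standard InfoNCE, whose negative is the classical lower bound on mutual information; the soft weighting only removes probable false negatives from the denominator, which is consistent with the abstract's claim of a stricter bound, though the paper's own derivation should be consulted for the precise statement.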