Simplified Graph Contrastive Learning Model Without Augmentation

IF 10.4 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yuena Lin;Gengyu Lyu;Haichun Cai;Deng-Bao Wang;Haobo Wang;Zhen Yang
DOI: 10.1109/TKDE.2025.3590482
Journal: IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 10, pp. 6159-6172
Published: 2025-07-18 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11084849/
Citations: 0

Abstract

Burgeoning graph contrastive learning (GCL) stands out in the graph domain for its low annotation cost and strong performance gains. A GCL pipeline typically comprises three standard components: 1) graph data augmentation (GraphDA), 2) multi-branch graph neural network (GNN) encoders and projection heads, and 3) a contrastive loss. Unfortunately, the diverse GraphDA operations may corrupt graph semantics to varying extents while greatly increasing the time complexity of hyperparameter search. Moreover, the multi-branch contrastive framework incurs considerable training cost for encoding and projecting. In this paper, we propose a simplified GCL model that addresses these problems simultaneously using only the minimal components of a general graph contrastive framework, i.e., a GNN encoder and a projection head. The proposed model treats the node representations generated by the GNN encoder and the projection head as positive pairs and all other representations as negatives, which not only frees the model from its dependency on GraphDA but also streamlines the traditional multi-branch contrastive framework into a more efficient single-branch one. An in-depth theoretical analysis of the objective function explains why the proposed model works. Empirical experiments on multiple public datasets demonstrate that the proposed model remains competitive with current advanced self-supervised GNNs.
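The abstract describes a single-branch objective in which the encoder output and its projection form the positive pair, and every other representation serves as a negative. The paper itself is not reproduced here, so the following NumPy sketch is only our reading of that description: the cosine similarity, the temperature value, and the stand-in tanh projection head are all assumptions, not details from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize rows to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def single_branch_infonce(h, z, tau=0.5):
    """InfoNCE-style loss where (h_i, z_i) is the positive pair and all
    other projected representations z_j (j != i) act as negatives."""
    h, z = l2_normalize(h), l2_normalize(z)
    sim = h @ z.T / tau                          # (n, n) scaled cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # subtract row max for stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

# Toy usage: random "encoder" output and a stand-in projection head.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))    # node representations from a GNN encoder
W = rng.normal(size=(4, 4))    # hypothetical projection-head weights
Z = np.tanh(H @ W)             # projected representations
loss = single_branch_infonce(H, Z)
```

Because both views come from one forward pass (encoder output and its projection), no second augmented branch is encoded, which is where the claimed savings over multi-branch GCL would come from.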
Source Journal

IEEE Transactions on Knowledge and Data Engineering (Engineering: Electrical & Electronic)
CiteScore: 11.70
Self-citation rate: 3.40%
Articles per year: 515
Review time: 6 months
Journal description: The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.