Graph contrastive learning with node-level accurate difference

IF 6.2 · CAS Zone 3 (Multidisciplinary) · JCR Q1 (Multidisciplinary)
Pengfei Jiao, Kaiyan Yu, Qing Bao, Ying Jiang, Xuan Guo, Zhidong Zhao
Title: Graph contrastive learning with node-level accurate difference
DOI: 10.1016/j.fmre.2024.06.013
Journal: Fundamental Research, Vol. 5, Issue 2, pp. 818-829
Published: 2025-03-01 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S2667325824003455
Citations: 0

Abstract

Graph contrastive learning (GCL) has attracted extensive research interest due to its powerful ability to capture latent structural and semantic information of graphs in a self-supervised manner. Existing GCL methods commonly adopt predefined graph augmentations to generate two contrastive views. Subsequently, they design a contrastive pretext task between these views with the goal of maximizing their agreement. These methods assume the augmented graph can fully preserve the semantics of the original. However, typical data augmentation strategies in GCL, such as random edge dropping, may alter the properties of the original graph. As a result, previous GCL methods overlooked graph differences, potentially leading to difficulty distinguishing between graphs that are structurally similar but semantically different. Therefore, we argue that it is necessary to design a method that can quantify the dissimilarity between the original and augmented graphs to more accurately capture the relationships between samples. In this work, we propose a novel graph contrastive learning framework, named Accurate Difference-based Node-Level Graph Contrastive Learning (DNGCL), which helps the model distinguish similar graphs with slight differences by learning node-level differences between graphs. Specifically, we train the model to distinguish between original and augmented nodes via a node discriminator and employ cosine dissimilarity to accurately measure the difference between each node. Furthermore, we employ multiple types of data augmentation commonly used in current GCL methods on the original graph, aiming to learn the differences between nodes under different augmentation strategies and help the model learn richer local information. We conduct extensive experiments on six benchmark datasets and the results show that our DNGCL outperforms most state-of-the-art baselines, which strongly validates the effectiveness of our model.
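The two concrete ingredients the abstract names — random edge dropping as a typical GCL augmentation, and cosine dissimilarity as the node-level difference measure — can be sketched minimally in NumPy. The graph, embedding dimensions, and perturbation below are hypothetical stand-ins for illustration, not the paper's actual encoder, discriminator, or datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(edges, p=0.2):
    """Random edge dropping: keep each edge independently with probability 1 - p."""
    mask = rng.random(len(edges)) > p
    return [e for e, keep in zip(edges, mask) if keep]

def cosine_dissimilarity(z1, z2, eps=1e-8):
    """Row-wise 1 - cos(z1_i, z2_i) for paired node embeddings (range [0, 2])."""
    num = np.sum(z1 * z2, axis=1)
    den = np.linalg.norm(z1, axis=1) * np.linalg.norm(z2, axis=1) + eps
    return 1.0 - num / den

# Toy 4-node graph; the augmented view drops edges at random.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
aug_edges = drop_edges(edges, p=0.4)

# Stand-in node embeddings: the augmented view is a slight perturbation
# of the original, mimicking "structurally similar but slightly different".
z_orig = rng.normal(size=(4, 8))
z_aug = z_orig + 0.1 * rng.normal(size=(4, 8))

# One dissimilarity score per node, quantifying how far each augmented
# node drifted from its original counterpart.
d = cosine_dissimilarity(z_orig, z_aug)
```

In this reading, the per-node scores `d` are what lets the framework treat an augmented node as a distinct sample rather than an exact positive, instead of assuming the augmented graph fully preserves the original's semantics.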
Source journal: Fundamental Research (Multidisciplinary)
CiteScore: 4.00
Self-citation rate: 1.60%
Annual publications: 294
Review time: 79 days