Cross-domain inductive applications with unsupervised (dynamic) Graph Neural Networks (GNN): Leveraging Siamese GNN and energy-based PMI optimization

Impact Factor: 2.7 · CAS Tier 3 (Mathematics) · JCR Q1, MATHEMATICS, APPLIED
Khushnood Abbas, Shi Dong, Alireza Abbasi, Yong Tang
DOI: 10.1016/j.physd.2025.134632
Journal: Physica D: Nonlinear Phenomena, Volume 476, Article 134632
Published: 2025-03-17
Citations: 0

Abstract

The existing body of work in graph embedding has primarily focused on Graph Neural Network (GNN) models designed for transductive settings, meaning these models can only be applied to the specific graph on which they were trained. This restricts the applicability of GNN models, as training them on large graphs is computationally expensive. Additionally, there is a significant research gap in applying these models to cross-domain inductive prediction, where the goal is to train on a smaller graph and generalize to larger graphs or graphs from different domains. To address these challenges, this study proposes a novel GNN model capable of generating node representations not only within the same domain but also across different domains. To achieve this, we explored state-of-the-art Graph Neural Networks (GNNs), including Graph Convolutional Networks, Graph Attention Networks, Graph Isomorphism Networks, and Position-Aware Graph Neural Networks. Furthermore, to learn parameters effectively from smaller graphs, we developed a Siamese Graph Neural Network trained with a novel loss function designed specifically for graph Siamese networks. Additionally, to handle real-world sparse graphs efficiently, we provide TensorFlow code optimized for sparse graph operations, significantly reducing space complexity. To evaluate the performance of the proposed model, we used five real-world dynamic graphs. The model was trained on a smaller dataset in an unsupervised manner, and the pre-trained model was then used to generate both inter- and intra-domain graph node representations. The framework demonstrates robustness, as any state-of-the-art GNN method can be integrated into the Siamese neural network framework to learn parameters using the proposed hybrid cost function. The implementation code is publicly available online to ensure reproducibility of the model: https://github.com/khushnood/UnsupervisedPretrainedCrossdomainInductiveNodeRepresentationLearning.
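The abstract combines three ingredients: a GNN encoder with weights shared across the two branches of a Siamese network, sparse-tensor message passing in TensorFlow, and an unsupervised PMI-style contrastive objective. The following minimal sketch illustrates how these pieces fit together; the one-layer GCN encoder, the exact loss form, and all names (`SharedGCNEncoder`, `pmi_style_loss`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import tensorflow as tf

class SharedGCNEncoder(tf.keras.Model):
    """One GCN layer over a sparse adjacency: H = ReLU(A_hat @ (X @ W)).
    Using the same instance for both Siamese branches shares its weights."""
    def __init__(self, out_dim):
        super().__init__()
        self.dense = tf.keras.layers.Dense(out_dim, use_bias=False)

    def call(self, a_hat_sparse, x):
        # Sparse-dense product keeps memory at O(|E|) rather than O(|V|^2).
        return tf.nn.relu(
            tf.sparse.sparse_dense_matmul(a_hat_sparse, self.dense(x)))

def pmi_style_loss(z, pos_pairs, neg_pairs):
    """Contrastive objective in the spirit of PMI optimization: pull
    embeddings of co-occurring (positive) node pairs together, push
    sampled negative pairs apart."""
    zi_p = tf.gather(z, pos_pairs[:, 0])
    zj_p = tf.gather(z, pos_pairs[:, 1])
    zi_n = tf.gather(z, neg_pairs[:, 0])
    zj_n = tf.gather(z, neg_pairs[:, 1])
    pos = tf.reduce_sum(zi_p * zj_p, axis=1)   # similarity of positives
    neg = tf.reduce_sum(zi_n * zj_n, axis=1)   # similarity of negatives
    return (-tf.reduce_mean(tf.math.log_sigmoid(pos))
            - tf.reduce_mean(tf.math.log_sigmoid(-neg)))

# Toy graph: 4 nodes on a path, normalized adjacency as a SparseTensor.
edges = np.array([[0, 1], [1, 0], [1, 2], [2, 1], [2, 3], [3, 2]],
                 dtype=np.int64)
a_hat = tf.sparse.reorder(tf.sparse.SparseTensor(
    indices=edges, values=tf.ones(len(edges)) / 2.0, dense_shape=[4, 4]))
x = tf.random.normal([4, 8])                  # toy node features

enc = SharedGCNEncoder(out_dim=16)            # shared encoder = both branches
z = enc(a_hat, x)
loss = pmi_style_loss(z,
                      pos_pairs=tf.constant([[0, 1], [1, 2]]),
                      neg_pairs=tf.constant([[0, 3]]))
print(z.shape, float(loss))
```

Because the encoder is inductive (its weights do not depend on node identity), the same pretrained `enc` can be called on a sparse adjacency and feature matrix from a different, larger graph to produce cross-domain node representations.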
Source journal: Physica D: Nonlinear Phenomena (Physics: Mathematical Physics)
CiteScore: 7.30
Self-citation rate: 7.50%
Annual articles: 213
Review time: 65 days
Journal description: Physica D (Nonlinear Phenomena) publishes research and review articles reporting on experimental and theoretical works, techniques and ideas that advance the understanding of nonlinear phenomena. Topics encompass wave motion in physical, chemical and biological systems; physical or biological phenomena governed by nonlinear field equations, including hydrodynamics and turbulence; pattern formation and cooperative phenomena; instability, bifurcations, chaos, and space-time disorder; integrable/Hamiltonian systems; asymptotic analysis and, more generally, mathematical methods for nonlinear systems.