Khushnood Abbas, Shi Dong, Alireza Abbasi, Yong Tang
Physica D: Nonlinear Phenomena, Vol. 476, Article 134632. Published 2025-03-17. DOI: 10.1016/j.physd.2025.134632. JCR Q1 (Mathematics, Applied), Impact Factor 2.7.
Cross-domain inductive applications with unsupervised (dynamic) Graph Neural Networks (GNN): Leveraging Siamese GNN and energy-based PMI optimization
Existing work on graph embedding has focused largely on Graph Neural Network (GNN) models designed for transductive settings, meaning a model can only be applied to the specific graph on which it was trained. This restricts the applicability of GNN models, because training them on large graphs is computationally expensive. There is also a significant research gap in cross-domain inductive prediction, where the goal is to train on a smaller graph and generalize to larger graphs or to graphs from different domains. To address these challenges, this study proposes a novel GNN model capable of generating node representations not only within the same domain but also across domains. To this end, we explore state-of-the-art GNN architectures, including Graph Convolutional Networks, Graph Attention Networks, Graph Isomorphism Networks, and Position-Aware Graph Neural Networks. To learn parameters effectively from smaller graphs, we develop a Siamese Graph Neural Network trained with a novel loss function designed specifically for this Siamese setting. To handle real-world sparse graphs efficiently, we provide TensorFlow code optimized for sparse graph operations, substantially reducing space complexity. We evaluate the proposed model on five real-world dynamic graphs: the model is trained on a smaller dataset in an unsupervised manner, and the pre-trained model is then used to generate both inter- and intra-domain node representations. The framework is robust in the sense that any state-of-the-art GNN method can be integrated into the Siamese framework and trained with the proposed hybrid cost function. The implementation code is publicly available online to ensure reproducibility of the model: https://github.com/khushnood/UnsupervisedPretrainedCrossdomainInductiveNodeRepresentationLearning.
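The ingredients described above — a weight-sharing (Siamese) GNN encoder over sparse adjacency matrices, trained with an energy-based, PMI-inspired contrastive objective — can be illustrated with a minimal sketch. This is not the authors' TensorFlow implementation or their exact hybrid cost function; it is a NumPy/SciPy approximation in which the loss pulls embeddings of linked nodes together and pushes sampled non-edges apart, and the same weight matrix `W` is reused across graphs to mimic Siamese weight sharing.

```python
import numpy as np
from scipy import sparse

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in GCN-style propagation.
    A_hat = A + sparse.eye(A.shape[0], format="csr")
    d = np.asarray(A_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sparse.diags(1.0 / np.sqrt(d))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(A_norm, X, W):
    # One sparse message-passing layer with ReLU; sparse @ dense keeps memory low.
    return np.maximum(A_norm @ X @ W, 0.0)

def pmi_style_loss(Z, pos_pairs, neg_pairs):
    # Illustrative energy-based contrastive objective (PMI-inspired, not the
    # paper's hybrid loss): high dot-product "energy" for observed edges,
    # low for sampled non-edges.
    def score(pairs):
        zi, zj = Z[pairs[:, 0]], Z[pairs[:, 1]]
        return np.sum(zi * zj, axis=1)
    sig = lambda s: 1.0 / (1.0 + np.exp(-s))
    pos = -np.log(1e-9 + sig(score(pos_pairs)))
    neg = -np.log(1e-9 + 1.0 - sig(score(neg_pairs)))
    return float(np.mean(pos) + np.mean(neg))

# Siamese usage: the SAME W encodes two different graphs, so gradients from
# both branches would update one shared parameter set.
# Z_a = gcn_layer(normalize_adj(A_small), X_a, W)
# Z_b = gcn_layer(normalize_adj(A_other_domain), X_b, W)
```

Because the encoder's parameters are tied across branches and the loss needs only edges and sampled non-edges (no labels), a model fitted this way on a small graph can, in principle, be applied unchanged to a larger or different-domain graph — the inductive setting the abstract describes.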
Journal overview:
Physica D (Nonlinear Phenomena) publishes research and review articles reporting on experimental and theoretical works, techniques and ideas that advance the understanding of nonlinear phenomena. Topics encompass wave motion in physical, chemical and biological systems; physical or biological phenomena governed by nonlinear field equations, including hydrodynamics and turbulence; pattern formation and cooperative phenomena; instability, bifurcations, chaos, and space-time disorder; integrable/Hamiltonian systems; asymptotic analysis and, more generally, mathematical methods for nonlinear systems.