{"title":"Exploiting the Structure of Two Graphs With Graph Neural Networks","authors":"Victor M. Tenorio;Antonio G. Marques","doi":"10.1109/TSIPN.2025.3611264","DOIUrl":null,"url":null,"abstract":"As the volume and complexity of modern datasets continue to increase, there is an urgent need to develop deep-learning architectures that can process such data efficiently. Graph neural networks (GNNs) have emerged as a promising solution for unstructured data, often outperforming traditional deep-learning models. However, most existing GNNs are designed for a single graph, which limits their applicability in real-world scenarios where multiple graphs may be involved. To address this limitation, we propose a graph-based architecture for tasks in which two sets of signals exist, each defined on a different graph. We first study the supervised and semi-supervised cases, where the input is a signal on one graph (the <italic>input graph</i>) and the output is a signal on another graph (the <italic>output graph</i>). Our three-block design (i) processes the input graph with a GNN, (ii) applies a latent-space transformation that maps representations from the input to the output graph, and (iii) uses a second GNN that operates on the output graph. Rather than fixing a single implementation for each block, we provide a flexible framework that can be adapted to a variety of problems. The second part of the paper considers a self-supervised setting. Inspired by canonical correlation analysis, we turn our attention to the latent space, seeking informative representations that benefit downstream tasks. By leveraging information from both graphs, the proposed architecture captures richer relationships among entities, leading to improved performance across synthetic and real-world benchmarks. Experiments show consistent gains over conventional deep-learning baselines, highlighting the value of exploiting the two graphs inherent to the task.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"1254-1267"},"PeriodicalIF":3.0000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11168903","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal and Information Processing over Networks","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11168903/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
As the volume and complexity of modern datasets continue to increase, there is an urgent need to develop deep-learning architectures that can process such data efficiently. Graph neural networks (GNNs) have emerged as a promising solution for unstructured data, often outperforming traditional deep-learning models. However, most existing GNNs are designed for a single graph, which limits their applicability in real-world scenarios where multiple graphs may be involved. To address this limitation, we propose a graph-based architecture for tasks in which two sets of signals exist, each defined on a different graph. We first study the supervised and semi-supervised cases, where the input is a signal on one graph (the input graph) and the output is a signal on another graph (the output graph). Our three-block design (i) processes the input graph with a GNN, (ii) applies a latent-space transformation that maps representations from the input to the output graph, and (iii) uses a second GNN that operates on the output graph. Rather than fixing a single implementation for each block, we provide a flexible framework that can be adapted to a variety of problems. The second part of the paper considers a self-supervised setting. Inspired by canonical correlation analysis, we turn our attention to the latent space, seeking informative representations that benefit downstream tasks. By leveraging information from both graphs, the proposed architecture captures richer relationships among entities, leading to improved performance across synthetic and real-world benchmarks. Experiments show consistent gains over conventional deep-learning baselines, highlighting the value of exploiting the two graphs inherent to the task.
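Below is a minimal, self-contained sketch in plain PyTorch of the two ideas the abstract describes: the three-block design (a GNN on the input graph, a latent-space map between the two node sets, and a GNN on the output graph) and a CCA-inspired self-supervised loss. This is not the authors' implementation; the names (`TwoGraphNet`, `SimpleGNNLayer`, `cca_style_loss`), the one-hop convolution, the linear latent map, and all dimensions are illustrative assumptions, since the paper deliberately leaves each block's implementation flexible.

```python
# Minimal sketch, not the authors' code. Graphs enter as dense
# (normalized) adjacency matrices; all names and sizes are assumptions.
import torch
import torch.nn as nn


class SimpleGNNLayer(nn.Module):
    """One-hop graph convolution: H' = relu(A @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, A, H):
        return torch.relu(self.lin(A @ H))


class TwoGraphNet(nn.Module):
    """Three-block design: (i) GNN on the input graph, (ii) latent map
    from input-graph nodes to output-graph nodes, (iii) GNN on the
    output graph. Here block (ii) is a single learnable linear map
    between the two node sets, one of many possible choices."""

    def __init__(self, n_in, n_out, in_dim, hid_dim, out_dim):
        super().__init__()
        self.gnn_in = SimpleGNNLayer(in_dim, hid_dim)          # block (i)
        self.latent_map = nn.Linear(n_in, n_out, bias=False)   # block (ii)
        self.gnn_out = SimpleGNNLayer(hid_dim, out_dim)        # block (iii)

    def forward(self, A_in, A_out, X):
        Z_in = self.gnn_in(A_in, X)        # (n_in, hid_dim)
        Z_out = self.latent_map(Z_in.T).T  # (n_out, hid_dim): map node sets
        return self.gnn_out(A_out, Z_out)  # (n_out, out_dim)


def cca_style_loss(Z1, Z2, lam=1e-3):
    """CCA-inspired self-supervised objective (an assumption patterned
    on CCA-style SSL losses): align two latent views, and push each
    view's feature covariance toward the identity to avoid collapse."""
    Z1 = (Z1 - Z1.mean(0)) / (Z1.std(0) + 1e-8)
    Z2 = (Z2 - Z2.mean(0)) / (Z2.std(0) + 1e-8)
    n, d = Z1.shape
    invariance = ((Z1 - Z2) ** 2).sum() / n
    eye = torch.eye(d, device=Z1.device)
    decorrelation = (((Z1.T @ Z1) / n - eye) ** 2).sum() \
                  + (((Z2.T @ Z2) / n - eye) ** 2).sum()
    return invariance + lam * decorrelation


# Toy usage on random graphs and signals, purely illustrative.
n_in, n_out = 20, 15
A_in = torch.rand(n_in, n_in); A_in = (A_in + A_in.T) / 2
A_out = torch.rand(n_out, n_out); A_out = (A_out + A_out.T) / 2
X = torch.randn(n_in, 8)
model = TwoGraphNet(n_in, n_out, in_dim=8, hid_dim=16, out_dim=4)
Y = model(A_in, A_out, X)  # predicted signal on the output graph
loss = cca_style_loss(Y, model(A_in, A_out, X + 0.1 * torch.randn_like(X)))
```

In the toy usage, the two views fed to the loss come from a clean and a noise-perturbed forward pass; in the paper's setting the views would instead be tied to the two graphs, but any pair of equally shaped latent representations fits this sketch.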
About the journal
The IEEE Transactions on Signal and Information Processing over Networks publishes high-quality papers that extend the classical notions of processing of signals defined over vector spaces (e.g., time and space) to the processing of signals and information (data) defined over networks, which may be dynamically varying. In signal processing over networks, the topology of the network may define structural relationships in the data, or may constrain processing of the data. Topics include distributed algorithms for filtering, detection, estimation, adaptation and learning, model selection, data fusion, and diffusion or evolution of information over such networks, as well as applications of distributed signal processing.