Exploiting the Structure of Two Graphs With Graph Neural Networks

Impact Factor: 3.0 · CAS Region 3 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic)
Victor M. Tenorio; Antonio G. Marques
{"title":"用图神经网络开发二图结构","authors":"Victor M. Tenorio;Antonio G. Marques","doi":"10.1109/TSIPN.2025.3611264","DOIUrl":null,"url":null,"abstract":"As the volume and complexity of modern datasets continue to increase, there is an urgent need to develop deep-learning architectures that can process such data efficiently. Graph neural networks (GNNs) have emerged as a promising solution for unstructured data, often outperforming traditional deep-learning models. However, most existing GNNs are designed for a single graph, which limits their applicability in real-world scenarios where multiple graphs may be involved. To address this limitation, we propose a graph-based architecture for tasks in which two sets of signals exist, each defined on a different graph. We first study the supervised and semi-supervised cases, where the input is a signal on one graph (the <italic>input graph</i>) and the output is a signal on another graph (the <italic>output graph</i>). Our three-block design (i) processes the input graph with a GNN, (ii) applies a latent-space transformation that maps representations from the input to the output graph, and (iii) uses a second GNN that operates on the output graph. Rather than fixing a single implementation for each block, we provide a flexible framework that can be adapted to a variety of problems. The second part of the paper considers a self-supervised setting. Inspired by canonical correlation analysis, we turn our attention to the latent space, seeking informative representations that benefit downstream tasks. By leveraging information from both graphs, the proposed architecture captures richer relationships among entities, leading to improved performance across synthetic and real-world benchmarks. Experiments show consistent gains over conventional deep-learning baselines, highlighting the value of exploiting the two graphs inherent to the task.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"1254-1267"},"PeriodicalIF":3.0000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11168903","citationCount":"0","resultStr":"{\"title\":\"Exploiting the Structure of Two Graphs With Graph Neural Networks\",\"authors\":\"Victor M. Tenorio;Antonio G. Marques\",\"doi\":\"10.1109/TSIPN.2025.3611264\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As the volume and complexity of modern datasets continue to increase, there is an urgent need to develop deep-learning architectures that can process such data efficiently. Graph neural networks (GNNs) have emerged as a promising solution for unstructured data, often outperforming traditional deep-learning models. However, most existing GNNs are designed for a single graph, which limits their applicability in real-world scenarios where multiple graphs may be involved. To address this limitation, we propose a graph-based architecture for tasks in which two sets of signals exist, each defined on a different graph. We first study the supervised and semi-supervised cases, where the input is a signal on one graph (the <italic>input graph</i>) and the output is a signal on another graph (the <italic>output graph</i>). 
Our three-block design (i) processes the input graph with a GNN, (ii) applies a latent-space transformation that maps representations from the input to the output graph, and (iii) uses a second GNN that operates on the output graph. Rather than fixing a single implementation for each block, we provide a flexible framework that can be adapted to a variety of problems. The second part of the paper considers a self-supervised setting. Inspired by canonical correlation analysis, we turn our attention to the latent space, seeking informative representations that benefit downstream tasks. By leveraging information from both graphs, the proposed architecture captures richer relationships among entities, leading to improved performance across synthetic and real-world benchmarks. Experiments show consistent gains over conventional deep-learning baselines, highlighting the value of exploiting the two graphs inherent to the task.\",\"PeriodicalId\":56268,\"journal\":{\"name\":\"IEEE Transactions on Signal and Information Processing over Networks\",\"volume\":\"11 \",\"pages\":\"1254-1267\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11168903\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Signal and Information Processing over Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11168903/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal and Information Processing over Networks","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11168903/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

As the volume and complexity of modern datasets continue to increase, there is an urgent need to develop deep-learning architectures that can process such data efficiently. Graph neural networks (GNNs) have emerged as a promising solution for unstructured data, often outperforming traditional deep-learning models. However, most existing GNNs are designed for a single graph, which limits their applicability in real-world scenarios where multiple graphs may be involved. To address this limitation, we propose a graph-based architecture for tasks in which two sets of signals exist, each defined on a different graph. We first study the supervised and semi-supervised cases, where the input is a signal on one graph (the input graph) and the output is a signal on another graph (the output graph). Our three-block design (i) processes the input graph with a GNN, (ii) applies a latent-space transformation that maps representations from the input to the output graph, and (iii) uses a second GNN that operates on the output graph. Rather than fixing a single implementation for each block, we provide a flexible framework that can be adapted to a variety of problems. The second part of the paper considers a self-supervised setting. Inspired by canonical correlation analysis, we turn our attention to the latent space, seeking informative representations that benefit downstream tasks. By leveraging information from both graphs, the proposed architecture captures richer relationships among entities, leading to improved performance across synthetic and real-world benchmarks. Experiments show consistent gains over conventional deep-learning baselines, highlighting the value of exploiting the two graphs inherent to the task.
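
To make the three-block design more concrete, below is a minimal PyTorch sketch under stated assumptions: dense adjacency matrices, a basic graph-convolution layer of the form relu(A_hat X W), and a simple linear latent map across the node dimension. All names here (TwoGraphGNN, GCNLayer, cca_ssl_loss) and every hyperparameter are illustrative; the paper deliberately leaves each block open to different instantiations, so this is one possible reading of the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn


def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    # Add self-loops and symmetrically normalize: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + torch.eye(A.shape[0])
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    return A_hat * d_inv_sqrt.unsqueeze(0) * d_inv_sqrt.unsqueeze(1)


class GCNLayer(nn.Module):
    """One basic graph-convolution layer: X -> relu(A_hat @ X @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, A_hat: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        return torch.relu(A_hat @ self.lin(X))


class TwoGraphGNN(nn.Module):
    """Sketch of the three-block design: (i) a GNN on the input graph,
    (ii) a learnable latent map from input-graph nodes to output-graph
    nodes, and (iii) a GNN on the output graph. Names and sizes are
    illustrative assumptions, not the paper's implementation."""

    def __init__(self, n_in: int, n_out: int, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.enc = GCNLayer(in_dim, hid_dim)           # block (i)
        self.map = nn.Linear(n_in, n_out, bias=False)  # block (ii): latent-space transform
        self.dec = GCNLayer(hid_dim, out_dim)          # block (iii)

    def forward(self, A_in, A_out, X):
        Z_in = self.enc(A_in, X)       # (n_in, hid_dim) embedding on the input graph
        Z_out = self.map(Z_in.T).T     # (n_out, hid_dim): mix across the node dimension
        return self.dec(A_out, Z_out)  # (n_out, out_dim) signal on the output graph


def cca_ssl_loss(Z1: torch.Tensor, Z2: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """CCA-inspired self-supervised loss for two embeddings with paired rows:
    an invariance term pulls paired rows together, while decorrelation terms
    push each feature covariance toward the identity. A sketch in the spirit
    of the abstract's description, not the paper's exact objective."""
    n = Z1.shape[0]
    Z1 = (Z1 - Z1.mean(0)) / (Z1.std(0) + 1e-8)
    Z2 = (Z2 - Z2.mean(0)) / (Z2.std(0) + 1e-8)
    invariance = ((Z1 - Z2) ** 2).sum() / n
    eye = torch.eye(Z1.shape[1])
    decorrelation = (((Z1.T @ Z1) / n - eye) ** 2).sum() \
                  + (((Z2.T @ Z2) / n - eye) ** 2).sum()
    return invariance + lam * decorrelation


if __name__ == "__main__":
    # Toy usage on random symmetric graphs and random signals.
    torch.manual_seed(0)
    n_in, n_out = 20, 15
    B_in = (torch.rand(n_in, n_in) > 0.7).float()
    B_out = (torch.rand(n_out, n_out) > 0.7).float()
    A_in = normalize_adj(((B_in + B_in.T) > 0).float())
    A_out = normalize_adj(((B_out + B_out.T) > 0).float())
    X = torch.randn(n_in, 8)                        # signal on the input graph
    model = TwoGraphGNN(n_in, n_out, in_dim=8, hid_dim=16, out_dim=4)
    Y_hat = model(A_in, A_out, X)
    print(Y_hat.shape)                              # torch.Size([15, 4])

    # Purely illustrative self-supervised step: align the mapped latent with
    # an embedding computed directly on the output graph from hypothetical
    # output-graph features.
    Z1 = model.map(model.enc(A_in, X).T).T          # (n_out, 16)
    Z2 = GCNLayer(8, 16)(A_out, torch.randn(n_out, 8))
    print(cca_ssl_loss(Z1, Z2))
```

The latent map here acts across the node dimension, which is one simple way to carry a representation from an n_in-node graph to an n_out-node graph; the framework described in the abstract admits other choices for this block, just as the two GNN blocks could use any message-passing architecture.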
Source Journal
IEEE Transactions on Signal and Information Processing over Networks
Category: Computer Science - Computer Networks and Communications
CiteScore: 5.80
Self-citation rate: 12.50%
Articles per year: 56
Journal description: The IEEE Transactions on Signal and Information Processing over Networks publishes high-quality papers that extend the classical notions of processing of signals defined over vector spaces (e.g. time and space) to processing of signals and information (data) defined over networks, potentially dynamically varying. In signal processing over networks, the topology of the network may define structural relationships in the data, or may constrain processing of the data. Topics include distributed algorithms for filtering, detection, estimation, adaptation and learning, model selection, data fusion, and diffusion or evolution of information over such networks, and applications of distributed signal processing.