Unsupervised Graph Representation Learning with Inductive Shallow Node Embedding

Impact Factor: 5.0 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Richárd Kiss, Gábor Szűcs
{"title":"Unsupervised Graph Representation Learning with Inductive Shallow Node Embedding","authors":"Richárd Kiss, Gábor Szűcs","doi":"10.1007/s40747-024-01545-6","DOIUrl":null,"url":null,"abstract":"<p>Network science has witnessed a surge in popularity, driven by the transformative power of node representation learning for diverse applications like social network analysis and biological modeling. While shallow embedding algorithms excel at capturing network structure, they face a critical limitation—failing to generalize to unseen nodes. This paper addresses this challenge by introducing Inductive Shallow Node Embedding—as a main contribution—pioneering a novel approach that extends shallow embeddings to the realm of inductive learning. It has a novel encoder architecture that captures the local neighborhood structure of each node, enabling effective generalization to unseen nodes. In the generalization, robustness is essential to avoid degradation of performance arising from noise in the dataset. It has been theoretically proven that the covariance of the additive noise term in the proposed model is inversely proportional to the cardinality of a node’s neighbors. Another contribution is a mathematical lower bound to quantify the robustness of node embeddings, confirming its advantage over traditional shallow embedding methods, particularly in the presence of parameter noise. The proposed method demonstrably excels in dynamic networks, consistently achieving over 90% performance on previously unseen nodes compared to nodes encountered during training on various benchmarks. The empirical evaluation concludes that our method outperforms competing methods on the vast majority of datasets in both transductive and inductive tasks.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01545-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Network science has witnessed a surge in popularity, driven by the transformative power of node representation learning for diverse applications such as social network analysis and biological modeling. While shallow embedding algorithms excel at capturing network structure, they face a critical limitation: they fail to generalize to unseen nodes. This paper addresses this challenge by introducing Inductive Shallow Node Embedding, its main contribution, a novel approach that extends shallow embeddings to the realm of inductive learning. The method uses a novel encoder architecture that captures the local neighborhood structure of each node, enabling effective generalization to unseen nodes. For such generalization, robustness is essential to avoid the performance degradation caused by noise in the dataset. It is proven theoretically that the covariance of the additive noise term in the proposed model is inversely proportional to the cardinality of a node's neighborhood. A further contribution is a mathematical lower bound that quantifies the robustness of node embeddings, confirming the method's advantage over traditional shallow embedding methods, particularly in the presence of parameter noise. The proposed method excels in dynamic networks, consistently retaining over 90% of its training-node performance on previously unseen nodes across various benchmarks. The empirical evaluation shows that the method outperforms competing approaches on the vast majority of datasets in both transductive and inductive tasks.
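The paper itself is not reproduced on this page, so the sketch below only illustrates the central idea described in the abstract: a shallow, parameter-per-node embedding table combined with an encoder that aggregates a node's local neighborhood, so that an unseen node can be embedded from its neighbors without retraining. The mean aggregator, the `encode` function, and all sizes are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration.
rng = np.random.default_rng(0)
num_train_nodes, dim = 1000, 64

# "Shallow" embedding table: one learned parameter vector per training node
# (random values here stand in for already-trained parameters).
node_params = rng.normal(scale=0.1, size=(num_train_nodes, dim))

def encode(neighbor_ids):
    """Embed a node, possibly unseen, as the mean of its neighbors' parameters.

    If each parameter vector carries i.i.d. additive noise with covariance
    sigma^2 * I, this average carries noise covariance (sigma^2 / |N(v)|) * I,
    i.e. inversely proportional to the neighbor count - the intuition behind
    the robustness claim in the abstract.
    """
    return node_params[neighbor_ids].mean(axis=0)

# A node never seen during training, attached to five training nodes,
# receives an embedding immediately, with no retraining.
z_new = encode([3, 17, 42, 99, 256])
print(z_new.shape)  # (64,)
```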


Source journal
Complex & Intelligent Systems (Computer Science, Artificial Intelligence)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.