Network-to-Network: Self-Supervised Network Representation Learning via Position Prediction

IF 8.9 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Jie Liu;Chunhai Zhang;Zhicheng He;Wenzheng Zhang;Na Li
{"title":"网络对网络:基于位置预测的自监督网络表示学习","authors":"Jie Liu;Chunhai Zhang;Zhicheng He;Wenzheng Zhang;Na Li","doi":"10.1109/TKDE.2024.3493391","DOIUrl":null,"url":null,"abstract":"Network Representation Learning (NRL) has achieved remarkable success in learning low-dimensional representations for network nodes. However, most NRL methods, including Graph Neural Networks (GNNs) and their variants, face critical challenges. First, labeled network data, which are required for training most GNNs, are expensive to obtain. Second, existing methods are sub-optimal in preserving comprehensive topological information, including structural and positional information. Finally, most GNN approaches ignore the rich node content information. To address these challenges, we propose a self-supervised Network-to-Network framework (Net2Net) to learn semantically meaningful node representations. Our framework employs a pretext task of node position prediction (PosPredict) to effectively fuse the topological and content knowledge into low-dimensional embeddings for every node in a semi-supervised manner. Specifically, we regard a network as node content and position networks, where Net2Net aims to learn the mapping between them. We utilize a multi-layer recursively composable encoder to integrate the content and topological knowledge into the egocentric network node embeddings. Furthermore, we design a cross-modal decoder to map the egocentric node embeddings into their node position identities (PosIDs) in the node position network. Extensive experiments on eight diverse networks demonstrate the superiority of Net2Net over comparable methods.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 3","pages":"1354-1365"},"PeriodicalIF":8.9000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Network-to-Network: Self-Supervised Network Representation Learning via Position Prediction\",\"authors\":\"Jie Liu;Chunhai Zhang;Zhicheng He;Wenzheng Zhang;Na Li\",\"doi\":\"10.1109/TKDE.2024.3493391\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Network Representation Learning (NRL) has achieved remarkable success in learning low-dimensional representations for network nodes. However, most NRL methods, including Graph Neural Networks (GNNs) and their variants, face critical challenges. First, labeled network data, which are required for training most GNNs, are expensive to obtain. Second, existing methods are sub-optimal in preserving comprehensive topological information, including structural and positional information. Finally, most GNN approaches ignore the rich node content information. To address these challenges, we propose a self-supervised Network-to-Network framework (Net2Net) to learn semantically meaningful node representations. Our framework employs a pretext task of node position prediction (PosPredict) to effectively fuse the topological and content knowledge into low-dimensional embeddings for every node in a semi-supervised manner. Specifically, we regard a network as node content and position networks, where Net2Net aims to learn the mapping between them. We utilize a multi-layer recursively composable encoder to integrate the content and topological knowledge into the egocentric network node embeddings. 
Furthermore, we design a cross-modal decoder to map the egocentric node embeddings into their node position identities (PosIDs) in the node position network. Extensive experiments on eight diverse networks demonstrate the superiority of Net2Net over comparable methods.\",\"PeriodicalId\":13496,\"journal\":{\"name\":\"IEEE Transactions on Knowledge and Data Engineering\",\"volume\":\"37 3\",\"pages\":\"1354-1365\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-01-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Knowledge and Data Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10855165/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Knowledge and Data Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10855165/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Network Representation Learning (NRL) has achieved remarkable success in learning low-dimensional representations for network nodes. However, most NRL methods, including Graph Neural Networks (GNNs) and their variants, face critical challenges. First, labeled network data, which are required for training most GNNs, are expensive to obtain. Second, existing methods are sub-optimal in preserving comprehensive topological information, including structural and positional information. Finally, most GNN approaches ignore the rich node content information. To address these challenges, we propose a self-supervised Network-to-Network framework (Net2Net) to learn semantically meaningful node representations. Our framework employs a pretext task of node position prediction (PosPredict) to effectively fuse the topological and content knowledge into low-dimensional embeddings for every node in a semi-supervised manner. Specifically, we regard a network as node content and position networks, where Net2Net aims to learn the mapping between them. We utilize a multi-layer recursively composable encoder to integrate the content and topological knowledge into the egocentric network node embeddings. Furthermore, we design a cross-modal decoder to map the egocentric node embeddings into their node position identities (PosIDs) in the node position network. Extensive experiments on eight diverse networks demonstrate the superiority of Net2Net over comparable methods.
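As a rough illustration of the PosPredict pretext task described above, the sketch below pairs a one-hop content/structure encoder with a position-identity classifier trained by cross-entropy. The layer shapes, the single aggregation hop, and the way PosIDs are assigned (random placeholders here) are illustrative assumptions, not the paper's actual Net2Net encoder or cross-modal decoder.

```python
# Minimal sketch of a position-prediction pretext task (assumptions, not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EgoEncoder(nn.Module):
    """Fuses node content with neighborhood structure via one aggregation hop."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.self_fc = nn.Linear(in_dim, hid_dim)
        self.neigh_fc = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: node content features (N x in_dim); adj: row-normalized adjacency (N x N)
        neigh = adj @ x                       # average neighbor content
        return F.relu(self.self_fc(x) + self.neigh_fc(neigh))

class PosDecoder(nn.Module):
    """Maps node embeddings to position-identity logits (stand-in for the cross-modal decoder)."""
    def __init__(self, hid_dim, num_pos_ids):
        super().__init__()
        self.fc = nn.Linear(hid_dim, num_pos_ids)

    def forward(self, z):
        return self.fc(z)

# Toy data: 6 nodes, 8-dim content; PosIDs here are random placeholders
# (in practice they would come from the node position network).
N, D, H, P = 6, 8, 16, 3
x = torch.randn(N, D)
adj = torch.rand(N, N)
adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalize the adjacency
pos_ids = torch.randint(0, P, (N,))           # placeholder position identities

encoder, decoder = EgoEncoder(D, H), PosDecoder(H, P)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for step in range(50):                        # self-supervised pretext training loop
    logits = decoder(encoder(x, adj))
    loss = F.cross_entropy(logits, pos_ids)   # predict each node's PosID
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After pretext training, the encoder's outputs would serve as general-purpose node embeddings for downstream tasks; the cross-entropy target here simply forces them to retain positional information alongside content.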
Source Journal
IEEE Transactions on Knowledge and Data Engineering
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 11.70
Self-citation rate: 3.40%
Articles published: 515
Review time: 6 months
Journal introduction: The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.