Beyond Weisfeiler–Lehman with Local Ego-Network Encodings

Impact Factor: 4.0 · JCR Quartile: Q2 (Computer Science, Artificial Intelligence)
Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicenç Gómez
{"title":"超越Weisfeiler-Lehman与局部自我网络编码","authors":"Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicenç Gómez","doi":"10.3390/make5040063","DOIUrl":null,"url":null,"abstract":"Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit structural information encoded in graph data. This work shows that ego networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler–Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego networks into sparse vectors that enrich message passing (MP) graph neural networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on nine GNN architectures and six graph machine learning tasks.","PeriodicalId":93033,"journal":{"name":"Machine learning and knowledge extraction","volume":"73 1","pages":"0"},"PeriodicalIF":4.0000,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Beyond Weisfeiler–Lehman with Local Ego-Network Encodings\",\"authors\":\"Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicenç Gómez\",\"doi\":\"10.3390/make5040063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit structural information encoded in graph data. This work shows that ego networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler–Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego networks into sparse vectors that enrich message passing (MP) graph neural networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGEL and 1-WL, and characterize its expressive power and limitations. 
Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on nine GNN architectures and six graph machine learning tasks.\",\"PeriodicalId\":93033,\"journal\":{\"name\":\"Machine learning and knowledge extraction\",\"volume\":\"73 1\",\"pages\":\"0\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2023-09-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Machine learning and knowledge extraction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/make5040063\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning and knowledge extraction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/make5040063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit structural information encoded in graph data. This work shows that ego networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler–Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego networks into sparse vectors that enrich message passing (MP) graph neural networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on nine GNN architectures and six graph machine learning tasks.
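The abstract describes IGEL only at a high level. As a rough illustration (not the authors' implementation), the sketch below assumes the encoding counts (distance-from-center, degree) pairs inside each node's alpha-hop ego network and flattens the counts into a fixed-size vector; the depth `alpha`, the `max_degree` cap, and the function names `ego_network_encoding` and `augment_node_features` are hypothetical choices made for this example.

```python
# Hypothetical sketch of an ego-network structural encoding in the spirit of IGEL.
# Not the reference implementation: the (distance, degree) counting scheme and all
# names below are assumptions made for illustration only.
from collections import Counter

import networkx as nx
import numpy as np


def ego_network_encoding(graph: nx.Graph, node, alpha: int = 2, max_degree: int = 32) -> np.ndarray:
    """Encode `node` by the histogram of (distance-from-center, degree) pairs
    observed inside its alpha-hop ego network (degrees capped at max_degree)."""
    ego = nx.ego_graph(graph, node, radius=alpha)
    dist = nx.single_source_shortest_path_length(ego, node)
    counts = Counter((dist[v], min(ego.degree(v), max_degree)) for v in ego.nodes)
    # Flatten the sparse histogram into a fixed-size dense vector.
    vec = np.zeros((alpha + 1) * (max_degree + 1), dtype=np.float32)
    for (d, deg), c in counts.items():
        vec[d * (max_degree + 1) + deg] = c
    return vec


def augment_node_features(graph: nx.Graph, features: np.ndarray, alpha: int = 2) -> np.ndarray:
    """Concatenate the structural encoding to the existing node features so that a
    downstream message-passing GNN sees both."""
    encodings = np.stack([ego_network_encoding(graph, v, alpha) for v in graph.nodes])
    return np.concatenate([features, encodings], axis=1)


if __name__ == "__main__":
    # Classic pair that 1-WL cannot distinguish: a 6-cycle vs. two disjoint triangles.
    c6 = nx.cycle_graph(6)
    triangles = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
    enc_c6 = {tuple(ego_network_encoding(c6, v)) for v in c6.nodes}
    enc_tri = {tuple(ego_network_encoding(triangles, v)) for v in triangles.nodes}
    print(enc_c6 == enc_tri)  # False: the ego-network encodings separate the two graphs

    # Augmenting dummy node features before feeding a GNN.
    x = np.ones((c6.number_of_nodes(), 4), dtype=np.float32)
    print(augment_node_features(c6, x).shape)  # (6, 4 + 3 * 33) = (6, 103)
```

The toy check uses the standard pair that 1-WL cannot tell apart: a 6-cycle and two disjoint triangles are both 2-regular, so color refinement assigns every node the same color, while the 2-hop ego networks (a 5-node path versus a triangle) yield different encodings. In a full pipeline, the encoding would be concatenated with the input node features, as in `augment_node_features`, before running the message-passing GNN.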
Source journal
CiteScore: 6.30
Self-citation rate: 0.00%
Review time: 7 weeks