Bridging distribution gaps: invariant pattern discovery for dynamic graph learning

Yucheng Jin, Maoyi Wang, Yun Xiong, Zhizhou Ren, Cuiying Huo, Feng Zhu, Jiawei Zhang, Guangzhong Wang, Haoran Chen
DOI: 10.1007/s11280-024-01283-2
Journal: World Wide Web
Publication date: 2024-07-02 (Journal Article)
Citations: 0

Abstract



Temporal graph networks (TGNs) have been proposed to facilitate learning on dynamic graphs which are composed of interaction events among nodes. However, existing TGNs suffer from poor generalization under distribution shifts that occur over time. It is vital to discover invariant patterns with stable predictive power across various distributions to improve the generalization ability. Invariant pattern discovery on dynamic graphs is non-trivial, as long-term history of interaction events is compressed into the memory by TGNs in an entangled way, making invariant pattern discovery difficult. Furthermore, TGNs process interaction events chronologically in batches to obtain up-to-date representations. Each batch consisting of chronologically-close events lacks diversity for identifying invariance under distribution shifts. To tackle these challenges, we propose a novel method called Smile, which stands for Structural teMporal Invariant LEarning. Specifically, we first propose the disentangled graph memory network, which selectively extracts pattern information from long-term history through the disentangled memory gating and attention network. The interaction history approximator is further introduced to provide diverse interaction distributions efficiently. Smile guarantees prediction stability under diverse temporal-dynamic distributions by regularizing invariance under cross-time distribution interventions. Experimental results on real-world datasets demonstrate that Smile outperforms baselines, yielding substantial performance improvements.
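The abstract describes a disentangled graph memory network whose "disentangled memory gating" selectively writes event history into separate memory channels. The paper's actual architecture is not given here; as a minimal illustrative sketch (all shapes, names, and the scalar-gate design are our assumptions), a per-channel gated memory write might look like:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def disentangled_memory_update(memory, message, gate_weights, gate_biases):
    """One gated write of an interaction event into a K-channel node memory.

    memory:       list of K channel vectors, each of length d
    message:      encoded interaction event, length d
    gate_weights: K weight vectors, each of length 2*d (hypothetical shapes)
    gate_biases:  K scalars

    Each channel computes its own scalar gate from (channel memory, message),
    so different channels can absorb different aspects of the event history
    instead of compressing everything into one entangled state.
    """
    updated = []
    for mem_k, w_k, b_k in zip(memory, gate_weights, gate_biases):
        gate_input = mem_k + message  # concatenate channel memory and event message
        g = sigmoid(sum(w * x for w, x in zip(w_k, gate_input)) + b_k)
        # convex combination: keep (1 - g) of the old channel, write g of the event
        updated.append([(1.0 - g) * m + g * e for m, e in zip(mem_k, message)])
    return updated
```

Because each channel keeps its own gate, a channel can learn to ignore transient (variant) events while another tracks them, which is the intuition behind separating invariant from variant pattern information.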

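The abstract also states that Smile "regularizes invariance under cross-time distribution interventions." One common way to instantiate such a penalty (a V-REx-style variance-of-risks term over time-sliced environments; the paper's exact objective may differ) is:

```python
def invariance_penalty(env_risks):
    """Population variance of per-environment risks: zero exactly when the
    model performs identically under every intervened distribution."""
    mean = sum(env_risks) / len(env_risks)
    return sum((r - mean) ** 2 for r in env_risks) / len(env_risks)

def regularized_objective(env_risks, lam=1.0):
    """Average risk plus lam times the invariance penalty. `lam` is a
    hypothetical trade-off hyperparameter, not taken from the paper."""
    mean = sum(env_risks) / len(env_risks)
    return mean + lam * invariance_penalty(env_risks)
```

Under this kind of objective, a predictor that is accurate on one time slice but poor on another pays a variance penalty, pushing training toward patterns whose predictive power is stable across distributions.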