Physics-Guided Hypergraph Contrastive Learning for Dynamic Hyperedge Prediction

Impact Factor: 6.7 · CAS Zone 2 (Computer Science) · JCR Q1 (Engineering, Multidisciplinary)
Zhihui Wang;Jianrui Chen;Maoguo Gong;Fei Hao
{"title":"Physics-Guided Hypergraph Contrastive Learning for Dynamic Hyperedge Prediction","authors":"Zhihui Wang;Jianrui Chen;Maoguo Gong;Fei Hao","doi":"10.1109/TNSE.2024.3501378","DOIUrl":null,"url":null,"abstract":"With the increasing magnitude and complexity of data, the importance of higher-order networks is increasingly prominent. Dynamic hyperedge prediction reveals potential higher-order patterns with time evolution in networks, thus providing beneficial insights for decision making. Nevertheless, most existing neural network-based hyperedge prediction models are limited to static hypergraphs. Furthermore, previous efforts on hypergraph contrastive learning involve augmentation strategies, with insufficient consideration of the higher-order and lower-order views carried by the hypergraph itself. To address the above issues, we propose PCL-HP, a physics-guided hypergraph contrastive learning framework for dynamic hyperedge prediction. Specifically, we simply distinguish higher-order and lower-order views of the hypergraph to perform dynamic hypergraph contrastive learning and obtain abstract and concrete feature information, respectively. For lower-order views, we propose a physics-guided desynchronization mechanism to effectively guide the encoder to fuse the physical information during feature propagation, thus alleviating the problem of feature over-smoothing. Additionally, residual loss is introduced into the optimization process to incrementally quantify the loss at different stages to enhance the learning capability of the model. Extensive experiments on 10 dynamic higher-order datasets indicate that PCL-HP outperforms state-of-the-art baselines.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 1","pages":"433-450"},"PeriodicalIF":6.7000,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10759854/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

With the increasing magnitude and complexity of data, the importance of higher-order networks is increasingly prominent. Dynamic hyperedge prediction reveals potential higher-order patterns with time evolution in networks, thus providing beneficial insights for decision making. Nevertheless, most existing neural network-based hyperedge prediction models are limited to static hypergraphs. Furthermore, previous efforts on hypergraph contrastive learning involve augmentation strategies, with insufficient consideration of the higher-order and lower-order views carried by the hypergraph itself. To address the above issues, we propose PCL-HP, a physics-guided hypergraph contrastive learning framework for dynamic hyperedge prediction. Specifically, we simply distinguish higher-order and lower-order views of the hypergraph to perform dynamic hypergraph contrastive learning and obtain abstract and concrete feature information, respectively. For lower-order views, we propose a physics-guided desynchronization mechanism to effectively guide the encoder to fuse the physical information during feature propagation, thus alleviating the problem of feature over-smoothing. Additionally, residual loss is introduced into the optimization process to incrementally quantify the loss at different stages to enhance the learning capability of the model. Extensive experiments on 10 dynamic higher-order datasets indicate that PCL-HP outperforms state-of-the-art baselines.
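To make the contrastive setup described in the abstract more concrete, the sketch below builds the two views it mentions for a toy hypergraph: a higher-order view that propagates node features through the hyperedge incidence matrix, and a lower-order (pairwise) view built from the clique expansion, then scores them with a standard InfoNCE-style node contrastive loss. This is a minimal illustration under assumptions, not the authors' PCL-HP implementation: the physics-guided desynchronization mechanism and residual loss are omitted, and the function names `higher_order_view`, `lower_order_view`, and `info_nce` are hypothetical.

```python
# Minimal sketch, assuming NumPy only: higher-order vs. lower-order hypergraph
# views and an InfoNCE-style node contrastive loss. The plain mean-aggregation
# encoders below stand in for the paper's physics-guided encoder; the
# desynchronization mechanism and residual loss are not modeled.
import numpy as np


def higher_order_view(h: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Propagate features node -> hyperedge -> node using the incidence matrix H."""
    edge_deg = np.maximum(h.sum(axis=0), 1.0)   # hyperedge sizes |e|
    node_deg = np.maximum(h.sum(axis=1), 1.0)   # node degrees d(v)
    edge_msg = (h / edge_deg).T @ x             # mean of member-node features per hyperedge
    return (h / node_deg[:, None]) @ edge_msg   # mean of incident-hyperedge features per node


def lower_order_view(h: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Propagate features over the clique expansion (pairwise co-membership graph)."""
    adj = (h @ h.T > 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return (d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]) @ x


def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """Contrast the two views: the same node across views is the positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 4 nodes, 2 hyperedges: e0 = {v0, v1, v3}, e1 = {v1, v2}
    H = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
    X = rng.normal(size=(4, 8))                  # toy node features
    loss = info_nce(higher_order_view(H, X), lower_order_view(H, X))
    print(f"toy contrastive loss: {loss:.4f}")
```

Running the script prints a single loss value for the toy hypergraph; in the paper's setting the analogous contrastive objective would presumably be optimized jointly with the dynamic hyperedge prediction loss over time-evolving snapshots.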
Journal
IEEE Transactions on Network Science and Engineering
Subject category: Engineering - Control and Systems Engineering
CiteScore: 12.60
Self-citation rate: 9.10%
Articles per year: 393
Journal description: The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles on the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, the journal publishes articles on understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims at discovering common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus is the interactions between, and co-evolution of, different genres of networks.