Fine-Grained Interactive Transformers for Continuous Dynamic Link Prediction.

IF 10.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Automation & Control Systems)
Yajing Wu, Yongqiang Tang, Wensheng Zhang
{"title":"Fine-Grained Interactive Transformers for Continuous Dynamic Link Prediction.","authors":"Yajing Wu,Yongqiang Tang,Wensheng Zhang","doi":"10.1109/tcyb.2025.3598250","DOIUrl":null,"url":null,"abstract":"DLP plays a critical role in understanding and forecasting evolving relationships in real-world systems across various domains. However, accurately predicting future links remains challenging, as existing methods often overlook the independent modeling of dynamic interactions within individual nodes and the fine-grained characterization of latent interactions across node sequences. To address these challenges, we propose FineFormer (Fine-grained Interactive Transformer), a novel framework that alternates between self-attention and cross-attention mechanisms, enhanced with layer-wise contrastive learning. This design enables FineFormer to uncover fine-grained temporal dependencies both within single node sequences and across different node sequences. Specifically, self-attention captures temporal-spatial dynamics within the interaction sequences of individual nodes, while cross-attention focuses on the complex interactions across the sequences of pairs of nodes. Additionally, by strategically applying layer-wise contrastive learning, FineFormer refines node representations and enhances the model's ability to distinguish between connected and unconnected node pairs during feature refinement. FineFormer is evaluated on five challenging and diverse real-world dynamic link prediction (DLP) datasets. Experimental results demonstrate that FineFormer consistently outperforms state-of-the-art baselines, particularly in capturing complex, fine-grained interactions in continuous-time dynamic networks.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"37 1","pages":""},"PeriodicalIF":10.5000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tcyb.2025.3598250","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Dynamic link prediction (DLP) plays a critical role in understanding and forecasting evolving relationships in real-world systems across various domains. However, accurately predicting future links remains challenging, as existing methods often overlook the independent modeling of dynamic interactions within individual nodes and the fine-grained characterization of latent interactions across node sequences. To address these challenges, we propose FineFormer (Fine-grained Interactive Transformer), a novel framework that alternates between self-attention and cross-attention mechanisms, enhanced with layer-wise contrastive learning. This design enables FineFormer to uncover fine-grained temporal dependencies both within single node sequences and across different node sequences. Specifically, self-attention captures temporal-spatial dynamics within the interaction sequences of individual nodes, while cross-attention focuses on the complex interactions across the sequences of pairs of nodes. Additionally, by strategically applying layer-wise contrastive learning, FineFormer refines node representations and enhances the model's ability to distinguish between connected and unconnected node pairs during feature refinement. FineFormer is evaluated on five challenging and diverse real-world DLP datasets. Experimental results demonstrate that FineFormer consistently outperforms state-of-the-art baselines, particularly in capturing complex, fine-grained interactions in continuous-time dynamic networks.
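To make the alternating-attention idea in the abstract concrete, the following is a minimal PyTorch sketch of one block that applies self-attention within each node's own interaction sequence and cross-attention between the two sequences of a candidate node pair. The module name, dimensions, residual/normalization layout, and the pooled pair score are illustrative assumptions, not the authors' implementation; the layer-wise contrastive objective is only hinted at in the comments.

```python
# Minimal sketch (not the paper's code): one FineFormer-style block that alternates
# self-attention within each node's interaction sequence and cross-attention between
# the sequences of a candidate node pair (u, v). All names and sizes are assumptions.
import torch
import torch.nn as nn


class AlternatingAttentionBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, seq_u: torch.Tensor, seq_v: torch.Tensor):
        # Self-attention: temporal dependencies inside each node's own sequence.
        u = self.norm1(seq_u + self.self_attn(seq_u, seq_u, seq_u)[0])
        v = self.norm1(seq_v + self.self_attn(seq_v, seq_v, seq_v)[0])
        # Cross-attention: each sequence attends to the other node's sequence.
        u2 = self.norm2(u + self.cross_attn(u, v, v)[0])
        v2 = self.norm2(v + self.cross_attn(v, u, u)[0])
        return u2, v2


# Toy usage: two interaction sequences (batch=1, length=5, dim=64) for a node pair.
block = AlternatingAttentionBlock()
seq_u, seq_v = torch.randn(1, 5, 64), torch.randn(1, 5, 64)
h_u, h_v = block(seq_u, seq_v)
# A pooled pair representation could feed a link scorer; a layer-wise contrastive
# loss would push connected pairs' representations together and unconnected apart.
score = (h_u.mean(dim=1) * h_v.mean(dim=1)).sum(dim=-1)
print(score.shape)  # torch.Size([1])
```

Stacking several such blocks would yield the per-layer representations on which a layer-wise contrastive loss can be applied, as described in the abstract.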
Source journal: IEEE Transactions on Cybernetics (Computer Science, Artificial Intelligence; Computer Science, Cybernetics)
CiteScore: 25.40
Self-citation rate: 11.00%
Articles published per year: 1869
Journal description: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines, or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.