{"title":"Fine-Grained Interactive Transformers for Continuous Dynamic Link Prediction.","authors":"Yajing Wu,Yongqiang Tang,Wensheng Zhang","doi":"10.1109/tcyb.2025.3598250","DOIUrl":null,"url":null,"abstract":"Dynamic link prediction (DLP) plays a critical role in understanding and forecasting evolving relationships in real-world systems across various domains. However, accurately predicting future links remains challenging, as existing methods often overlook the independent modeling of dynamic interactions within individual nodes and the fine-grained characterization of latent interactions across node sequences. To address these challenges, we propose FineFormer (Fine-grained Interactive Transformer), a novel framework that alternates between self-attention and cross-attention mechanisms, enhanced with layer-wise contrastive learning. This design enables FineFormer to uncover fine-grained temporal dependencies both within single node sequences and across different node sequences. Specifically, self-attention captures temporal-spatial dynamics within the interaction sequences of individual nodes, while cross-attention focuses on the complex interactions across the sequences of pairs of nodes. Additionally, by strategically applying layer-wise contrastive learning, FineFormer refines node representations and enhances the model's ability to distinguish between connected and unconnected node pairs. FineFormer is evaluated on five challenging and diverse real-world DLP datasets. Experimental results demonstrate that FineFormer consistently outperforms state-of-the-art baselines, particularly in capturing complex, fine-grained interactions in continuous-time dynamic networks.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"37 1","pages":""},"PeriodicalIF":10.5000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tcyb.2025.3598250","RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Dynamic link prediction (DLP) plays a critical role in understanding and forecasting evolving relationships in real-world systems across various domains. However, accurately predicting future links remains challenging, as existing methods often overlook the independent modeling of dynamic interactions within individual nodes and the fine-grained characterization of latent interactions across node sequences. To address these challenges, we propose FineFormer (Fine-grained Interactive Transformer), a novel framework that alternates between self-attention and cross-attention mechanisms, enhanced with layer-wise contrastive learning. This design enables FineFormer to uncover fine-grained temporal dependencies both within single node sequences and across different node sequences. Specifically, self-attention captures temporal-spatial dynamics within the interaction sequences of individual nodes, while cross-attention focuses on the complex interactions across the sequences of pairs of nodes. Additionally, by strategically applying layer-wise contrastive learning, FineFormer refines node representations and enhances the model's ability to distinguish between connected and unconnected node pairs. FineFormer is evaluated on five challenging and diverse real-world DLP datasets. Experimental results demonstrate that FineFormer consistently outperforms state-of-the-art baselines, particularly in capturing complex, fine-grained interactions in continuous-time dynamic networks.
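To make the alternating-attention idea concrete, the following is a minimal sketch, assuming toy dimensions and single-head scaled dot-product attention; the actual FineFormer layer sizes, attention heads, temporal encodings, and layer-wise contrastive loss are not specified in the abstract, so everything below (function names, dimensions) is illustrative only. Self-attention runs within each node's own interaction sequence, then cross-attention lets each node's sequence attend to its partner's.

```python
# Illustrative sketch of an alternating self-/cross-attention block
# (hypothetical names and shapes; not the authors' implementation).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def alternating_block(seq_u, seq_v):
    # 1) Self-attention within each node's interaction sequence
    u = attention(seq_u, seq_u, seq_u)
    v = attention(seq_v, seq_v, seq_v)
    # 2) Cross-attention across the node pair's sequences:
    #    u's positions query v's sequence, and vice versa
    u_cross = attention(u, v, v)
    v_cross = attention(v, u, u)
    return u_cross, v_cross

rng = np.random.default_rng(0)
seq_u = rng.standard_normal((5, 8))  # node u: 5 interactions, feature dim 8
seq_v = rng.standard_normal((7, 8))  # node v: 7 interactions, feature dim 8
u_out, v_out = alternating_block(seq_u, seq_v)
print(u_out.shape, v_out.shape)  # (5, 8) (7, 8)
```

Note that each sequence keeps its own length after the block, so sequences of unequal length can interact; a link score could then be computed from pooled `u_out` and `v_out`, though the abstract does not state how FineFormer produces the final prediction.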
About the journal:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the Transactions welcomes papers on communication and control across machines, or across machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.