Contrastive Representation Learning for Dynamic Link Prediction in Temporal Networks
Amirhossein Nouranizadeh, Fatemeh Tabatabaei Far, Mohammad Rahmati
arXiv:2408.12753 · arXiv - CS - Neural and Evolutionary Computing · 2024-08-22
Evolving networks are complex data structures that emerge in a wide range of
systems in science and engineering. Learning expressive representations for
such networks that encode their structural connectivity and temporal evolution
is essential for downstream data analytics and machine learning applications.
In this study, we introduce a self-supervised method for learning representations of
temporal networks and employ these representations in the dynamic link prediction task.
While temporal networks are typically characterized as a sequence of interactions over a
continuous time domain, our study focuses on their discrete-time versions, i.e., sequences
of graph snapshots. This lets us balance the trade-off between computational complexity
and precise modeling of the interactions.
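To make the discrete-time setting concrete, the sketch below bins a stream of timestamped interactions into a sequence of adjacency-matrix snapshots. This is an illustrative assumption about the data format, not the paper's preprocessing; the event tuples and window count are made up here.

```python
# Illustrative sketch: discretizing a temporal interaction stream into snapshots.
# The event format (u, v, t) and the number of windows are assumptions, not the paper's spec.
import numpy as np

def to_snapshots(events, num_nodes, t_start, t_end, num_windows):
    """Bin timestamped edges (u, v, t) into `num_windows` adjacency matrices."""
    snapshots = np.zeros((num_windows, num_nodes, num_nodes), dtype=np.float32)
    width = (t_end - t_start) / num_windows
    for u, v, t in events:
        k = min(int((t - t_start) / width), num_windows - 1)  # window index for this event
        snapshots[k, u, v] = 1.0
        snapshots[k, v, u] = 1.0  # treat interactions as undirected
    return snapshots

# Toy usage: 4 nodes, 3 time windows.
events = [(0, 1, 0.2), (1, 2, 0.9), (2, 3, 1.4), (0, 3, 2.7)]
snaps = to_snapshots(events, num_nodes=4, t_start=0.0, t_end=3.0, num_windows=3)
print(snaps.shape)  # (3, 4, 4)
```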
We propose a recurrent message-passing neural network architecture for modeling the
information flow over time-respecting paths of temporal networks.
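A minimal sketch of one way to combine message passing with a recurrent per-node state over snapshots is shown below. The layer sizes, normalization, and GRU-based update are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a recurrent message-passing encoder over graph snapshots.
# This is an illustrative stand-in, not the paper's exact design.
import torch
import torch.nn as nn

class RecurrentMPNN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid_dim)       # per-snapshot message transform
        self.update = nn.GRUCell(hid_dim, hid_dim)  # recurrent state update per node

    def forward(self, feats, adjs):
        """feats: (N, in_dim) static node features; adjs: (T, N, N) snapshot adjacencies."""
        num_nodes = feats.size(0)
        h = torch.zeros(num_nodes, self.update.hidden_size)
        states = []
        for adj in adjs:  # iterate over snapshots in temporal order
            # Symmetric normalization of the adjacency with self-loops.
            a = adj + torch.eye(num_nodes)
            d_inv_sqrt = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)
            a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
            m = torch.relu(a_norm @ self.msg(feats))  # aggregate neighbor messages
            h = self.update(m, h)                     # carry node state across snapshots
            states.append(h)
        return torch.stack(states)  # (T, N, hid_dim)

# Toy usage: 3 random snapshots over 4 nodes with 8-dimensional features.
adjs = torch.rand(3, 4, 4).round()
feats = torch.randn(4, 8)
model = RecurrentMPNN(in_dim=8, hid_dim=16)
z = model(feats, adjs)
print(z.shape)  # torch.Size([3, 4, 16])
```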
The key feature of our method is the contrastive training objective, which combines three
loss functions: a link prediction loss, a graph reconstruction loss, and a contrastive
predictive coding loss.
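A compact sketch of how such a composite objective might be assembled is given below. The inner-product decoders, loss weights, and function names are assumptions for illustration; the paper's exact formulation may differ.

```python
# Illustrative composite objective: link prediction + graph reconstruction + a CPC-style term.
import torch
import torch.nn.functional as F

def link_prediction_loss(z, pos_edges, neg_edges):
    """Score candidate next-step edges by an inner product of node embeddings."""
    pos = (z[pos_edges[0]] * z[pos_edges[1]]).sum(dim=-1)
    neg = (z[neg_edges[0]] * z[neg_edges[1]]).sum(dim=-1)
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)

def reconstruction_loss(z, adj):
    """Reconstruct the current snapshot's adjacency with an inner-product decoder."""
    logits = z @ z.t()
    return F.binary_cross_entropy_with_logits(logits, adj)

def total_loss(z, adj, pos_edges, neg_edges, cpc_term, w_rec=1.0, w_cpc=1.0):
    return (link_prediction_loss(z, pos_edges, neg_edges)
            + w_rec * reconstruction_loss(z, adj)
            + w_cpc * cpc_term)

# Toy usage with embeddings for 4 nodes.
z = torch.randn(4, 16)
adj = torch.randint(0, 2, (4, 4)).float()
pos = torch.tensor([[0, 1], [1, 2]])   # edges observed at the next snapshot
neg = torch.tensor([[0, 2], [3, 3]])   # sampled non-edges
loss = total_loss(z, adj, pos, neg, cpc_term=torch.tensor(0.0))
```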
The contrastive predictive coding objective is implemented using InfoNCE losses at both
the local and global scales of the input graphs.
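The two-scale InfoNCE idea can be sketched roughly as below. How positive and negative pairs are actually formed in the paper is not specified here; the in-batch pairing and mean-pooled readout are assumptions.

```python
# Rough sketch of InfoNCE at two scales. Positives: a context vector and its "future"
# target; negatives: targets of all other nodes (local) or other graphs in a batch (global).
import torch
import torch.nn.functional as F

def info_nce(context, targets, temperature=0.1):
    """context, targets: (B, D); row i of `targets` is the positive for row i of `context`."""
    c = F.normalize(context, dim=-1)
    t = F.normalize(targets, dim=-1)
    logits = c @ t.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(c.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Local scale: each node's predicted future embedding vs. its actual future embedding.
z_pred = torch.randn(4, 16)    # predictions from the recurrent state at time t
z_future = torch.randn(4, 16)  # encoder output at time t+1
local_cpc = info_nce(z_pred, z_future)

# Global scale: graph-level readouts (e.g., mean-pooled node states) contrasted across a batch.
g_pred = torch.randn(8, 16)
g_future = torch.randn(8, 16)
global_cpc = info_nce(g_pred, g_future)
```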
We empirically show that the additional self-supervised losses enhance training and improve
the model's performance on the dynamic link prediction task. The proposed method is
evaluated on the Enron, COLAB, and Facebook datasets and exhibits superior results compared
to existing models.