Dynamic Graph Representation Based on Temporal and Contextual Contrasting

Wen-Yuan Zhu, Ke Ruan, Jin Huang, Jing Xiao, Weihao Yu

Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence, published 2022-12-23. DOI: https://doi.org/10.1145/3579654.3579771
Abstract
Dynamic graph representation learning is critical for graph-based downstream tasks such as link prediction, node classification, and graph reconstruction. Many graph-neural-network-based methods have emerged recently, but most are incapable of tracing graph evolution patterns over time. To solve this problem, we propose a continuous-time dynamic graph framework, the dynamic graph temporal contextual contrasting (DGTCC) model, which integrates temporal and topological information to capture the latent evolution trend of graph representations. In this model, node representations are first generated by a self-attention-based temporal encoder, which computes importance weights for neighboring nodes in temporal sub-graphs and stores the resulting representations in a contextual memory module. After sampling node representations from the memory module, the model uses a contrastive learning mechanism to maximize the mutual information between representations of the same node in two nearby temporal views, which helps track the evolution trend of nodes. Under inductive learning settings, results on four real-world datasets demonstrate the advantages of the proposed DGTCC model.
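The abstract does not specify the exact form of the contrastive objective, so the sketch below is only a minimal, hypothetical illustration of the core idea: maximizing agreement between representations of the same node drawn from two nearby temporal views. It assumes an InfoNCE-style loss in PyTorch; the function name `temporal_contrastive_loss` and the temperature value are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a temporal contrastive objective, assuming an
# InfoNCE-style loss (the paper's actual loss may differ).
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(z_t1: torch.Tensor,
                              z_t2: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """z_t1, z_t2: (num_nodes, dim) embeddings of the same nodes
    taken from two nearby temporal views."""
    z_t1 = F.normalize(z_t1, dim=1)
    z_t2 = F.normalize(z_t2, dim=1)
    # Pairwise cosine similarities between the two views.
    logits = z_t1 @ z_t2.t() / temperature  # (N, N)
    # Node i in view 1 should match node i in view 2;
    # all other nodes in the batch act as negatives.
    targets = torch.arange(z_t1.size(0), device=z_t1.device)
    # Symmetrized InfoNCE: view1 -> view2 and view2 -> view1.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Random tensors stand in for the temporal encoder's output.
    z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
    print(temporal_contrastive_loss(z1, z2).item())
```

In the full model, the two views would come from the self-attention-based temporal encoder via the contextual memory module rather than from random tensors as in this stand-in usage.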