Rui Xue, Guohu Li, Xiao-ning Ma, Yifei Liu, Min Liu, Yanjun Liu
{"title":"复杂交通网络的对比学习","authors":"Rui Xue, Guohu Li, Xiao-ning Ma, Yifei Liu, Min Liu, Yanjun Liu","doi":"10.1109/PIC53636.2021.9687081","DOIUrl":null,"url":null,"abstract":"Networks in real life have been increasingly dependent on each other, and therefore, they have become more complex and intertwined, with consequences of relations that are difficult to identify, understand and represent. Besides, the coupling interactions among layers may vary in different types of complex networks. Thus, it is demanding to focus on this interdependence when the cost of taking inter-layer steps weights more in networks such as transportation. To obtain representative node embeddings in complex networks, we propose a solution collecting coupling relations among layers with contrastive learning. Specifically, we develop a framework, termed TransCL, with encoders in two aspects to embed intra-layer and inter-layer node representations. Besides, we introduce random walk betweenness centrality to the inter-layer embeddings and leverage this measurement to improve contrastive learning. The link prediction as a downstream task is followed to evaluate the embedding performance. We compare this method with other popular embedding models on the public dataset Cora and a real-world industrial dataset. This model outperforms other methods on the industrial dataset and meanwhile shows competitive performance on the public dataset. This work, in sum, allows for obtaining complex network representations with layer interdependence learned in a self-supervised manner.","PeriodicalId":297239,"journal":{"name":"2021 IEEE International Conference on Progress in Informatics and Computing (PIC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TransCL: Contrastive Learning on Complex Transportation Network\",\"authors\":\"Rui Xue, Guohu Li, Xiao-ning Ma, Yifei Liu, Min Liu, Yanjun Liu\",\"doi\":\"10.1109/PIC53636.2021.9687081\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Networks in real life have been increasingly dependent on each other, and therefore, they have become more complex and intertwined, with consequences of relations that are difficult to identify, understand and represent. Besides, the coupling interactions among layers may vary in different types of complex networks. Thus, it is demanding to focus on this interdependence when the cost of taking inter-layer steps weights more in networks such as transportation. To obtain representative node embeddings in complex networks, we propose a solution collecting coupling relations among layers with contrastive learning. Specifically, we develop a framework, termed TransCL, with encoders in two aspects to embed intra-layer and inter-layer node representations. Besides, we introduce random walk betweenness centrality to the inter-layer embeddings and leverage this measurement to improve contrastive learning. The link prediction as a downstream task is followed to evaluate the embedding performance. We compare this method with other popular embedding models on the public dataset Cora and a real-world industrial dataset. This model outperforms other methods on the industrial dataset and meanwhile shows competitive performance on the public dataset. 
This work, in sum, allows for obtaining complex network representations with layer interdependence learned in a self-supervised manner.\",\"PeriodicalId\":297239,\"journal\":{\"name\":\"2021 IEEE International Conference on Progress in Informatics and Computing (PIC)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Progress in Informatics and Computing (PIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PIC53636.2021.9687081\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Progress in Informatics and Computing (PIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PIC53636.2021.9687081","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
TransCL: Contrastive Learning on Complex Transportation Network
Networks in real life have become increasingly dependent on each other, and therefore more complex and intertwined, giving rise to relations that are difficult to identify, understand, and represent. Moreover, the coupling interactions among layers may vary across different types of complex networks. This interdependence is especially important to model in networks such as transportation, where taking inter-layer steps carries a higher cost. To obtain representative node embeddings in complex networks, we propose a solution that captures coupling relations among layers via contrastive learning. Specifically, we develop a framework, termed TransCL, with two encoders that embed intra-layer and inter-layer node representations, respectively. In addition, we introduce random walk betweenness centrality into the inter-layer embeddings and leverage this measure to improve contrastive learning. Link prediction is used as a downstream task to evaluate embedding performance. We compare this method with other popular embedding models on the public Cora dataset and a real-world industrial dataset. Our model outperforms the other methods on the industrial dataset while showing competitive performance on the public dataset. In sum, this work makes it possible to obtain complex network representations with layer interdependence learned in a self-supervised manner.
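To make the abstract's core idea concrete, below is a minimal sketch of a contrastive (InfoNCE-style) objective between intra-layer and inter-layer node embeddings, with per-node losses re-weighted by random walk betweenness centrality (computed as current-flow betweenness in NetworkX). The function names, the weighting scheme, and the toy graph are illustrative assumptions, not the paper's actual TransCL implementation.

```python
# Illustrative sketch only: contrast each node's intra-layer view against its
# inter-layer view, weighting per-node losses by random-walk betweenness
# centrality. Not the authors' implementation.
import networkx as nx
import torch
import torch.nn.functional as F

def random_walk_betweenness(graph: nx.Graph) -> torch.Tensor:
    """Random-walk betweenness centrality (current-flow betweenness) per node."""
    centrality = nx.current_flow_betweenness_centrality(graph)
    return torch.tensor([centrality[n] for n in graph.nodes()], dtype=torch.float32)

def weighted_info_nce(z_intra: torch.Tensor,
                      z_inter: torch.Tensor,
                      node_weights: torch.Tensor,
                      temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE loss: positive pairs are the two views of the same node,
    all other nodes in the batch act as negatives."""
    z_intra = F.normalize(z_intra, dim=1)
    z_inter = F.normalize(z_inter, dim=1)
    logits = z_intra @ z_inter.t() / temperature      # [N, N] similarity matrix
    targets = torch.arange(z_intra.size(0))           # positives lie on the diagonal
    per_node_loss = F.cross_entropy(logits, targets, reduction="none")
    weights = node_weights / node_weights.sum()       # centrality-based re-weighting
    return (weights * per_node_loss).sum()

# Toy usage with a small connected graph and random embeddings.
g = nx.karate_club_graph()
w = random_walk_betweenness(g)
n = g.number_of_nodes()
loss = weighted_info_nce(torch.randn(n, 16), torch.randn(n, 16), w)
print(float(loss))
```

In this sketch the centrality weights simply emphasize structurally important nodes in the contrastive objective; the paper's actual use of the measure within its inter-layer encoder may differ.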