Kaiwei Xu, Yongquan Fan, Jing Tang, Xianyong Li, Yajun Du, Xiaomin Wang
{"title":"序列推荐的分布引导图学习","authors":"Kaiwei Xu , Yongquan Fan , Jing Tang , Xianyong Li , Yajun Du , Xiaomin Wang","doi":"10.1016/j.ipm.2025.104119","DOIUrl":null,"url":null,"abstract":"<div><div>Sequential recommendations predict the next item by capturing behavioral patterns from a user’s sequence. Graph neural networks (GNN) have recently gained popularity in sequential recommendation for effectively capturing high-order information, which notably improves recommendation performance. However, some existing GNN-based methods represent the node embeddings in the graph as fixed vectors, which fails to capture the uncertainty generated by the transition of relations between nodes. To cope with the above challenge, we propose <u><strong>D</strong></u>istribution-guided <u><strong>G</strong></u>raph <u><strong>L</strong></u>earning for <u><strong>S</strong></u>equential <u><strong>R</strong></u>ecommendation (DGLSR). Specifically, it utilizes the Gaussian distribution (i.e., mean and covariance embeddings) to represent nodes in the user–item bipartite graph, modeling the node uncertainty while preserving the graph structure. Subsequently, we use a graph convolutional network to update user and item node distribution embeddings, and then introduce a personalization distribution embedding fusion operation to integrate them, which generates the final sequence representation. Furthermore, we design a temporal Wasserstein self-attention mechanism. This mechanism utilizes the Wasserstein distance to measure the distributional differences between any two items in the sequence while enhancing the model’s sensitivity to temporal dynamics, thereby improving the accuracy of the next item prediction. Experiments involving four real-world datasets reveal that the DGLSR we developed exceeds the performance of SOTA methods on benchmark metrics.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 5","pages":"Article 104119"},"PeriodicalIF":7.4000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Distribution-guided Graph Learning for Sequential Recommendation\",\"authors\":\"Kaiwei Xu , Yongquan Fan , Jing Tang , Xianyong Li , Yajun Du , Xiaomin Wang\",\"doi\":\"10.1016/j.ipm.2025.104119\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Sequential recommendations predict the next item by capturing behavioral patterns from a user’s sequence. Graph neural networks (GNN) have recently gained popularity in sequential recommendation for effectively capturing high-order information, which notably improves recommendation performance. However, some existing GNN-based methods represent the node embeddings in the graph as fixed vectors, which fails to capture the uncertainty generated by the transition of relations between nodes. To cope with the above challenge, we propose <u><strong>D</strong></u>istribution-guided <u><strong>G</strong></u>raph <u><strong>L</strong></u>earning for <u><strong>S</strong></u>equential <u><strong>R</strong></u>ecommendation (DGLSR). Specifically, it utilizes the Gaussian distribution (i.e., mean and covariance embeddings) to represent nodes in the user–item bipartite graph, modeling the node uncertainty while preserving the graph structure. 
Subsequently, we use a graph convolutional network to update user and item node distribution embeddings, and then introduce a personalization distribution embedding fusion operation to integrate them, which generates the final sequence representation. Furthermore, we design a temporal Wasserstein self-attention mechanism. This mechanism utilizes the Wasserstein distance to measure the distributional differences between any two items in the sequence while enhancing the model’s sensitivity to temporal dynamics, thereby improving the accuracy of the next item prediction. Experiments involving four real-world datasets reveal that the DGLSR we developed exceeds the performance of SOTA methods on benchmark metrics.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"62 5\",\"pages\":\"Article 104119\"},\"PeriodicalIF\":7.4000,\"publicationDate\":\"2025-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325000615\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325000615","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Distribution-guided Graph Learning for Sequential Recommendation
Sequential recommendation predicts a user’s next item by capturing behavioral patterns from their interaction sequence. Graph neural networks (GNNs) have recently gained popularity in sequential recommendation because they effectively capture high-order information, which notably improves recommendation performance. However, some existing GNN-based methods represent the node embeddings in the graph as fixed vectors, which fails to capture the uncertainty arising from transitions in the relations between nodes. To address this challenge, we propose Distribution-guided Graph Learning for Sequential Recommendation (DGLSR). Specifically, DGLSR represents each node in the user–item bipartite graph as a Gaussian distribution (i.e., mean and covariance embeddings), modeling node uncertainty while preserving the graph structure. A graph convolutional network then updates the user and item distribution embeddings, and a personalized distribution-embedding fusion operation integrates them into the final sequence representation. Furthermore, we design a temporal Wasserstein self-attention mechanism that uses the Wasserstein distance to measure the distributional difference between any two items in the sequence while enhancing the model’s sensitivity to temporal dynamics, thereby improving the accuracy of next-item prediction. Experiments on four real-world datasets show that DGLSR outperforms state-of-the-art methods on benchmark metrics.
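The abstract gives no formulas, so the sketch below is only an illustration of the general idea behind a Wasserstein-based self-attention score over Gaussian item embeddings, not the authors' DGLSR implementation. It assumes diagonal covariances, for which the squared 2-Wasserstein distance between N(mu1, diag(sigma1^2)) and N(mu2, diag(sigma2^2)) has the standard closed form ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2, and it uses a simple negative-distance attention score; the function names (e.g., wasserstein_attention) are hypothetical, and the temporal component of the authors' mechanism is not specified in the abstract and is omitted here.

```python
# Minimal sketch (not the paper's code): attention weights derived from the
# closed-form 2-Wasserstein distance between diagonal Gaussian item embeddings.
import torch
import torch.nn.functional as F


def wasserstein2_sq(mu_q, sigma_q, mu_k, sigma_k):
    """Pairwise squared 2-Wasserstein distance between two sets of diagonal
    Gaussians. mu_*, sigma_*: (seq_len, dim) tensors, with sigma_* holding
    standard deviations (kept positive upstream, e.g. via softplus)."""
    mean_term = torch.cdist(mu_q, mu_k, p=2) ** 2     # (L, L) mean part
    cov_term = torch.cdist(sigma_q, sigma_k, p=2) ** 2  # (L, L) covariance part
    return mean_term + cov_term


def wasserstein_attention(mu, sigma, values, causal=True):
    """Toy distribution-aware self-attention: a smaller distributional
    distance between two items yields a larger attention weight."""
    L, d = mu.shape
    scores = -wasserstein2_sq(mu, sigma, mu, sigma) / d ** 0.5
    if causal:
        # A next-item model should only attend to earlier positions.
        mask = torch.triu(torch.ones(L, L), diagonal=1).bool()
        scores = scores.masked_fill(mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return attn @ values                               # (L, d)


if __name__ == "__main__":
    L, d = 5, 8
    mu = torch.randn(L, d)
    sigma = F.softplus(torch.randn(L, d))  # keep standard deviations positive
    out = wasserstein_attention(mu, sigma, values=mu)
    print(out.shape)  # torch.Size([5, 8])
```

In this sketch the covariance term lets two items with similar means but very different uncertainty receive a lower attention weight, which is the kind of behavior fixed-vector embeddings cannot express.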
Journal introduction:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.