Exploiting explicit item–item correlations from knowledge graphs for enhanced sequential recommendation
Yanlin Zhang, Yuchen Shi, Deqing Yang, Xiaodong Gu
Information Systems, Volume 128, Article 102470 (published 2024-10-05). DOI: 10.1016/j.is.2024.102470
Available at: https://www.sciencedirect.com/science/article/pii/S0306437924001285
Citations: 0
Abstract
In recent years, employing knowledge graphs (KGs) in sequential recommendation (SR) has received considerable attention, since the side information extracted from KGs, especially item–item correlations, helps SR models achieve better performance. However, many previous KG-based SR models introduce noise when learning item embeddings, or insufficiently fuse item–item correlations into their sequential modeling, which limits their performance gains. In this paper, we propose a Distance-Aware Knowledge-based Sequential Recommendation model (DAKSR), which exploits explicit item–item correlations from KGs to achieve enhanced SR. Specifically, as a critical component of DAKSR, a distance score matrix (DSM) is first computed to indicate the correlations between items, and is then leveraged in the following three major modules of DAKSR. First, the Item-Set Embedding layer (ISE) learns all item embeddings based on the DSM, effectively eliminating noise. Meanwhile, the Knowledge-Infused Transformer (KIT) incorporates the DSM into its attention mechanism to improve feature extraction. Furthermore, the Knowledge Contrastive Learning module (KCL) leverages the item–item correlations in the DSM to generate two credible sequence views, which refine sample representations through a contrastive learning strategy and thus improve the model's robustness. Extensive experiments on three SR benchmarks demonstrate DAKSR's superior performance over state-of-the-art (SOTA) KG-based recommendation models. The implementation of DAKSR is available at https://github.com/Easonsi/DAKSR for convenient reproduction of our experimental results.
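The abstract describes a distance score matrix derived from the KG and injected into the model's attention mechanism. As a rough, hedged illustration of that general idea (not the paper's actual code, which is available in the linked repository), the sketch below shows one plausible reading: pairwise item distances in a KG are turned into correlation scores with a Gaussian kernel, and the resulting matrix is added as a bias to scaled dot-product attention. All function names, the kernel choice, and the toy data here are assumptions made for illustration only.

# Minimal, hypothetical sketch of two ideas named in the abstract:
# (1) a distance score matrix (DSM) built from item-item distances in a KG, and
# (2) attention that adds the DSM as a bias ("knowledge-infused" attention).
# The names and the scoring formula are illustrative assumptions, not the
# paper's actual implementation (see https://github.com/Easonsi/DAKSR).
import math
import torch
import torch.nn.functional as F


def build_distance_score_matrix(kg_distances: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Turn pairwise KG distances (num_items x num_items) into scores in (0, 1].

    A Gaussian kernel is assumed here: items that are close in the KG get
    scores near 1; distant or unreachable items (encoded as +inf) get ~0.
    """
    return torch.exp(-(kg_distances ** 2) / (2.0 * sigma ** 2))


def knowledge_infused_attention(q: torch.Tensor,
                                k: torch.Tensor,
                                v: torch.Tensor,
                                dsm_bias: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention with an additive DSM bias.

    q, k, v: (batch, seq_len, d); dsm_bias: (batch, seq_len, seq_len),
    i.e. the DSM rows/columns gathered for the items in each sequence.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    scores = scores + dsm_bias            # inject explicit item-item correlations
    weights = F.softmax(scores, dim=-1)
    return weights @ v


if __name__ == "__main__":
    num_items, seq_len, d = 6, 4, 8
    # Toy symmetric KG distance matrix with zeros on the diagonal.
    kg_dist = torch.randint(1, 5, (num_items, num_items)).float()
    kg_dist = (kg_dist + kg_dist.T) / 2
    kg_dist.fill_diagonal_(0)
    dsm = build_distance_score_matrix(kg_dist)

    item_ids = torch.tensor([[0, 2, 3, 5]])            # one interaction sequence
    dsm_bias = dsm[item_ids[0]][:, item_ids[0]].unsqueeze(0)
    x = torch.randn(1, seq_len, d)
    out = knowledge_infused_attention(x, x, x, dsm_bias)
    print(out.shape)  # torch.Size([1, 4, 8])

Under this reading, the DSM rows and columns for the items in each interaction sequence are gathered into a per-sequence bias, so pairs of items that are close in the KG receive larger attention scores; how DAKSR actually combines the DSM with its ISE, KIT, and KCL modules is specified in the paper and repository.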
About the journal:
Information systems are the software and hardware systems that support data-intensive applications. The journal Information Systems publishes articles concerning the design and implementation of languages, data models, process models, algorithms, software and hardware for information systems.
Subject areas include data management issues as presented in the principal international database conferences (e.g., ACM SIGMOD/PODS, VLDB, ICDE and ICDT/EDBT) as well as data-related issues from the fields of data mining/machine learning, information retrieval coordinated with structured data, internet and cloud data management, business process management, web semantics, visual and audio information systems, scientific computing, and data science. Implementation papers having to do with massively parallel data management, fault tolerance in practice, and special-purpose hardware for data-intensive systems are also welcome. Manuscripts from application domains, such as urban informatics, social and natural science, and the Internet of Things, are also welcome. All papers should highlight innovative solutions to data management problems, such as new data models and performance enhancements, and show how those innovations contribute to the goals of the application.