DCIB: Dual contrastive information bottleneck for knowledge-aware recommendation
Qiang Guo, Jialong Hai, Zhongchuan Sun, Bin Wu, Yangdong Ye
Information Processing & Management, Volume 62, Issue 2, Article 103980 (published 2024-11-29). DOI: 10.1016/j.ipm.2024.103980. Available at: https://www.sciencedirect.com/science/article/pii/S030645732400339X
Citations: 0
Abstract
Knowledge-aware recommendations effectively enhance model performance by integrating rich external information from knowledge graphs. Graph contrastive learning methods have recently demonstrated superior results in such recommendations. However, they still face two limitations: (1) the disruption of intrinsic semantic structures caused by stochastic or predefined augmentations used to construct contrastive views, and (2) the neglect of the extrinsic semantic gap arising from the different semantic information in the user-item bipartite graph and the knowledge graph when the two are incorporated. To address these issues, we propose a novel Dual Contrastive Information Bottleneck (DCIB) method for knowledge-aware recommendation, which preserves intrinsic semantic structures and bridges the semantic gap to obtain complementary, beneficial information for learning enhanced representations. Specifically, DCIB implements contrastive learning with the information bottleneck principle (CIB) on a collaborative view and a knowledge view. View-specific CIB is formalized to suppress noise and distill high-quality information within each view using a learnable denoising module. Cross-view CIB is developed to bridge the semantic gap and fully leverage the different semantics of both views, thereby obtaining complementary information to enrich the representations. Extensive experimental results on the Last.FM, Book-Crossing, and MovieLens-1M datasets show that DCIB outperforms existing state-of-the-art methods. In terms of the NDCG@10 metric, DCIB obtains performance improvements of 5.78%, 7.67%, and 5.67% over the second-best methods across the three benchmarks, respectively.
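The abstract describes two contrastive objectives built on the information bottleneck principle: a view-specific term that denoises each view and a cross-view term that bridges the collaborative and knowledge views. The sketch below is a rough illustration of how such terms might be combined using a standard InfoNCE loss; every name in it (info_nce, dual_contrastive_ib_loss, the "denoised" embeddings, the beta weight) is hypothetical and does not reflect the paper's actual formulation or implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.2):
    """Standard InfoNCE loss between two batches of aligned representations."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature            # pairwise cosine similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def dual_contrastive_ib_loss(z_collab, z_know, z_collab_dn, z_know_dn, beta=0.1):
    """Illustrative (not the paper's) combination of view-specific and cross-view terms.
    The *_dn tensors stand in for the output of a learnable denoising module."""
    # View-specific terms: align each view with its denoised counterpart.
    loss_collab = info_nce(z_collab, z_collab_dn)
    loss_know = info_nce(z_know, z_know_dn)
    # Cross-view term: bridge the semantic gap between the two views.
    loss_cross = info_nce(z_collab_dn, z_know_dn)
    # A simple L2 penalty on the denoised codes acts as a crude stand-in
    # for the information-bottleneck compression term in this sketch.
    compression = beta * (z_collab_dn.pow(2).mean() + z_know_dn.pow(2).mean())
    return loss_collab + loss_know + loss_cross + compression

if __name__ == "__main__":
    # Toy usage with random 64-dimensional embeddings for a batch of 8 entities.
    torch.manual_seed(0)
    z_c, z_k = torch.randn(8, 64), torch.randn(8, 64)
    z_cd, z_kd = torch.randn(8, 64), torch.randn(8, 64)
    print(dual_contrastive_ib_loss(z_c, z_k, z_cd, z_kd).item())
```

In practice, the denoised embeddings would come from learned modules over the user-item bipartite graph and the knowledge graph, and the compression term would be whatever bottleneck surrogate the method defines; the L2 penalty here is only a placeholder.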
About the journal:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.