LMACL: Improving Graph Collaborative Filtering with Learnable Model Augmentation Contrastive Learning

Impact Factor 4.0 · CAS Tier 3 (Computer Science) · JCR Q1 (Computer Science, Information Systems)
Xinru Liu, Yongjing Hao, Lei Zhao, Guanfeng Liu, Victor S. Sheng, Pengpeng Zhao
DOI: 10.1145/3657302
Publication date: 2024-04-12 (Journal Article)
Citations: 0

Abstract

Graph collaborative filtering (GCF) has achieved exciting recommendation performance through its ability to aggregate high-order graph structure information. Recently, contrastive learning (CL) has been incorporated into GCF to alleviate data sparsity and noise issues. However, most existing methods employ random or manual augmentation to produce contrastive views, which may destroy the original topology and amplify noise effects. We argue that such augmentation is insufficient to produce the optimal contrastive view, leading to suboptimal recommendation results. In this paper, we propose a Learnable Model Augmentation Contrastive Learning (LMACL) framework for recommendation, which effectively combines graph-level and node-level collaborative relations to enhance the expressiveness of the collaborative filtering (CF) paradigm. Specifically, we first use a graph convolutional network (GCN) as a backbone encoder to incorporate multi-hop neighbors into graph-level original node representations by leveraging the high-order connectivity in user-item interaction graphs. At the same time, we use a multi-head graph attention network (GAT) as an augmentation-view generator to adaptively produce high-quality node-level augmented views. Finally, joint learning enables end-to-end training, in which the mutual supervision and collaborative cooperation of the GCN and GAT achieve learnable model augmentation. Extensive experiments on several benchmark datasets demonstrate that LMACL improves over the strongest baseline by 2.5-3.8% in Recall and 1.6-4.0% in NDCG, respectively. Our model implementation code is available at https://github.com/LiuHsinx/LMACL.
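The cross-view contrast described above (graph-level GCN representations against node-level GAT views) is typically trained with an InfoNCE-style objective, where matching node pairs across the two views are positives and all other cross-view pairs are negatives. The sketch below is a minimal, dependency-free illustration of such an objective; the function name `info_nce`, the temperature value, and the list-based embeddings are illustrative assumptions, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(view1, view2, tau=0.2):
    """InfoNCE loss over two views of the same node set.

    (view1[i], view2[i]) is the positive pair for node i; every
    (view1[i], view2[j]) with j != i serves as a negative pair.
    tau is the softmax temperature (illustrative default).
    """
    n = len(view1)
    loss = 0.0
    for i in range(n):
        logits = [cosine(view1[i], view2[j]) / tau for j in range(n)]
        # Numerically stable log-sum-exp over all cross-view similarities.
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / n
```

As expected of a contrastive objective, the loss is small when the two views agree on each node and grows when the views are mismatched, which is what pushes the GAT-generated view toward a useful augmentation of the GCN representation.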

Source journal
ACM Transactions on Knowledge Discovery from Data
Subject categories: Computer Science, Information Systems; Computer Science, Software Engineering
CiteScore: 6.70
Self-citation rate: 5.60%
Articles per year: 172
Review time: 3 months
About the journal: TKDD welcomes papers on a full range of research in the knowledge discovery and analysis of diverse forms of data. Such subjects include, but are not limited to: scalable and effective algorithms for data mining and big data analysis, mining brain networks, mining data streams, mining multi-media data, mining high-dimensional data, mining text, Web, and semi-structured data, mining spatial and temporal data, data mining for community generation, social network analysis, and graph structured data, security and privacy issues in data mining, visual, interactive and online data mining, pre-processing and post-processing for data mining, robust and scalable statistical methods, data mining languages, foundations of data mining, KDD framework and process, and novel applications and infrastructures exploiting data mining technology, including massively parallel processing and cloud computing platforms. TKDD encourages papers that explore the above subjects in the context of large distributed networks of computers, parallel or multiprocessing computers, or new data devices. TKDD also encourages papers that describe emerging data mining applications that cannot be satisfied by current data mining technology.