A Local Context Enhanced Consistency-aware Mamba-based Sequential Recommendation Model

IF 7.4 | CAS Zone 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Zhu Zhang, Bo Yang, Yimeng Lu
{"title":"A Local context enhanced Consistency-aware Mamba-based Sequential Recommendation model","authors":"Zhu Zhang,&nbsp;Bo Yang,&nbsp;Yimeng Lu","doi":"10.1016/j.ipm.2025.104076","DOIUrl":null,"url":null,"abstract":"<div><div>Sequential recommendation (SR) focuses on capturing users’ interests from their historical behaviors. Transformer-based SR models have demonstrated promising performance by leveraging self-attention for sequential modeling. Recently, Mamba, a novel sequential model, has shown competitive performance compared to Transformers. In SR tasks, item representation learning involves both global and local context information. While several existing SR models attempt to address this integration, they suffer from inferior performance or computational inefficiency. Moreover, existing Mamba-based SR model appears to capture only the global context information. Given Mamba’s merits in enhancing model performance and efficiency, there is substantial potential to more effectively integrate both global and local context information within a Mamba-based framework. Additionally, consistency training, which is pivotal for enhancing model performance, remains underexplored in existing SR models.</div><div>To tackle these challenges, we propose a Local Context Enhanced Consistency-aware Mamba-based Sequential Recommendation Model (LC-Mamba). LC-Mamba captures both global and local context information to improve recommendation performance. Specifically, LC-Mamba leverages a GNN-based sequence encoder to extract information from local neighbors for each item (local context information) in a graph view, while utilizing a Mamba-based sequence encoder to capture dependencies between items in the sequence (global context information) in a sequential view. Furthermore, we introduce consistency training, including model-level and representation-level consistency, to further enhance performance. Specifically, we incorporate R-Drop regularization into the Mamba-based sequence encoder to mitigate the inconsistency between training and inference caused by random dropout (model-level consistency). Additionally, we leverage contrastive learning to enhance consistency between the item representations learned from the sequential and graph views (representation-level consistency). Extensive experiments on three widely used datasets illustrate that LC-Mamba outperforms baseline models in HR and NDCG, achieving up to a 31.03% improvement in NDCG. LC-Mamba can be applied to real-world applications such as e-commerce and content platforms.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 3","pages":"Article 104076"},"PeriodicalIF":7.4000,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325000184","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Sequential recommendation (SR) focuses on capturing users’ interests from their historical behaviors. Transformer-based SR models have demonstrated promising performance by leveraging self-attention for sequential modeling. Recently, Mamba, a novel sequential model, has shown competitive performance compared to Transformers. In SR tasks, item representation learning involves both global and local context information. While several existing SR models attempt to integrate the two, they suffer from inferior performance or computational inefficiency. Moreover, the existing Mamba-based SR model appears to capture only global context information. Given Mamba’s merits in model performance and efficiency, there is substantial potential to integrate global and local context information more effectively within a Mamba-based framework. Additionally, consistency training, which is pivotal for enhancing model performance, remains underexplored in existing SR models.
To tackle these challenges, we propose a Local Context Enhanced Consistency-aware Mamba-based Sequential Recommendation Model (LC-Mamba), which captures both global and local context information to improve recommendation performance. Specifically, LC-Mamba leverages a GNN-based sequence encoder to extract information from each item’s local neighbors in a graph view (local context), while utilizing a Mamba-based sequence encoder to capture dependencies between items in a sequential view (global context). Furthermore, we introduce consistency training at both the model level and the representation level to further enhance performance. At the model level, we incorporate R-Drop regularization into the Mamba-based sequence encoder to mitigate the inconsistency between training and inference caused by random dropout. At the representation level, we leverage contrastive learning to align the item representations learned from the sequential and graph views. Extensive experiments on three widely used datasets show that LC-Mamba outperforms baseline models in HR and NDCG, achieving up to a 31.03% improvement in NDCG. LC-Mamba can be applied to real-world applications such as e-commerce and content platforms.
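The model-level consistency described above follows the general R-Drop recipe: run the same input through the network twice so that independent dropout masks yield two predictive distributions, then penalize their divergence alongside the usual task loss. Below is a minimal PyTorch sketch of that idea; the encoder interface, the next-item cross-entropy target, and the weight `alpha` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def r_drop_loss(model, seq, targets, alpha=1.0):
    """R-Drop-style consistency: two forward passes with independent
    dropout masks, plus a symmetric KL term penalizing disagreement."""
    # In training mode, each call draws a fresh dropout mask,
    # so logits1 and logits2 generally differ.
    logits1 = model(seq)
    logits2 = model(seq)

    # Standard next-item cross-entropy, averaged over the two passes.
    ce = 0.5 * (F.cross_entropy(logits1, targets)
                + F.cross_entropy(logits2, targets))

    # Symmetric KL divergence between the two predictive distributions.
    p1 = F.log_softmax(logits1, dim=-1)
    p2 = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
                + F.kl_div(p2, p1, log_target=True, reduction="batchmean"))

    return ce + alpha * kl
```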
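Representation-level consistency aligns the item embeddings produced by the graph view and the sequential view. A common instantiation of such cross-view contrastive learning is InfoNCE with in-batch negatives; the sketch below assumes aligned embedding matrices `z_seq` and `z_graph` of shape (batch, dim) and an illustrative temperature `tau`, and should be read as one plausible form of the objective rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def view_consistency_loss(z_seq, z_graph, tau=0.2):
    """Cross-view InfoNCE: pull each item's sequential-view and
    graph-view embeddings together; push apart embeddings of
    different items within the batch."""
    z_seq = F.normalize(z_seq, dim=-1)
    z_graph = F.normalize(z_graph, dim=-1)

    # (batch, batch) cosine-similarity matrix scaled by temperature;
    # the diagonal holds the positive (same-item) pairs.
    logits = z_seq @ z_graph.t() / tau
    labels = torch.arange(z_seq.size(0), device=z_seq.device)

    # Symmetric: each view serves once as anchor, once as positive.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```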
Source Journal

Information Processing & Management

CAS category: Engineering Technology – Computer: Information Systems
CiteScore: 17.00
Self-citation rate: 11.60%
Articles published: 276
Review time: 39 days

About the journal: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology marketing, and social computing. We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.