Wei Jiang , Yongquan Fan , Jing Tang , Xianyong Li , Yajun Du , Xiaomin Wang
{"title":"基于去噪曼巴的序列推荐长短期偏好分层建模","authors":"Wei Jiang , Yongquan Fan , Jing Tang , Xianyong Li , Yajun Du , Xiaomin Wang","doi":"10.1016/j.ipm.2025.104425","DOIUrl":null,"url":null,"abstract":"<div><div>Recent advancements in Mamba-based models have shown promising potential for sequential recommendation due to their linear scalability. However, existing Mamba-based approaches still suffer from three key limitations: (1) insufficient capability in modeling short-term user preference transitions, (2) limited robustness to noise in long interaction sequences, and (3) insufficient exploitation of rich side information (e.g., item attributes). To address these challenges, we propose HLSDMRec, a hierarchical preference modeling model that integrates a denoised Mamba module for capturing robust long-term preferences and a Local LSTM module for learning fine-grained short-term preferences. HLSDMRec adopts a hierarchical dual-path architecture that jointly models item ID and side information sequences, extracting both long and short-term preferences from each. To ensure representation consistency, a hierarchical alignment module is applied and a motivation-aware gating mechanism adaptively fuses the extracted signals based on user intent. Experiments on four datasets, including Amazon Beauty (0.19M interactions), Sports (0.29M interactions), ML-1M (1M interactions), and ML-10M (10M interactions), demonstrate average improvements of 6.06% in HR@5, 4.75% in HR@10, 11.25% in NDCG@5, and 10.66% in NDCG@10 over the baseline models. 
The source code for our model is publicly available at <span><span>https://github.com/rookie2568/hlsdmrec</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 2","pages":"Article 104425"},"PeriodicalIF":6.9000,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hierarchical long and short-term preference modeling with denoising Mamba for sequential recommendation\",\"authors\":\"Wei Jiang , Yongquan Fan , Jing Tang , Xianyong Li , Yajun Du , Xiaomin Wang\",\"doi\":\"10.1016/j.ipm.2025.104425\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recent advancements in Mamba-based models have shown promising potential for sequential recommendation due to their linear scalability. However, existing Mamba-based approaches still suffer from three key limitations: (1) insufficient capability in modeling short-term user preference transitions, (2) limited robustness to noise in long interaction sequences, and (3) insufficient exploitation of rich side information (e.g., item attributes). To address these challenges, we propose HLSDMRec, a hierarchical preference modeling model that integrates a denoised Mamba module for capturing robust long-term preferences and a Local LSTM module for learning fine-grained short-term preferences. HLSDMRec adopts a hierarchical dual-path architecture that jointly models item ID and side information sequences, extracting both long and short-term preferences from each. To ensure representation consistency, a hierarchical alignment module is applied and a motivation-aware gating mechanism adaptively fuses the extracted signals based on user intent. 
Experiments on four datasets, including Amazon Beauty (0.19M interactions), Sports (0.29M interactions), ML-1M (1M interactions), and ML-10M (10M interactions), demonstrate average improvements of 6.06% in HR@5, 4.75% in HR@10, 11.25% in NDCG@5, and 10.66% in NDCG@10 over the baseline models. The source code for our model is publicly available at <span><span>https://github.com/rookie2568/hlsdmrec</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"63 2\",\"pages\":\"Article 104425\"},\"PeriodicalIF\":6.9000,\"publicationDate\":\"2025-10-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325003668\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325003668","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Hierarchical long and short-term preference modeling with denoising Mamba for sequential recommendation
Recent advancements in Mamba-based models have shown promising potential for sequential recommendation due to their linear scalability. However, existing Mamba-based approaches still suffer from three key limitations: (1) insufficient capability in modeling short-term user preference transitions, (2) limited robustness to noise in long interaction sequences, and (3) insufficient exploitation of rich side information (e.g., item attributes). To address these challenges, we propose HLSDMRec, a hierarchical preference modeling framework that integrates a denoised Mamba module for capturing robust long-term preferences and a local LSTM module for learning fine-grained short-term preferences. HLSDMRec adopts a hierarchical dual-path architecture that jointly models item ID and side-information sequences, extracting both long- and short-term preferences from each. To ensure representation consistency, a hierarchical alignment module is applied, and a motivation-aware gating mechanism adaptively fuses the extracted signals based on user intent. Experiments on four datasets, including Amazon Beauty (0.19M interactions), Sports (0.29M interactions), ML-1M (1M interactions), and ML-10M (10M interactions), demonstrate average improvements of 6.06% in HR@5, 4.75% in HR@10, 11.25% in NDCG@5, and 10.66% in NDCG@10 over the baseline models. The source code for our model is publicly available at https://github.com/rookie2568/hlsdmrec.
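To make the fusion step concrete, the sketch below illustrates the general idea of a learned gate that adaptively blends a long-term preference vector (from the denoised-Mamba path) with a short-term one (from the local LSTM path). This is a minimal NumPy illustration of gated fusion in general; all names, shapes, and the specific gate parameterization are assumptions, not the authors' released code.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): a sigmoid gate g
# decides, dimension by dimension, how much of the long-term vs. short-term
# preference vector enters the fused user representation.

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative assumption)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(h_long, h_short, W, b):
    """Gated fusion: g in (0, 1) blends the two preference vectors."""
    g = sigmoid(np.concatenate([h_long, h_short]) @ W + b)  # shape (d,)
    return g * h_long + (1.0 - g) * h_short

h_long = rng.standard_normal(d)   # long-term preference (Mamba path)
h_short = rng.standard_normal(d)  # short-term preference (LSTM path)
W = rng.standard_normal((2 * d, d)) * 0.1  # gate weights (learned in practice)
b = np.zeros(d)                            # gate bias

fused = fuse(h_long, h_short, W, b)
print(fused.shape)  # (8,)
```

Because the gate lies in (0, 1), each fused dimension is a convex combination of the two inputs, so neither preference signal can be amplified beyond its original magnitude; in the paper the gate is additionally conditioned on user intent ("motivation-aware"), which this sketch omits.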
About the journal:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.