High-level preferences as positive examples in contrastive learning for multi-interest sequential recommendation

Zizhong Zhu, Shuang Li, Yaokun Liu, Xiaowang Zhang, Zhiyong Feng, Yuexian Hou
DOI: 10.1007/s11280-024-01263-6 (https://doi.org/10.1007/s11280-024-01263-6)
Journal: World Wide Web
Published: 2024-03-14 (Journal Article)
Citations: 0

Abstract

The sequential recommendation task based on the multi-interest framework aims to model users' multiple interests from different aspects in order to predict their future interactions. However, researchers rarely consider the differences in features between the interests generated by the model; in extreme cases, all interest capsules carry the same meaning, so modeling users with multiple interests fails. To address this issue, we propose the High-level Preferences as positive examples in Contrastive Learning for multi-interest Sequential Recommendation framework (HPCL4SR), which uses contrastive learning to distinguish differences between interests based on user-item interaction information. To find high-quality contrastive examples, this paper introduces category information to construct a global graph and learns the associations between categories to obtain users' high-level preference interests. A multi-layer perceptron is then used to adaptively fuse the low-level preference interest features from the user's items with the high-level preference interest features from the categories. Finally, multi-interest contrastive samples for each user are obtained from the item sequence and its corresponding categories, and these are fed into contrastive learning to optimize the model parameters and generate multi-interest representations that better match the user's sequence. In addition, when modeling the user's item sequence, the item's category is used to supervise the learning process in order to increase the differentiation between item representations. Extensive experiments on three real-world datasets demonstrate that our method outperforms existing multi-interest recommendation models.
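The abstract does not spell out the exact loss, but the core idea (each low-level interest capsule treats its matching high-level, category-derived interest as the positive example and the other capsules' high-level interests as negatives) can be sketched with an InfoNCE-style objective. The function name, tensor shapes, and temperature below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def multi_interest_contrastive_loss(low_interests, high_interests, temperature=0.1):
    """InfoNCE-style sketch: pull each low-level interest capsule toward its
    matching high-level (category-graph) interest, push it from the others.

    low_interests:  (K, d) interest capsules from the user's item sequence
    high_interests: (K, d) high-level preference interests (positives)
    """
    z_low = F.normalize(low_interests, dim=-1)
    z_high = F.normalize(high_interests, dim=-1)
    # (K, K) cosine-similarity logits; the diagonal holds the positive pairs
    logits = z_low @ z_high.T / temperature
    labels = torch.arange(z_low.size(0))  # positive = same capsule index
    return F.cross_entropy(logits, labels)

K, d = 4, 32  # e.g. 4 interest capsules of dimension 32
loss = multi_interest_contrastive_loss(torch.randn(K, d), torch.randn(K, d))
```

A symmetric variant (also contrasting high-level against low-level interests) would simply average this loss with its transpose.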

