A Preference Learning Decoupling Framework for User Cold-Start Recommendation

Chunyang Wang, Yanmin Zhu, Aixin Sun, Zhaobo Wang, K. Wang
DOI: 10.1145/3539618.3591627
Published in: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publication date: 2023-07-18
Citations: 1

Abstract

User cold-start poses a long-standing challenge to recommendation systems, owing to the scarcity of interactions from new users. Recently, meta-learning based studies treat each cold-start user as a user-specific few-shot task and then derive meta-knowledge about fast model adaptation across training users. However, existing solutions mostly do not clearly distinguish between the concept of new users and that of novel preferences, leading to over-reliance on meta-learning based adaptability to novel patterns. In addition, we argue that existing meta-training task construction inherently suffers from memorization overfitting, which inevitably hinders meta-generalization to new users. In response to these issues, we propose a preference learning decoupling framework enhanced with meta-augmentation (PDMA) for user cold-start recommendation. To rescue meta-learning from unnecessary adaptation to common patterns, our framework decouples preference learning for a cold-start user into two complementary aspects: common preference transfer and novel preference adaptation. To handle the memorization overfitting issue, we further propose to augment meta-training users by injecting attribute-based noise, so as to achieve mutually-exclusive tasks. Extensive experiments on benchmark datasets demonstrate that our framework achieves superior performance over state-of-the-art methods. We also show that the proposed framework is effective in alleviating memorization overfitting.
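The abstract's meta-augmentation step, injecting attribute-based noise into meta-training users so that tasks become mutually exclusive, can be illustrated with a rough sketch. This is not the paper's implementation; the function name, the Gaussian noise model, and the `noise_scale` parameter are all illustrative assumptions, standing in for whatever attribute perturbation PDMA actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_user_task(attributes, ratings, noise_scale=0.1, n_copies=2):
    """Create augmented copies of one meta-training user (i.e., one few-shot
    task) by perturbing the user's attribute vector with Gaussian noise.

    Hypothetical sketch of attribute-based noise injection: the same ratings
    now pair with different attribute inputs across copies, so a meta-learner
    cannot memorize a fixed attributes-to-ratings mapping and must adapt
    per task (the "mutually-exclusive tasks" idea)."""
    tasks = [(attributes, ratings)]  # keep the original task
    for _ in range(n_copies):
        noisy = attributes + rng.normal(0.0, noise_scale, size=attributes.shape)
        tasks.append((noisy, ratings))
    return tasks

# Toy example: one user with a 4-dim attribute vector and 3 observed ratings.
attrs = np.array([0.2, -1.0, 0.5, 0.3])
ratings = np.array([5.0, 3.0, 4.0])
augmented = augment_user_task(attrs, ratings)
```

Each augmented copy shares the user's support ratings but presents perturbed attributes, which is one simple way to realize the augmentation described above.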