Bayesian Personalized Ranking with Multi-Channel User Feedback

B. Loni, Roberto Pagano, M. Larson, A. Hanjalic
{"title":"基于多渠道用户反馈的贝叶斯个性化排名","authors":"B. Loni, Roberto Pagano, M. Larson, A. Hanjalic","doi":"10.1145/2959100.2959163","DOIUrl":null,"url":null,"abstract":"Pairwise learning-to-rank algorithms have been shown to allow recommender systems to leverage unary user feedback. We propose Multi-feedback Bayesian Personalized Ranking (MF-BPR), a pairwise method that exploits different types of feedback with an extended sampling method. The feedback types are drawn from different \"channels\", in which users interact with items (e.g., clicks, likes, listens, follows, and purchases). We build on the insight that different kinds of feedback, e.g., a click versus a like, reflect different levels of commitment or preference. Our approach differs from previous work in that it exploits multiple sources of feedback simultaneously during the training process. The novelty of MF-BPR is an extended sampling method that equates feedback sources with \"levels\" that reflect the expected contribution of the signal. We demonstrate the effectiveness of our approach with a series of experiments carried out on three datasets containing multiple types of feedback. Our experimental results demonstrate that with a right sampling method, MF-BPR outperforms BPR in terms of accuracy. We find that the advantage of MF-BPR lies in its ability to leverage level information when sampling negative items.","PeriodicalId":315651,"journal":{"name":"Proceedings of the 10th ACM Conference on Recommender Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"116","resultStr":"{\"title\":\"Bayesian Personalized Ranking with Multi-Channel User Feedback\",\"authors\":\"B. Loni, Roberto Pagano, M. Larson, A. Hanjalic\",\"doi\":\"10.1145/2959100.2959163\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pairwise learning-to-rank algorithms have been shown to allow recommender systems to leverage unary user feedback. We propose Multi-feedback Bayesian Personalized Ranking (MF-BPR), a pairwise method that exploits different types of feedback with an extended sampling method. The feedback types are drawn from different \\\"channels\\\", in which users interact with items (e.g., clicks, likes, listens, follows, and purchases). We build on the insight that different kinds of feedback, e.g., a click versus a like, reflect different levels of commitment or preference. Our approach differs from previous work in that it exploits multiple sources of feedback simultaneously during the training process. The novelty of MF-BPR is an extended sampling method that equates feedback sources with \\\"levels\\\" that reflect the expected contribution of the signal. We demonstrate the effectiveness of our approach with a series of experiments carried out on three datasets containing multiple types of feedback. Our experimental results demonstrate that with a right sampling method, MF-BPR outperforms BPR in terms of accuracy. 
We find that the advantage of MF-BPR lies in its ability to leverage level information when sampling negative items.\",\"PeriodicalId\":315651,\"journal\":{\"name\":\"Proceedings of the 10th ACM Conference on Recommender Systems\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"116\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 10th ACM Conference on Recommender Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2959100.2959163\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 10th ACM Conference on Recommender Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2959100.2959163","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 116

Abstract

Pairwise learning-to-rank algorithms have been shown to allow recommender systems to leverage unary user feedback. We propose Multi-feedback Bayesian Personalized Ranking (MF-BPR), a pairwise method that exploits different types of feedback with an extended sampling method. The feedback types are drawn from different "channels", in which users interact with items (e.g., clicks, likes, listens, follows, and purchases). We build on the insight that different kinds of feedback, e.g., a click versus a like, reflect different levels of commitment or preference. Our approach differs from previous work in that it exploits multiple sources of feedback simultaneously during the training process. The novelty of MF-BPR is an extended sampling method that equates feedback sources with "levels" that reflect the expected contribution of the signal. We demonstrate the effectiveness of our approach with a series of experiments carried out on three datasets containing multiple types of feedback. Our experimental results demonstrate that with the right sampling method, MF-BPR outperforms BPR in terms of accuracy. We find that the advantage of MF-BPR lies in its ability to leverage level information when sampling negative items.
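To make the sampling idea concrete, the following is a minimal illustrative sketch of multi-channel, level-aware sampling for a BPR-style trainer. It assumes a hypothetical interaction log where each (user, item) pair is tagged with a feedback channel, a hand-picked channel-to-level ordering, and example channel weights; the names CHANNEL_LEVELS, CHANNEL_WEIGHTS, and sample_triple are invented for illustration and are not the paper's exact MF-BPR procedure.

```python
# Illustrative sketch of multi-channel, level-aware BPR sampling.
# Assumptions (not from the paper): channel ordering, channel weights,
# and the rule that a "negative" item is either unobserved or observed
# only through a strictly lower-level channel for that user.

import random
from collections import defaultdict

# Assumed ordering of feedback channels by level of commitment (low to high).
CHANNEL_LEVELS = {"click": 1, "like": 2, "purchase": 3}

# Assumed sampling weights: higher-commitment channels are drawn more often.
CHANNEL_WEIGHTS = {"click": 0.2, "like": 0.3, "purchase": 0.5}


def build_index(interactions):
    """Group observed (user, item, channel) records by channel and by user."""
    by_channel = defaultdict(list)      # channel -> [(user, item), ...]
    by_user_level = defaultdict(dict)   # user -> {item: highest observed level}
    for user, item, channel in interactions:
        by_channel[channel].append((user, item))
        level = CHANNEL_LEVELS[channel]
        by_user_level[user][item] = max(level, by_user_level[user].get(item, 0))
    return by_channel, by_user_level


def sample_triple(by_channel, by_user_level, all_items):
    """Draw one (user, positive_item, negative_item) training triple.

    The positive item comes from a channel chosen proportionally to its weight;
    the negative item is unobserved for that user or carries a weaker signal.
    """
    channels = list(CHANNEL_WEIGHTS)
    channel = random.choices(channels,
                             weights=[CHANNEL_WEIGHTS[c] for c in channels])[0]
    user, pos_item = random.choice(by_channel[channel])
    pos_level = CHANNEL_LEVELS[channel]

    while True:  # rejection sampling; terminates quickly for sparse feedback
        neg_item = random.choice(all_items)
        neg_level = by_user_level[user].get(neg_item, 0)  # 0 = unobserved
        if neg_level < pos_level:
            return user, pos_item, neg_item


if __name__ == "__main__":
    # Toy interaction log: (user, item, channel).
    interactions = [
        ("u1", "i1", "click"), ("u1", "i2", "purchase"),
        ("u2", "i1", "like"), ("u2", "i3", "click"),
    ]
    items = ["i1", "i2", "i3", "i4"]
    by_channel, by_user_level = build_index(interactions)
    print(sample_triple(by_channel, by_user_level, items))
```

Each sampled triple would then feed a standard BPR update, which pushes the score of the positive item above that of the negative item for the sampled user; the sketch only changes how triples are drawn, not the pairwise objective itself.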