Explanations and User Control in Recommender Systems

D. Jannach, Michael Jugovac, Ingrid Nunes
{"title":"Explanations and User Control in Recommender Systems","authors":"D. Jannach, Michael Jugovac, Ingrid Nunes","doi":"10.1145/3345002.3349293","DOIUrl":null,"url":null,"abstract":"1 BACKGROUND The personalized selection and presentation of content have become common in today’s online world, for example on media streaming sites, e-commerce shops, and social networks. This automated personalization is often accomplished by recommender systems, which continuously collect and interpret information about the individual user. To determine which information items should be presented, these systems typically rely on machine learning. Over the last decades, a large variety of machine learning techniques of increasing complexity have been applied for building recommender systems. The recommendation models that are learned by such modern algorithms are, however, usually seen as black boxes. Technically, they often consist of values for hundreds or thousands of variables, making it impossible to provide a humanunderstandable rationale why a certain item is recommended to a particular user. Providing users with an explanation or at least with an intuition why an item is recommended can, however, be crucial, both for the acceptance of an individual recommendation and for the establishment of user trust towards the system as a whole [3]. 
Furthermore, such system-provided explanations can not only contribute to the acceptance of the system, but also serve as entry points for interactive approaches that allow users to give feedback as a means to correct system assumptions and, thus, take control of the recommendation process.","PeriodicalId":153835,"journal":{"name":"Proceedings of the 23rd International Workshop on Personalization and Recommendation on the Web and Beyond","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 23rd International Workshop on Personalization and Recommendation on the Web and Beyond","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3345002.3349293","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

1 BACKGROUND

The personalized selection and presentation of content have become common in today’s online world, for example on media streaming sites, e-commerce shops, and social networks. This automated personalization is often accomplished by recommender systems, which continuously collect and interpret information about the individual user. To determine which information items should be presented, these systems typically rely on machine learning. Over the last decades, a large variety of machine learning techniques of increasing complexity have been applied to building recommender systems. The recommendation models learned by such modern algorithms are, however, usually seen as black boxes. Technically, they often consist of values for hundreds or thousands of variables, making it impossible to provide a human-understandable rationale for why a certain item is recommended to a particular user. Providing users with an explanation, or at least an intuition, of why an item is recommended can, however, be crucial, both for the acceptance of an individual recommendation and for the establishment of user trust in the system as a whole [3]. Furthermore, such system-provided explanations can not only contribute to the acceptance of the system, but also serve as entry points for interactive approaches that allow users to give feedback as a means to correct system assumptions and, thus, take control of the recommendation process.
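The idea of coupling a recommendation with an explanation can be illustrated with a toy item-based collaborative-filtering sketch: the liked item that contributes most to a candidate's score doubles as the explanation ("because you liked X"). This is not the paper's method, only a minimal illustration; the item names, ratings, and function names below are all invented for the example.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items).
# All names and values are illustrative, not from the paper.
items = ["Matrix", "Inception", "Titanic", "Up"]
ratings = np.array([
    [5, 4, 0, 0],   # user 0
    [4, 5, 0, 1],   # user 1
    [0, 0, 5, 4],   # user 2
])

def item_similarity(R):
    # Cosine similarity between item columns; guard against all-zero columns.
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0
    Rn = R / norms
    return Rn.T @ Rn

def recommend_with_explanation(R, user, k=1):
    sim = item_similarity(R)
    liked = [j for j in range(R.shape[1]) if R[user, j] >= 4]
    scores, reasons = {}, {}
    for cand in range(R.shape[1]):
        if R[user, cand] > 0:
            continue  # skip items the user already rated
        scores[cand] = sum(sim[cand, j] * R[user, j] for j in liked)
        # The liked item with the largest contribution becomes the explanation.
        reasons[cand] = max(liked, key=lambda j: sim[cand, j] * R[user, j])
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [(items[c], f"because you liked {items[reasons[c]]}") for c in top]

print(recommend_with_explanation(ratings, user=2))
# → [('Inception', 'because you liked Up')]
```

Such an explanation also hints at the control loop the abstract describes: if the user tells the system the stated assumption is wrong (e.g., they did not actually like "Up"), that item can be dropped from `liked` and the recommendations recomputed.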