Learning User Preferences by Observing User-Items Interactions in an IoT Augmented Space

David Massimo, Mehdi Elahi, F. Ricci
{"title":"Learning User Preferences by Observing User-Items Interactions in an IoT Augmented Space","authors":"David Massimo, Mehdi Elahi, F. Ricci","doi":"10.1145/3099023.3099070","DOIUrl":null,"url":null,"abstract":"Recommender systems generate recommendations by analysing which items the user consumes or likes. Moreover, in many scenarios, e.g., when a user is visiting an exhibition or a city, users are faced with a sequence of decisions, and the recommender should therefore suggest, at each decision step, a set of viable recommendations (attractions). In these scenarios the order and the context of the past user choices is a valuable source of data, and the recommender has to effectively exploit this information for understanding the user preferences in order to recommend compelling items. For addressing these scenarios, this paper proposes a novel preference learning model that takes into account the sequential nature of item consumption. The model is based on Inverse Reinforcement Learning, which enables to exploit observations of users' behaviours, when they are making decisions and taking actions, i.e., choosing the items to consume. The results of a proof of concept experiment show that the proposed model can effectively capture the user preferences, the rationale of users decision making process when consuming items in a sequential manner, and can replicate the observed user behaviours.","PeriodicalId":219391,"journal":{"name":"Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3099023.3099070","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

Recommender systems generate recommendations by analysing which items the user consumes or likes. Moreover, in many scenarios, e.g., when a user is visiting an exhibition or a city, users face a sequence of decisions, and the recommender should therefore suggest, at each decision step, a set of viable recommendations (attractions). In these scenarios the order and the context of the past user choices are a valuable source of data, and the recommender has to exploit this information effectively to understand the user's preferences and recommend compelling items. To address these scenarios, this paper proposes a novel preference learning model that takes into account the sequential nature of item consumption. The model is based on Inverse Reinforcement Learning, which makes it possible to exploit observations of users' behaviour as they make decisions and take actions, i.e., choose the items to consume. The results of a proof-of-concept experiment show that the proposed model can effectively capture the user preferences and the rationale of the users' decision-making process when consuming items sequentially, and can replicate the observed user behaviour.
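The abstract does not specify which Inverse Reinforcement Learning formulation the model builds on. The sketch below illustrates one common choice, maximum-entropy IRL with a linear reward over item features, applied to observed visit sequences (e.g., attractions viewed in an exhibition). The function name, parameters, and the choice of MaxEnt IRL itself are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def maxent_irl(phi, P, trajectories, gamma=0.9, lr=0.05, epochs=100):
    """Hedged sketch: learn reward weights theta so that r(s) = theta . phi(s)
    explains the observed visit sequences (maximum-entropy IRL, linear reward).

    phi:          (n_states, n_features) item/POI feature matrix
    P:            (n_actions, n_states, n_states), P[a, s, s2] = Pr(s2 | s, a)
    trajectories: list of observed state-index sequences (user visits)
    """
    n_states, n_features = phi.shape
    Pt = P.transpose(1, 0, 2)            # (n_states, n_actions, n_states)
    theta = np.zeros(n_features)

    # Empirical feature expectations of the demonstrated behaviour.
    emp_fc = np.mean([phi[t].sum(axis=0) for t in trajectories], axis=0)

    # Empirical start-state distribution and rollout horizon.
    start = np.zeros(n_states)
    for t in trajectories:
        start[t[0]] += 1.0 / len(trajectories)
    horizon = max(len(t) for t in trajectories)

    for _ in range(epochs):
        reward = phi @ theta

        # Soft value iteration under the current reward estimate.
        V = np.zeros(n_states)
        for _ in range(50):
            Q = reward[:, None] + gamma * np.einsum('sat,t->sa', Pt, V)
            Qmax = Q.max(axis=1, keepdims=True)
            V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
        policy = np.exp(Q - V[:, None])  # stochastic policy pi(a | s)

        # Expected state-visitation frequencies induced by that policy.
        d = start.copy()
        D = start.copy()
        for _ in range(horizon - 1):
            d = np.einsum('s,sa,sat->t', d, policy, Pt)
            D += d

        # Gradient ascent on the log-likelihood:
        # empirical feature counts minus expected feature counts.
        theta += lr * (emp_fc - D @ phi)

    return theta
```

Under this reading, the learned reward acts as the user preference model: at each decision step the induced policy (or the soft Q-values) can be used to rank the not-yet-visited items and to replicate the observed sequential behaviour.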