Human Strategic Steering Improves Performance of Interactive Optimization

Fabio Colella, Pedram Daee, Jussi P. P. Jokinen, Antti Oulasvirta, Samuel Kaski
DOI: 10.1145/3340631.3394883
Published in: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (May 4, 2020)
Citations: 6

Abstract

A central concern in an interactive intelligent system is the optimization of its actions to be maximally helpful to its human user. In recommender systems, for instance, the action is to choose what to recommend, and the optimization task is to recommend items the user prefers. The optimization is based on the user's earlier feedback (e.g. "likes" and "dislikes"), and the algorithms assume the feedback to be faithful. That is, when the user clicks "like," they actually prefer the item. We argue that this fundamental assumption can be extensively violated by human users, who are not passive feedback sources. Instead, they are in control, actively steering the system towards their goal. To verify this hypothesis, that humans steer and are able to improve performance by steering, we designed a function optimization task where a human and an optimization algorithm collaborate to find the maximum of a 1-dimensional function. At each iteration, the optimization algorithm queries the user for the value of a hidden function f at a point x, and the user, who sees the hidden function, provides an answer about f(x). Our study with 21 participants shows that users who understand how the optimization works strategically provide biased answers (answers not equal to f(x)), which results in the algorithm finding the optimum significantly faster. Our work highlights that next-generation intelligent systems will need user models capable of helping users who steer systems to pursue their goals.
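The query loop described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: the hidden function, the Gaussian-process surrogate with a UCB acquisition, and the two simulated users (a faithful one that reports f(x) exactly, and a strategic one that inflates its answers near the true optimum) are all assumptions made for the sake of the example.

```python
import numpy as np

def f(x):
    # Hypothetical hidden 1-D function on [0, 2]; its maximum is near x = 0.58.
    return np.sin(3 * x) + 0.5 * x

def gp_posterior(X, y, Xs, ls=0.3, noise=1e-6):
    # Squared-exponential GP regression: posterior mean and std on grid Xs.
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def run(user, n_iter=15, seed=0):
    # At each iteration the optimizer queries the user for f at one point x;
    # the user's answer (faithful or biased) is all the optimizer ever sees.
    rng = np.random.default_rng(seed)
    grid = np.linspace(0, 2, 200)
    X = np.array([rng.uniform(0, 2)])
    y = np.array([user(X[0])])
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + 2.0 * sd)]  # UCB acquisition
        X = np.append(X, x_next)
        y = np.append(y, user(x_next))
    mu, _ = gp_posterior(X, y, grid)
    return grid[np.argmax(mu)]  # optimizer's final guess of the argmax

def strategic(x, x_goal=0.579):
    # A steering user: exaggerates values near the known optimum and
    # deflates them elsewhere, i.e. answers not equal to f(x).
    return f(x) - 0.5 + np.exp(-(x - x_goal) ** 2 / 0.02)

faithful_guess = run(f)          # user reports f(x) truthfully
strategic_guess = run(strategic) # user biases answers to steer the search
```

In this toy setup both users lead the optimizer to the optimum; the paper's finding is that strategic, biased answers get it there in significantly fewer queries.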