MARLUI: Multi-Agent Reinforcement Learning for Adaptive Point-and-Click UIs

Thomas Langerak, Christoph Gebhardt, Christian Holz, Sammy Christen, Mert Albaba (ETH Zürich, Switzerland)

Proceedings of the ACM on Human-Computer Interaction, pp. 1–27. Published: 2024-06-17. DOI: 10.1145/3661147. JCR: Q1 (Social Sciences). Citations: 0.

Abstract

As the number of selectable items increases, point-and-click interfaces rapidly become complex, leading to a decrease in usability. Adaptive user interfaces can reduce this complexity by automatically adjusting an interface to only display the most relevant items. A core challenge for developing adaptive interfaces is to infer user intent and choose adaptations accordingly. Current methods rely on tediously hand-crafted rules or carefully collected user data. Furthermore, heuristics need to be redesigned and data regathered for every new task and interface. To address this issue, we formulate interface adaptation as a multi-agent reinforcement learning problem. Our approach learns adaptation policies without relying on heuristics or real user data, facilitating the development of adaptive interfaces across various tasks with minimal adjustments needed. In our formulation, a user agent mimics a real user and learns to interact with an interface via point-and-click actions. Simultaneously, an interface agent learns interface adaptations that maximize the user agent's efficiency by observing the user agent's behavior. For our evaluation, we replaced the simulated user agent with actual users. Our study involved twelve participants and concentrated on automatic toolbar item assignment. The results show that the policies we developed in simulation effectively apply to real users. These users were able to complete tasks with fewer actions and in similar times compared to methods trained with real data. Additionally, we demonstrated our method's efficiency and generalizability across four different interfaces and tasks.
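The two-agent loop described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation (their agents are trained with reinforcement learning); here, purely for illustration, the interface agent is approximated by a frequency heuristic and the user agent by a fixed click-cost model, with all names hypothetical:

```python
import random

N_ITEMS = 12   # total selectable items
SLOTS = 4      # size of the adaptive toolbar

def interface_policy(click_counts):
    """Interface agent (stand-in): show the SLOTS most frequently used items."""
    return sorted(range(N_ITEMS), key=lambda i: -click_counts[i])[:SLOTS]

def user_policy(goal, toolbar):
    """User agent (stand-in): one click if the goal item is on the toolbar,
    otherwise open the full menu first (two clicks)."""
    return 1 if goal in toolbar else 2

def simulate(episodes=1000, seed=0):
    rng = random.Random(seed)
    click_counts = [0] * N_ITEMS
    total_clicks = 0
    for _ in range(episodes):
        # Skewed goal distribution: a few items are needed most of the time.
        goal = rng.choice([0, 0, 0, 1, 1, 2] + list(range(N_ITEMS)))
        toolbar = interface_policy(click_counts)
        total_clicks += user_policy(goal, toolbar)
        click_counts[goal] += 1   # interface agent observes user behavior
    return total_clicks / episodes

print(f"mean clicks per selection: {simulate():.2f}")
```

The key structural idea mirrors the paper's formulation: the interface agent only observes the user agent's behavior (here, click counts) and adapts the interface to reduce the user agent's action cost, with neither agent given the user's goal directly.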
Source Journal

Proceedings of the ACM on Human-Computer Interaction — Social Sciences (miscellaneous). CiteScore: 5.90. Self-citation rate: 0.00%. Articles per year: 257.