Towards Integrating Real-Time Crowd Advice with Reinforcement Learning

G. V. D. L. Cruz, Bei Peng, Walter S. Lasecki, Matthew E. Taylor
DOI: 10.1145/2732158.2732180
Published: 2015-03-29, Proceedings of the 20th International Conference on Intelligent User Interfaces Companion
Citations: 8

Abstract

Reinforcement learning is a powerful machine learning paradigm that allows agents to autonomously learn to maximize a scalar reward. However, it often suffers from poor initial performance and long learning times. This paper discusses how collecting on-line human feedback, both in real time and post hoc, can potentially improve the performance of such learning systems. We use the game Pac-Man to simulate a navigation setting and show that workers are able to accurately identify both when a sub-optimal action is executed, and what action should have been performed instead. Demonstrating that the crowd is capable of generating this input, and discussing the types of errors that occur, serves as a critical first step in designing systems that use this real-time feedback to improve systems' learning performance on-the-fly.
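The abstract describes crowd workers who flag when a sub-optimal action is taken and say which action should have been taken instead. As a rough illustration of how such advice could be folded into a learner, here is a minimal tabular Q-learning sketch. This is not the authors' system: the grid world, reward values, advice-following probability, and the `crowd_advice` function are all illustrative assumptions.

```python
import random

random.seed(0)  # deterministic run for reproducibility

ACTIONS = ["up", "down", "left", "right"]

def crowd_advice(state):
    """Hypothetical stand-in for aggregated worker suggestions:
    advise moving right until column 3, then down toward the goal."""
    x, y = state
    if x < 3:
        return "right"
    if y < 3:
        return "down"
    return None  # no advice at the goal

def step(state, action):
    """Deterministic 4x4 grid; reaching the goal (3, 3) yields +10, each move -1."""
    x, y = state
    if action == "up":
        y = max(0, y - 1)
    elif action == "down":
        y = min(3, y + 1)
    elif action == "left":
        x = max(0, x - 1)
    elif action == "right":
        x = min(3, x + 1)
    new_state = (x, y)
    reward = 10 if new_state == (3, 3) else -1
    return new_state, reward, new_state == (3, 3)

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, advice_prob=0.5):
    """Q-learning where crowd advice, when available, sometimes overrides
    epsilon-greedy action selection (probability `advice_prob`)."""
    q = {((x, y), a): 0.0 for x in range(4) for y in range(4) for a in ACTIONS}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            advice = crowd_advice(state)
            if advice is not None and random.random() < advice_prob:
                action = advice                      # follow the crowd's suggestion
            elif random.random() < epsilon:
                action = random.choice(ACTIONS)      # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
```

After training, the Q-values along the advised path (right along the bottom row, then down the last column) are well-visited, so the greedy policy from the start heads toward the goal; the advice effectively shortens the early random-exploration phase that the abstract identifies as RL's weakness.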