A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems

Impact Factor: 2.2 · JCR Q3 (Engineering, Industrial)
Lillian M. Rigoli, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, S. Hosking, Christopher J. Best, Michael J. Richardson
Journal of Cognitive Engineering and Decision Making, Vol. 16, No. 1, pp. 79–100
Published: 2022-04-25 · DOI: 10.1177/15553434221092930 · Citations: 4

Abstract

Effective team performance often requires that individuals engage in team training exercises. However, organizing team-training scenarios presents economic and logistical challenges and can be prone to trainer bias and fatigue. Accordingly, a growing body of research is investigating the effectiveness of employing artificial agents (AAs) as synthetic teammates in team training simulations, and, relatedly, how to best develop AAs capable of robust, human-like behavioral interaction. Motivated by these challenges, the current study examined whether task dynamical models of expert human herding behavior could be embedded in the control architecture of AAs to train novice actors to perform a complex multiagent herding task. Training outcomes were compared to human-expert trainers, novice baseline performance, and AAs developed using deep reinforcement learning (DRL). Participants’ subjective preferences for the AAs developed using DRL or dynamical models of human performance were also investigated. The results revealed that AAs controlled by dynamical models of human expert performance could train novice actors at levels equivalent to expert human trainers and were also preferred over AAs developed using DRL. The implications for the development of AAs for robust human-AA interaction and training are discussed, including the potential benefits of employing hybrid Dynamical-DRL techniques for AA development.
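The abstract contrasts AAs driven by task-dynamical models of expert behavior with AAs trained via DRL, but does not give the model equations. As a purely illustrative sketch of the general idea of embedding a perceptual-motor primitive in an AA's control architecture, the following hypothetical controller steers a point-mass herder with damped mass-spring (attractor) dynamics toward a point just beyond the herd member farthest from the containment region. All parameter values and the targeting heuristic are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def dpmp_controller(agent_pos, agent_vel, herd_positions, center,
                    b=4.0, k=8.0, offset=0.3, dt=0.01):
    """One step of a hypothetical dynamical-primitive herding controller.

    The AA is a point mass whose motion follows damped mass-spring
    (attractor) dynamics toward a goal placed slightly outside the
    herd member currently farthest from the containment center, so
    that approaching it pushes that member back toward the center.
    b, k, offset, and dt are illustrative values, not fitted ones.
    """
    # Pick the herd member farthest from the containment center.
    dists = np.linalg.norm(herd_positions - center, axis=1)
    target = herd_positions[np.argmax(dists)]
    # Place the goal just beyond that member, along the outward radial.
    outward = (target - center) / np.linalg.norm(target - center)
    goal = target + offset * outward
    # Damped mass-spring dynamics toward the goal (Euler integration).
    acc = -b * agent_vel - k * (agent_pos - goal)
    agent_vel = agent_vel + acc * dt
    agent_pos = agent_pos + agent_vel * dt
    return agent_pos, agent_vel
```

Because the controller is a low-dimensional dynamical system rather than a learned policy, its parameters can in principle be fit directly to expert human trajectories, which is the kind of embedding the study evaluates against DRL-trained agents.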
Source journal metrics: CiteScore 4.60 · Self-citation rate 10.00% · Articles published per year: 21