Measurement and Manipulation in Human-Agent Teams: A Review

Maartje Hidalgo, S. Rebensky, Daniel Nguyen, Myke C. Cohen, Lauren Temple, Brent D. Fegley
{"title":"Measurement and Manipulation in Human-Agent Teams: A Review","authors":"Maartje Hidalgo, S. Rebensky, Daniel Nguyen, Myke C. Cohen, Lauren Temple, Brent D. Fegley","doi":"10.54941/ahfe1003559","DOIUrl":null,"url":null,"abstract":"In this era of the Fourth Industrial Revolution, increasingly autonomous and intelligent artificial agents become more integrated into our daily lives. As such, these agents are capable of conducting independent tasks within a teaming setting, while also becoming more socially invested in the team space. While ample human-teaming theories help understand, explain, and predict the outcome of team endeavors, such theories are not yet existent for human-agent teaming. Furthermore, the development and evaluations of agents are constantly evolving. As a result, many developers utilize their own test plans and their own measures making it difficult to compare findings across agent developers. Many agent developers looking to capture human-team behaviors may not sufficiently understand the benefits of specific team processes and the challenges of measuring these constructs. Ineffective team scenarios and measures could lead to unrepresentative training datasets, prolonged agent development timelines, and less effective agent predictions. With the appropriate measures and conditions, an agent would be able to determine deficits in team processes early enough to intervene during performance. This paper is a step in the direction toward the formulation of a theory of human-agent teaming, wherein we conducted a literature review of team processes that are measurable in order to predict team performance and outcomes. The frameworks presented leverage multiple teaming frameworks such as Marks et al.’s (2001) team process model, the IMOI model (Ilgen, 20005), Salas et al.’s big five model (2005) as well as more modern frameworks on human agent teaming such as Carter-Browne et al. (2021). Specific constructs and measures within the “input” and “process” stages of these models were pulled and then searched within the team’s literature to find specific measurements of team processes. However, the measures are only half of the requirement for an effective team-testing scenario. Teams that are given unlimited amount of time should all complete a task, but only the most effective coordinative and communicative teams can do so in a time efficient manner. As a result, we also identified experimental manipulations that have shown to cause effects in team processes. This paper aims to present the measurement and manipulation frameworks developed under a DARPA effort along with the benefits and costs associated with each measurement and manipulation category.","PeriodicalId":102446,"journal":{"name":"Human Factors and Simulation","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors and Simulation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1003559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this era of the Fourth Industrial Revolution, increasingly autonomous and intelligent artificial agents are becoming more integrated into our daily lives. These agents are capable of conducting independent tasks within a teaming setting while also becoming more socially invested in the team space. While ample human-teaming theories help us understand, explain, and predict the outcomes of team endeavors, no comparable theories yet exist for human-agent teaming. Furthermore, the development and evaluation of agents are constantly evolving. As a result, many developers rely on their own test plans and their own measures, making it difficult to compare findings across agent developers. Many agent developers looking to capture human-team behaviors may not sufficiently understand the benefits of specific team processes or the challenges of measuring these constructs. Ineffective team scenarios and measures can lead to unrepresentative training datasets, prolonged agent development timelines, and less effective agent predictions. With the appropriate measures and conditions, an agent could detect deficits in team processes early enough to intervene during performance. This paper is a step toward the formulation of a theory of human-agent teaming: we conducted a literature review of measurable team processes that can be used to predict team performance and outcomes. The frameworks presented draw on multiple teaming frameworks, including Marks et al.'s (2001) team process model, the IMOI model (Ilgen et al., 2005), and Salas et al.'s (2005) Big Five model, as well as more recent frameworks on human-agent teaming such as Carter-Browne et al. (2021). Specific constructs within the "input" and "process" stages of these models were extracted and then searched within the teaming literature to identify specific measures of team processes. However, measures are only half of what an effective team-testing scenario requires. Given an unlimited amount of time, nearly any team should be able to complete a task, but only the most effective coordinating and communicating teams can do so in a time-efficient manner. Accordingly, we also identified experimental manipulations that have been shown to affect team processes. This paper presents the measurement and manipulation frameworks developed under a DARPA effort, along with the benefits and costs associated with each measurement and manipulation category.