Mediating Agent Reliability with Human Trust, Situation Awareness, and Performance in Autonomously-Collaborative Human-Agent Teams

Sebastian S. Rodriguez, Erin G. Zaroukian, Jeff Hoye, Derrik E. Asher

Journal of Cognitive Engineering and Decision Making, vol. 17, no. 1, pp. 3-25
DOI: 10.1177/15553434221129166
Published: 2022-09-28
Citations: 1
Abstract
When teaming with humans, the reliability of intelligent agents may sporadically change due to failure or environmental constraints. Alternatively, an agent may be more reliable than a human because its performance is less likely to degrade (e.g., due to fatigue). Research often investigates human-agent interactions under little to no time constraints, such as discrete decision-making tasks where the automation is relegated to the role of an assistant. This paper conducts a quantitative investigation of varying reliability in human-agent teams in a time-pressured continuous pursuit task, and it interconnects individual differences, perceptual factors, and task performance through structural equation modeling. Results indicate that reducing reliability may generate a more effective agent imperceptibly different from a fully reliable agent, while contributing to overall team performance. The mediation analysis shows replication of factors studied in the trust and situation awareness literature while providing new insights: agents with an active stake in the task (i.e., success is dependent on team performance) offset loss of situation awareness, differing from the usual notion of overtrust. We conclude by generalizing implications from an abstract pursuit task, and we highlight challenges when conducting research in time-pressured continuous domains.