Effects of Explanation Types on User Satisfaction and Performance in Human-agent Teams

Impact Factor: 1.0 · CAS Zone 4 (Computer Science) · JCR Q4 (Computer Science, Artificial Intelligence)
Bryan Lavender, Sami Abuhaimed, Sandip Sen
DOI: 10.1142/s0218213024600042
Journal: International Journal on Artificial Intelligence Tools
Published: 2024-04-25 (Journal Article)
Citations: 0

Abstract

Automated agents, with rapidly increasing capabilities and ease of deployment, will assume more key and decisive roles in our societies. We will encounter and work together with such agents in diverse domains and even in peer roles. To be trusted and for seamless coordination, these agents will be expected and required to explain their decision making, behaviors, and recommendations. We are interested in developing mechanisms that human-agent teams can use to maximally leverage the relative strengths of human and automated reasoners. In particular, we focus on ad hoc teams, in which team members start to collaborate, often in response to emergencies or short-term opportunities, without significant prior knowledge about each other. In this study, we use virtual ad hoc teams, each consisting of a human and an agent, collaborating over a few episodes, where each episode requires them to complete a set of tasks chosen from available task types. Team members are initially unaware of their partners' capabilities on the available task types, and the agent task allocator must adapt the allocation process to maximize team performance. In collaborative teams of humans and agents, it is important to establish user confidence and satisfaction as well as to produce effective team performance. Explanations can increase user trust in agent team members and in team decisions. The focus of this paper is on analyzing how explanations of task allocation decisions influence both user performance and the human workers' perspective, including factors such as motivation and satisfaction. We evaluate different types of explanation, such as positive, strength-based explanations and negative, weakness-based explanations, to understand (a) how satisfaction and performance improve when explanations are presented, and (b) how factors such as confidence, understandability, motivation, and explanatory power correlate with satisfaction and performance.
We run experiments with MTurk workers on the CHATboard platform, which supports virtual collaboration over multiple episodes of task assignment. We present our analysis of the results and conclusions related to our research hypotheses.
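The adaptive allocation loop described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual mechanism: the allocator keeps running success-rate estimates per member and task type, assigns each task to the member with the higher estimate, and attaches either a strength-based or a weakness-based explanation. All names, the task types, and the update rule are assumptions for illustration.

```python
from collections import defaultdict


class AdaptiveAllocator:
    """Illustrative task allocator for a two-member ad hoc team."""

    def __init__(self, members, task_types):
        self.members = members
        self.task_types = task_types
        # success / attempt counts per (member, task type)
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def estimate(self, member, task_type):
        a = self.attempts[(member, task_type)]
        # neutral prior of 0.5 when a member is untried on a task type
        return self.successes[(member, task_type)] / a if a else 0.5

    def allocate(self, task_type, style="strength"):
        """Assign the task and return (assignee, explanation)."""
        best = max(self.members, key=lambda m: self.estimate(m, task_type))
        other = next(m for m in self.members if m != best)
        if style == "strength":
            # positive explanation: cite the assignee's strength
            why = (f"{best} is assigned '{task_type}': estimated success "
                   f"rate {self.estimate(best, task_type):.2f}.")
        else:
            # negative explanation: cite the other member's weakness
            why = (f"{other} is not assigned '{task_type}': estimated "
                   f"success rate only {self.estimate(other, task_type):.2f}.")
        return best, why

    def record(self, member, task_type, succeeded):
        """Update capability estimates after observing a task outcome."""
        self.attempts[(member, task_type)] += 1
        self.successes[(member, task_type)] += int(succeeded)


# One episode of observations, then an allocation with its explanation.
allocator = AdaptiveAllocator(["human", "agent"], ["anagram", "arithmetic"])
allocator.record("human", "anagram", True)
allocator.record("agent", "anagram", False)
assignee, explanation = allocator.allocate("anagram", style="strength")
```

After the recorded episode, the human's estimated success rate on "anagram" (1.00) exceeds the agent's (0.00), so the task goes to the human with a strength-based justification; passing `style="weakness"` would instead justify the decision by the agent's low estimate.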
Source Journal

International Journal on Artificial Intelligence Tools (Engineering & Technology · Computer Science: Interdisciplinary Applications)
CiteScore: 2.10
Self-citation rate: 9.10%
Annual articles: 66
Review time: 8.5 months
Journal description: The International Journal on Artificial Intelligence Tools (IJAIT) provides an interdisciplinary forum in which AI scientists and professionals can share their research results and report new advances on AI tools or tools that use AI. Tools refer to architectures, languages, or algorithms, which constitute the means connecting theory with applications. IJAIT is thus a medium for promoting general and/or special purpose tools, which are very important for the evolution of science and the manipulation of knowledge. IJAIT can also be used as a test ground for new AI tools. Topics covered by IJAIT include but are not limited to: AI in Bioinformatics, AI for Service Engineering, AI for Software Engineering, AI for Ubiquitous Computing, AI for Web Intelligence Applications, AI Parallel Processing Tools (hardware/software), AI Programming Languages, AI Tools for CAD and VLSI Analysis/Design/Testing, AI Tools for Computer Vision and Speech Understanding, AI Tools for Multimedia, Cognitive Informatics, Data Mining and Machine Learning Tools, Heuristic and AI Planning Strategies and Tools, Image Understanding, Integrated/Hybrid AI Approaches, Intelligent System Architectures, Knowledge-Based/Expert Systems, Knowledge Management and Processing Tools, Knowledge Representation Languages, Natural Language Understanding, Neural Networks for AI, Object-Oriented Programming for AI, Reasoning and Evolution of Knowledge Bases, Self-Healing and Autonomous Systems, and Software Engineering for AI.