Inhibitors and Enablers to Explainable AI Success: A Systematic Examination of Explanation Complexity and Individual Characteristics
Carolin Wienrich, Astrid Carolus, David Roth-Isigkeit, A. Hotho
{"title":"Inhibitors and Enablers to Explainable AI Success: A Systematic Examination of Explanation Complexity and Individual Characteristics","authors":"Carolin Wienrich, Astrid Carolus, David Roth-Isigkeit, A. Hotho","doi":"10.3390/mti6120106","DOIUrl":null,"url":null,"abstract":"With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving close together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been systematically studied regarding their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent. The agent gave advice to the participants on whether they should accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users’ individual characteristics particularly reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree of attribution of human characteristics. The more human characteristics were attributed, the more trust was placed in the agent, advice was more likely to be accepted and understood, and important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of the design of explanations of an AI-based agent system that takes into account individual characteristics and meets the demand for both explainable and human-centered agent systems.","PeriodicalId":408374,"journal":{"name":"Multimodal Technol. Interact.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multimodal Technol. Interact.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/mti6120106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving closer together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users' individual differences, which have rarely been studied systematically with regard to their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent. The agent advised the participants on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, it was the users' individual characteristics in particular that reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree to which human characteristics were attributed to the agent: the more human characteristics were attributed, the more trust was placed in the agent, the more likely its advice was to be accepted and understood, and the better important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of how to design explanations for an AI-based agent system that takes individual characteristics into account and meets the demand for both explainable and human-centered agent systems.