"Do you trust me?": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design

Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, E. André
DOI: 10.1145/3308532.3329441
Published in: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, July 2019
Citation count: 76

Abstract

While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development has led to an ongoing resurgence of the research area of explainable artificial intelligence (XAI), which aims to reduce the opaqueness of these black-box models. However, much of the current XAI research is focused on machine learning practitioners and engineers while omitting the specific needs of end-users. In this paper, we examine the impact of virtual agents within the field of XAI on the perceived trustworthiness of autonomous intelligent systems. To assess the practicality of this concept, we conducted a user study based on a simple speech recognition task. As a result of this experiment, we found significant evidence suggesting that the integration of virtual agents into XAI interaction design leads to an increase in trust in the autonomous intelligent system.