How level of explanation detail affects human performance in interpretable intelligent systems: A study on explainable fact checking

Applied AI Letters | Pub Date: 2021-11-08 | DOI: 10.1002/ail2.49
Rhema Linder, Sina Mohseni, Fan Yang, Shiva K. Pentyala, Eric D. Ragan, Xia Ben Hu
{"title":"How level of explanation detail affects human performance in interpretable intelligent systems: A study on explainable fact checking","authors":"Rhema Linder,&nbsp;Sina Mohseni,&nbsp;Fan Yang,&nbsp;Shiva K. Pentyala,&nbsp;Eric D. Ragan,&nbsp;Xia Ben Hu","doi":"10.1002/ail2.49","DOIUrl":null,"url":null,"abstract":"<p>Explainable artificial intelligence (XAI) systems aim to provide users with information to help them better understand computational models and reason about why outputs were generated. However, there are many different ways an XAI interface might present explanations, which makes designing an appropriate and effective interface an important and challenging task. Our work investigates how different types and amounts of explanatory information affect user ability to utilize explanations to understand system behavior and improve task performance. The presented research employs a system for detecting the truthfulness of news statements. In a controlled experiment, participants were tasked with using the system to assess news statements as well as to learn to predict the output of the AI. Our experiment compares various levels of explanatory information to contribute empirical data about how explanation detail can influence utility. The results show that more explanation information improves participant understanding of AI models, but the benefits come at the cost of time and attention needed to make sense of the explanation.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.49","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied AI letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ail2.49","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 5

Abstract

Explainable artificial intelligence (XAI) systems aim to provide users with information that helps them better understand computational models and reason about why outputs were generated. However, an XAI interface can present explanations in many different ways, which makes designing an appropriate and effective interface an important and challenging task. Our work investigates how different types and amounts of explanatory information affect users' ability to use explanations to understand system behavior and improve task performance. The presented research employs a system for detecting the truthfulness of news statements. In a controlled experiment, participants were tasked with using the system to assess news statements as well as to learn to predict the output of the AI. Our experiment compares various levels of explanatory information to contribute empirical data on how explanation detail influences utility. The results show that more explanation information improves participants' understanding of AI models, but the benefit comes at the cost of the time and attention needed to make sense of the explanation.
