Explainability pitfalls: Beyond dark patterns in explainable AI

IF: 6.7 | JCR: Q1, Computer Science, Artificial Intelligence
Upol Ehsan, Mark O. Riedl
{"title":"Explainability pitfalls: Beyond dark patterns in explainable AI","authors":"Upol Ehsan, Mark O. Riedl","doi":"10.1016/j.patter.2024.100971","DOIUrl":null,"url":null,"abstract":"<p>To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful effects is important. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting even when there is no intention to manipulate users. EPs are different from dark patterns, which are intentionally deceptive practices. We articulate the concept of EPs by demarcating it from dark patterns and highlighting the challenges arising from uncertainties around pitfalls. We situate and operationalize the concept using a case study that showcases how, despite best intentions, unsuspecting negative effects, such as unwarranted trust in numerical explanations, can emerge. We propose proactive and preventative strategies to address EPs at three interconnected levels: research, design, and organizational. We discuss design and societal implications around reframing AI adoption, recalibrating stakeholder empowerment, and resisting the “move fast and break things” mindset.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":"14 1","pages":""},"PeriodicalIF":6.7000,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2024.100971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful effects is important. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting even when there is no intention to manipulate users. EPs are different from dark patterns, which are intentionally deceptive practices. We articulate the concept of EPs by demarcating it from dark patterns and highlighting the challenges arising from uncertainties around pitfalls. We situate and operationalize the concept using a case study that showcases how, despite best intentions, unsuspecting negative effects, such as unwarranted trust in numerical explanations, can emerge. We propose proactive and preventative strategies to address EPs at three interconnected levels: research, design, and organizational. We discuss design and societal implications around reframing AI adoption, recalibrating stakeholder empowerment, and resisting the “move fast and break things” mindset.

Source journal: Patterns (Decision Sciences, all)
CiteScore: 10.60
Self-citation rate: 4.60%
Annual publications: 153
Review time: 19 weeks