Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable

Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, Rhianne Jones
DOI: 10.1016/j.jrt.2021.100017
Journal: Journal of responsible technology
Publication date: 2021-10-01
Article URL: https://www.sciencedirect.com/science/article/pii/S266665962100010X
Citations: 12

Abstract

Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans maintain meaningful agency and can understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for keeping a human in the loop; however, current approaches are ineffective and can introduce various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies dependent on expertise, risk, and time dynamics, and propose the conceptual Expertise, Risk and Time Explainability framework, intended to serve as a set of explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.

Source journal

Journal of responsible technology
Fields: Information Systems, Artificial Intelligence, Human-Computer Interaction
CiteScore: 3.60
Self-citation rate: 0.00%
Review time: 168 days