C-XAI: A conceptual framework for designing XAI tools that support trust calibration

Mohammad Naiseh, Auste Simkute, Baraa Zieni, Nan Jiang, Raian Ali
{"title":"C-XAI: A conceptual framework for designing XAI tools that support trust calibration","authors":"Mohammad Naiseh ,&nbsp;Auste Simkute ,&nbsp;Baraa Zieni ,&nbsp;Nan Jiang ,&nbsp;Raian Ali","doi":"10.1016/j.jrt.2024.100076","DOIUrl":null,"url":null,"abstract":"<div><p>Recent advancements in machine learning have spurred an increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to elevate decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to tackle both technical and human factors in the creation of XAI interfaces geared towards trust calibration in Human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. This is achieved through the adoption of a participatory design approach, which includes providing templates, guidance, and involving diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000027/pdfft?md5=b4038e407ec2450c0ab8e0c8949eebfe&pid=1-s2.0-S2666659624000027-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of responsible technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666659624000027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent advancements in machine learning have spurred an increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to elevate decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to tackle both technical and human factors in the creation of XAI interfaces geared towards trust calibration in human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. This is achieved through a participatory design approach that provides templates and guidance and involves diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.
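To make the abstract's central concept concrete: a trust calibration error is a mismatch between a user's reliance on an AI recommendation and that recommendation's actual correctness. The Python sketch below is illustrative only and not taken from the paper; the Decision record and the tallying rule are our own assumptions about how the two error types the abstract names, over-trust and under-trust, could be counted from a decision log.

# Illustrative sketch (not from the paper): tallying trust calibration
# errors in a logged set of human-AI decisions. Over-trust = the human
# followed an incorrect AI recommendation; under-trust = the human
# rejected a correct one.

from dataclasses import dataclass

@dataclass
class Decision:
    ai_correct: bool       # was the AI recommendation correct?
    human_followed: bool   # did the human accept the recommendation?

def calibration_errors(log: list[Decision]) -> dict[str, int]:
    """Count the two trust calibration error types named in the abstract."""
    over_trust = sum(1 for d in log if d.human_followed and not d.ai_correct)
    under_trust = sum(1 for d in log if not d.human_followed and d.ai_correct)
    return {"over_trust": over_trust, "under_trust": under_trust}

# Example: four logged decisions, one of each kind.
log = [
    Decision(ai_correct=True,  human_followed=True),   # calibrated trust
    Decision(ai_correct=False, human_followed=True),   # over-trust
    Decision(ai_correct=True,  human_followed=False),  # under-trust
    Decision(ai_correct=False, human_followed=False),  # calibrated distrust
]
print(calibration_errors(log))  # {'over_trust': 1, 'under_trust': 1}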

Source journal
Journal of Responsible Technology
Fields: Information Systems, Artificial Intelligence, Human-Computer Interaction
CiteScore: 3.60
Self-citation rate: 0.00%
Articles published: 0
Review time: 168 days