IF 14.7 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Francisco Herrera
Title: Reflections and attentiveness on eXplainable Artificial Intelligence (XAI). The journey ahead from criticisms to human–AI collaboration
Authors: Francisco Herrera
DOI: 10.1016/j.inffus.2025.103133
Journal: Information Fusion, Volume 121, Article 103133
Publication date: 2025-03-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1566253525002064
Citations: 0

Reflections and attentiveness on eXplainable Artificial Intelligence (XAI). The journey ahead from criticisms to human–AI collaboration
The emergence of deep learning over the past decade has driven the development of increasingly complex AI models, amplifying the need for Explainable Artificial Intelligence (XAI). As AI systems grow in size and complexity, ensuring interpretability and transparency becomes essential, especially in high-stakes applications. With the rapid expansion of XAI research, addressing emerging debates and criticisms requires a comprehensive examination. This paper explores the complexities of XAI from multiple perspectives, proposing six key axes that shed light on its role in human–AI interaction and collaboration. First, it examines the imperative of XAI under the dominance of black-box AI models. Given the lack of definitional cohesion, the paper argues that XAI must be framed through the lens of audience and understanding, highlighting its different uses in AI–human interaction. The recent BLUE vs. RED XAI distinction is analyzed through this perspective. The study then addresses the criticisms of XAI, evaluating its maturity, current trajectory, and limitations in handling complex problems. The discussion then shifts to explanations as a bridge between AI models and human understanding, emphasizing the importance of usability of explanations in human–AI decision making. Key aspects such as AI reliance, human intuition, and emerging collaboration theories, including the human-algorithm centaur and co-intelligence paradigms, are explored in connection with XAI. The medical field is considered as a case study, given its extensive research on collaboration between doctors and AI through explainability. The paper proposes a framework to evaluate the maturity of XAI using three dimensions: practicality, auditability, and AI governance. Final lessons learned are provided, focusing on trends and questions to tackle in the near future. This is an in-depth exploration of the impact and urgency of XAI in the era of pervasive expansion of AI.
Three key reflections from this study include: (a) XAI must enhance cognitive engagement with explanations, (b) it must evolve to fully address why, what, and for what purpose explanations are needed, and (c) it plays a crucial role in building societal trust in AI. By advancing XAI in these directions, we can ensure that AI remains transparent, auditable, accountable, and aligned with human needs.
Source journal: Information Fusion
Category: Engineering & Technology - Computer Science: Theory & Methods
CiteScore: 33.20
Self-citation rate: 4.30%
Articles published per year: 161
Average review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.