Title: Reflections and attentiveness on eXplainable Artificial Intelligence (XAI). The journey ahead from criticisms to human–AI collaboration
Author: Francisco Herrera
Journal: Information Fusion, Volume 121, Article 103133 (published 2025-03-25)
DOI: 10.1016/j.inffus.2025.103133
URL: https://www.sciencedirect.com/science/article/pii/S1566253525002064
Abstract
The emergence of deep learning over the past decade has driven the development of increasingly complex AI models, amplifying the need for Explainable Artificial Intelligence (XAI). As AI systems grow in size and complexity, ensuring interpretability and transparency becomes essential, especially in high-stakes applications. With the rapid expansion of XAI research, addressing emerging debates and criticisms requires a comprehensive examination. This paper explores the complexities of XAI from multiple perspectives, proposing six key axes that shed light on its role in human–AI interaction and collaboration. First, it examines the imperative of XAI under the dominance of black-box AI models. Given the lack of definitional cohesion, the paper argues that XAI must be framed through the lens of audience and understanding, highlighting its different uses in AI–human interaction. The recent BLUE vs. RED XAI distinction is analyzed through this perspective. The study then addresses the criticisms of XAI, evaluating its maturity, current trajectory, and limitations in handling complex problems. The discussion then shifts to explanations as a bridge between AI models and human understanding, emphasizing the importance of the usability of explanations in human–AI decision making. Key aspects such as AI reliance, human intuition, and emerging collaboration theories — including the human–algorithm centaur and co-intelligence paradigms — are explored in connection with XAI. The medical field is considered as a case study, given its extensive research on collaboration between doctors and AI through explainability. The paper proposes a framework to evaluate the maturity of XAI along three dimensions: practicality, auditability, and AI governance. Finally, it offers lessons learned, focused on trends and questions to tackle in the near future. This is an in-depth exploration of the impact and urgency of XAI in the era of pervasive expansion of AI.
Three key reflections from this study include: (a) XAI must enhance cognitive engagement with explanations, (b) it must evolve to fully address why, what, and for what purpose explanations are needed, and (c) it plays a crucial role in building societal trust in AI. By advancing XAI in these directions, we can ensure that AI remains transparent, auditable, accountable, and aligned with human needs.
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.