A Maturity Model for Practical Explainability in Artificial Intelligence-Based Applications: Integrating Analysis and Evaluation (MM4XAI-AE) Models

Impact Factor: 5.0 · CAS Zone 2 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Julián Muñoz-Ordóñez, Carlos Cobos, Juan C. Vidal-Rojas, Francisco Herrera
{"title":"A Maturity Model for Practical Explainability in Artificial Intelligence-Based Applications: Integrating Analysis and Evaluation (MM4XAI-AE) Models","authors":"Julián Muñoz-Ordóñez,&nbsp;Carlos Cobos,&nbsp;Juan C. Vidal-Rojas,&nbsp;Francisco Herrera","doi":"10.1155/int/4934696","DOIUrl":null,"url":null,"abstract":"<div>\n <p>The increasing adoption of artificial intelligence (AI) in critical domains such as healthcare, law, and defense demands robust mechanisms to ensure transparency and explainability in decision-making processes. While machine learning and deep learning algorithms have advanced significantly, their growing complexity presents persistent interpretability challenges. Existing maturity frameworks, such as Capability Maturity Model Integration, fall short in addressing the distinct requirements of explainability in AI systems, particularly where ethical compliance and public trust are paramount. To address this gap, we propose the Maturity Model for eXplainable Artificial Intelligence: Analysis and Evaluation (MM4XAI-AE), a domain-agnostic maturity model tailored to assess and guide the practical deployment of explainability in AI-based applications. The model integrates two complementary components: an analysis model and an evaluation model, structured across four maturity levels—operational, justified, formalized, and managed. It evaluates explainability across three critical dimensions: technical foundations, structured design, and human-centered explainability. MM4XAI-AE is grounded in the PAG-XAI framework, emphasizing the interrelated dimensions of practicality, auditability, and governance, thereby aligning with current reflections on responsible and trustworthy AI. The MM4XAI-AE model is empirically validated through a structured evaluation of thirteen published AI applications from diverse sectors, analyzing their design and deployment practices. The results show a wide distribution across maturity levels, underscoring the model’s capacity to identify strengths, gaps, and actionable pathways for improving explainability. This work offers a structured and scalable framework to standardize explainability practices and supports researchers, developers, and policymakers in fostering more transparent, ethical, and trustworthy AI systems.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/4934696","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/int/4934696","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The increasing adoption of artificial intelligence (AI) in critical domains such as healthcare, law, and defense demands robust mechanisms to ensure transparency and explainability in decision-making processes. While machine learning and deep learning algorithms have advanced significantly, their growing complexity presents persistent interpretability challenges. Existing maturity frameworks, such as Capability Maturity Model Integration, fall short in addressing the distinct requirements of explainability in AI systems, particularly where ethical compliance and public trust are paramount. To address this gap, we propose the Maturity Model for eXplainable Artificial Intelligence: Analysis and Evaluation (MM4XAI-AE), a domain-agnostic maturity model tailored to assess and guide the practical deployment of explainability in AI-based applications. The model integrates two complementary components: an analysis model and an evaluation model, structured across four maturity levels—operational, justified, formalized, and managed. It evaluates explainability across three critical dimensions: technical foundations, structured design, and human-centered explainability. MM4XAI-AE is grounded in the PAG-XAI framework, emphasizing the interrelated dimensions of practicality, auditability, and governance, thereby aligning with current reflections on responsible and trustworthy AI. The MM4XAI-AE model is empirically validated through a structured evaluation of thirteen published AI applications from diverse sectors, analyzing their design and deployment practices. The results show a wide distribution across maturity levels, underscoring the model’s capacity to identify strengths, gaps, and actionable pathways for improving explainability. This work offers a structured and scalable framework to standardize explainability practices and supports researchers, developers, and policymakers in fostering more transparent, ethical, and trustworthy AI systems.
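
To make the model's structure concrete, the sketch below encodes the four maturity levels and three dimensions named in the abstract as a small Python module. This is a hypothetical illustration: the weakest-link scoring rule and every identifier are assumptions for exposition, not the authors' published evaluation procedure.

```python
# Hypothetical sketch of the MM4XAI-AE structure described in the abstract.
# The level and dimension names come from the paper; the weakest-link scoring
# rule and all identifiers are illustrative assumptions, not the authors'
# actual evaluation procedure.
from enum import IntEnum


class MaturityLevel(IntEnum):
    OPERATIONAL = 1  # explainability exists but is ad hoc
    JUSTIFIED = 2    # choice of XAI techniques is motivated
    FORMALIZED = 3   # explainability is designed and documented
    MANAGED = 4      # explainability is monitored and governed


DIMENSIONS = (
    "technical_foundations",
    "structured_design",
    "human_centered_explainability",
)


def assess(scores: dict) -> MaturityLevel:
    """Map per-dimension scores (1-4) to an overall maturity level.

    Assumption: the overall level is capped by the weakest dimension,
    a common convention in staged maturity models.
    """
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"missing dimension scores: {sorted(missing)}")
    return MaturityLevel(min(scores[d] for d in DIMENSIONS))


# Example: strong technical foundations cannot compensate for weak
# human-centered explainability under this weakest-link rule.
level = assess({
    "technical_foundations": 3,
    "structured_design": 2,
    "human_centered_explainability": 1,
})
print(level.name)  # OPERATIONAL
```

The weakest-link convention is only one plausible aggregation; a model like MM4XAI-AE could equally define per-dimension level profiles, which the abstract alone does not specify.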

Source journal: International Journal of Intelligent Systems (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 14.30%
Annual publications: 304
Review time: 9 months
Journal description: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.