XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process

Impact Factor: 4.0 · JCR Q2 (Computer Science, Artificial Intelligence)
Tobias Clement, Nils Kemmerzell, Mohamed Abdelaal, M. Amberg
{"title":"XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process","authors":"Tobias Clement, Nils Kemmerzell, Mohamed Abdelaal, M. Amberg","doi":"10.3390/make5010006","DOIUrl":null,"url":null,"abstract":"Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start with the development of XAI software and to optimally select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude with highlighting new directions for future research.","PeriodicalId":93033,"journal":{"name":"Machine learning and knowledge extraction","volume":"32 1","pages":"78-108"},"PeriodicalIF":4.0000,"publicationDate":"2023-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning and knowledge extraction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/make5010006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 11

Abstract

Currently, explainability represents a major barrier that Artificial Intelligence (AI) faces with regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to get started with developing XAI software and to select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process: requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude by highlighting new directions for future research.
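
The abstract places method selection and implementation at the centre of building XAI software. As a purely illustrative sketch, not taken from the paper, the snippet below shows how a practitioner might apply a post-hoc feature-attribution method such as SHAP to a black-box model during the implementation step; the dataset, the random-forest model, and the choice of the shap library are assumptions made for this example.

```python
# Illustrative sketch only: one way to add post-hoc explanations to a
# black-box model with the shap library. Dataset and model are arbitrary choices.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model on a small tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP assigns each feature a contribution to every individual prediction.
explainer = shap.Explainer(model)        # dispatches to a tree explainer for tree ensembles
explanation = explainer(X.iloc[:100])    # attributions for the first 100 samples

# Aggregate per-sample attributions into a global feature ranking.
importance = np.abs(explanation.values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```

Per-feature attributions of this kind are one concrete artifact that the implementation step could produce and that the evaluation and deployment steps of the paper's mapping would then assess and surface to end users.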
Source journal: Machine Learning and Knowledge Extraction (CiteScore 6.30, self-citation rate 0.00%, typical review time 7 weeks)