Model-agnostic explainable artificial intelligence methods in finance: a systematic review, recent developments, limitations, challenges and future directions

IF 10.7 · Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Farhina Sardar Khan, Syed Shahid Mazhar, Kashif Mazhar, Dhoha A. AlSaleh, Amir Mazhar
DOI: 10.1007/s10462-025-11215-9
Journal: Artificial Intelligence Review, Vol. 58, Issue 8
Published: 2025-05-03 (Journal Article)
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-025-11215-9.pdf
Article page: https://link.springer.com/article/10.1007/s10462-025-11215-9
Citations: 0

Abstract

The increasing integration of Artificial Intelligence (AI) and Machine Learning (ML)—algorithms that enable computers to identify patterns from data—in financial applications has significantly improved predictive capabilities in areas such as credit scoring, fraud detection, portfolio management, and risk assessment. Despite these advancements, the opaque, “black box” nature of many AI and ML models raises critical concerns related to transparency, trust, and regulatory compliance. Explainable Artificial Intelligence (XAI) aims to address these issues by providing interpretable and transparent decision-making processes. This study systematically reviews Model-Agnostic Explainable AI techniques, which can be applied across different types of ML models in finance, to evaluate their effectiveness, scalability, and practical applicability. Through analysis of 150 peer-reviewed studies, the paper identifies key challenges, such as balancing interpretability with predictive accuracy, managing computational complexity, and meeting regulatory requirements. The review highlights emerging trends toward hybrid models that combine powerful ML algorithms with interpretability techniques, real-time explanations suitable for dynamic financial markets, and XAI frameworks explicitly designed to align with regulatory standards. The study concludes by outlining specific future research directions, including the development of computationally efficient explainability methods, regulatory-compliant frameworks, and ethical AI solutions to ensure transparent and accountable financial decision-making.
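The model-agnostic methods the review surveys share one defining property: they need only a model's inputs and predictions, never its internals. A minimal sketch of one such method, permutation feature importance, on a synthetic credit-scoring task (the feature names, data, and "black box" model below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical applicants: [income, debt_ratio, late_payments]
X = rng.normal(size=(500, 3))
# Synthetic default label driven mostly by debt_ratio and late_payments.
y = (0.2 * X[:, 0] - 1.5 * X[:, 1] - 2.0 * X[:, 2]
     + rng.normal(scale=0.1, size=500)) > 0

def black_box_predict(X):
    """Stand-in for any trained model; only its predictions are used."""
    return (0.2 * X[:, 0] - 1.5 * X[:, 1] - 2.0 * X[:, 2]) > 0

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Average accuracy drop when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)          # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # break feature j's link to y
            drops.append(base - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(black_box_predict, X, y)
for name, v in zip(["income", "debt_ratio", "late_payments"], imp):
    print(f"{name}: {v:.3f}")
```

Because the procedure touches only `predict`, the same code explains a logistic regression, a gradient-boosted ensemble, or a neural network unchanged, which is exactly the model-agnostic property the review examines.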

Source journal: Artificial Intelligence Review (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles per year: 194
Review time: 5.3 months
Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.