Farhina Sardar Khan, Syed Shahid Mazhar, Kashif Mazhar, Dhoha A. AlSaleh, Amir Mazhar
Journal: Artificial Intelligence Review, vol. 58, no. 8 (Journal Article, published 2025-05-03)
DOI: 10.1007/s10462-025-11215-9
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-025-11215-9.pdf
Model-agnostic explainable artificial intelligence methods in finance: a systematic review, recent developments, limitations, challenges and future directions
The increasing integration of Artificial Intelligence (AI) and Machine Learning (ML)—algorithms that enable computers to identify patterns from data—in financial applications has significantly improved predictive capabilities in areas such as credit scoring, fraud detection, portfolio management, and risk assessment. Despite these advancements, the opaque, “black box” nature of many AI and ML models raises critical concerns related to transparency, trust, and regulatory compliance. Explainable Artificial Intelligence (XAI) aims to address these issues by providing interpretable and transparent decision-making processes. This study systematically reviews Model-Agnostic Explainable AI techniques, which can be applied across different types of ML models in finance, to evaluate their effectiveness, scalability, and practical applicability. Through analysis of 150 peer-reviewed studies, the paper identifies key challenges, such as balancing interpretability with predictive accuracy, managing computational complexity, and meeting regulatory requirements. The review highlights emerging trends toward hybrid models that combine powerful ML algorithms with interpretability techniques, real-time explanations suitable for dynamic financial markets, and XAI frameworks explicitly designed to align with regulatory standards. The study concludes by outlining specific future research directions, including the development of computationally efficient explainability methods, regulatory-compliant frameworks, and ethical AI solutions to ensure transparent and accountable financial decision-making.
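To make the idea of a model-agnostic explanation concrete, the sketch below implements permutation importance — one of the technique families such reviews typically cover — on a toy, synthetic "credit scoring" dataset. This is an illustrative example, not code from the paper: the black-box model, the dataset, and all function names are invented for the demonstration. The method treats the model purely as a prediction function, so it works unchanged for any classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic data standing in for credit-scoring features:
# feature 0 determines the label, feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in "black box": any trained classifier's predict() works here.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: mean accuracy drop when one feature
    column is shuffled, breaking its relationship with the label."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy feature j's signal
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model_predict, X, y)
# imp[0] is large (shuffling feature 0 wrecks accuracy);
# imp[1] is ~0 (the model never uses feature 1).
```

Because the procedure only calls `predict`, the same code explains a gradient-boosted tree, a neural network, or a logistic regression — which is exactly the model-agnostic property the review evaluates, along with its cost: the repeated re-scoring is one source of the computational complexity the abstract flags.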
Journal overview:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.