A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods

Branka Hadji Misheva, Joerg Osterrieder
arXiv:2311.07513 · arXiv - QuantFin - General Finance · Published 13 November 2023
Citations: 0

Abstract

Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks, offering advantages such as enhanced customer experience, democratising financial services, improving consumer protection, and enhancing risk management. However, these complex models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance. This has led to the rise of eXplainable Artificial Intelligence (XAI) methods aimed at creating models that are easily understood by humans. Classical XAI methods, such as LIME and SHAP, have been developed to provide explanations for complex models. While these methods have made significant contributions, they also have limitations, including computational complexity, inherent model bias, sensitivity to data sampling, and challenges in dealing with feature dependence. In this context, this paper explores good practices for deploying explainability in AI-based systems for finance, emphasising the importance of data quality, audience-specific methods, consideration of data properties, and the stability of explanations. These practices aim to address the unique challenges and requirements of the financial industry and guide the development of effective XAI tools.
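One of the good practices the abstract highlights is the stability of explanations: an attribution method that ranks features differently on each run is hard to trust in a regulated setting. As a hedged illustration of that concern only (this is not the paper's methodology, and the forecaster, weights, and dataset below are toy assumptions), the sketch computes permutation-style feature importances for a known linear "forecaster" under two independent random shuffles and checks that the feature ranking agrees across runs.

```python
import random

# Toy "forecaster": a linear model with known weights, so feature 0
# genuinely matters most and feature 2 is irrelevant by construction.
WEIGHTS = [0.8, 0.3, 0.0]

def predict(row):
    return sum(w * x for w, x in zip(WEIGHTS, row))

def mse(X, y):
    return sum((predict(r) - t) ** 2 for r, t in zip(X, y)) / len(X)

def permutation_importance(X, y, feature, rng):
    """Increase in MSE when one feature's column is shuffled.

    A model-agnostic importance score: if shuffling a column barely
    changes the error, the model was not relying on that feature.
    """
    base = mse(X, y)
    col = [r[feature] for r in X]
    rng.shuffle(col)
    X_perm = [r[:feature] + [c] + r[feature + 1:] for r, c in zip(X, col)]
    return mse(X_perm, y) - base

# Synthetic data: targets are the model's own output plus small noise.
rng = random.Random(0)
X = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(200)]
y = [predict(r) + rng.gauss(0, 0.05) for r in X]

# Stability check: two independent shuffles should yield the same ranking.
for seed in (1, 2):
    r = random.Random(seed)
    imps = [permutation_importance(X, y, f, r) for f in range(3)]
    print(f"seed {seed}:", [round(i, 3) for i in imps])
```

For classical XAI methods such as LIME and SHAP, the analogous check would compare attributions across resampled backgrounds or perturbation seeds; the abstract's point is that such agreement should be verified rather than assumed before deploying explanations in finance.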