{"title":"基于人工智能的金融时间序列预测系统的良好实践假设:面向领域驱动的XAI方法","authors":"Branka Hadji Misheva, Joerg Osterrieder","doi":"arxiv-2311.07513","DOIUrl":null,"url":null,"abstract":"Machine learning and deep learning have become increasingly prevalent in\nfinancial prediction and forecasting tasks, offering advantages such as\nenhanced customer experience, democratising financial services, improving\nconsumer protection, and enhancing risk management. However, these complex\nmodels often lack transparency and interpretability, making them challenging to\nuse in sensitive domains like finance. This has led to the rise of eXplainable\nArtificial Intelligence (XAI) methods aimed at creating models that are easily\nunderstood by humans. Classical XAI methods, such as LIME and SHAP, have been\ndeveloped to provide explanations for complex models. While these methods have\nmade significant contributions, they also have limitations, including\ncomputational complexity, inherent model bias, sensitivity to data sampling,\nand challenges in dealing with feature dependence. In this context, this paper\nexplores good practices for deploying explainability in AI-based systems for\nfinance, emphasising the importance of data quality, audience-specific methods,\nconsideration of data properties, and the stability of explanations. These\npractices aim to address the unique challenges and requirements of the\nfinancial industry and guide the development of effective XAI tools.","PeriodicalId":501372,"journal":{"name":"arXiv - QuantFin - General Finance","volume":"103 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods\",\"authors\":\"Branka Hadji Misheva, Joerg Osterrieder\",\"doi\":\"arxiv-2311.07513\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning and deep learning have become increasingly prevalent in\\nfinancial prediction and forecasting tasks, offering advantages such as\\nenhanced customer experience, democratising financial services, improving\\nconsumer protection, and enhancing risk management. However, these complex\\nmodels often lack transparency and interpretability, making them challenging to\\nuse in sensitive domains like finance. This has led to the rise of eXplainable\\nArtificial Intelligence (XAI) methods aimed at creating models that are easily\\nunderstood by humans. Classical XAI methods, such as LIME and SHAP, have been\\ndeveloped to provide explanations for complex models. While these methods have\\nmade significant contributions, they also have limitations, including\\ncomputational complexity, inherent model bias, sensitivity to data sampling,\\nand challenges in dealing with feature dependence. In this context, this paper\\nexplores good practices for deploying explainability in AI-based systems for\\nfinance, emphasising the importance of data quality, audience-specific methods,\\nconsideration of data properties, and the stability of explanations. 
These\\npractices aim to address the unique challenges and requirements of the\\nfinancial industry and guide the development of effective XAI tools.\",\"PeriodicalId\":501372,\"journal\":{\"name\":\"arXiv - QuantFin - General Finance\",\"volume\":\"103 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuantFin - General Finance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2311.07513\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - General Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2311.07513","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods
Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks, offering advantages such as enhanced customer experience, democratising financial services, improving consumer protection, and enhancing risk management. However, these complex models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance. This has led to the rise of eXplainable Artificial Intelligence (XAI) methods aimed at creating models that are easily understood by humans. Classical XAI methods, such as LIME and SHAP, have been developed to provide explanations for complex models.
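
To make this concrete, below is a minimal sketch of applying SHAP to a tree-based forecaster trained on lagged returns. The synthetic data, feature names, and model choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic daily returns (illustrative); predict the next return from the
# previous 5 lagged returns.
returns = rng.normal(0.0, 0.01, size=1000)
X = np.column_stack([returns[i:i + 995] for i in range(5)])  # columns: lag_5 .. lag_1
y = returns[5:]
feature_names = ["lag_5", "lag_4", "lag_3", "lag_2", "lag_1"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature attribution for the most recent observation.
for name, value in zip(feature_names, shap_values[-1]):
    print(f"{name}: {value:+.6f}")
```

TreeExplainer is used here because it is exact and fast for tree ensembles; model-agnostic alternatives such as KernelExplainer trade that exactness for generality at a much higher computational cost, which foreshadows the limitations discussed next.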
While these methods have made significant contributions, they also have limitations, including computational complexity, inherent model bias, sensitivity to data sampling, and challenges in dealing with feature dependence.
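
The sensitivity to data sampling is straightforward to observe with LIME, which fits a local surrogate on randomly perturbed samples: explaining the same instance with a different seed can reorder or re-weight the reported features. A minimal sketch, reusing `X`, `model`, and `feature_names` from the sketch above:

```python
from lime.lime_tabular import LimeTabularExplainer

# Explain the same instance twice with different perturbation seeds.
for seed in (0, 1):
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, mode="regression", random_state=seed
    )
    explanation = explainer.explain_instance(X[-1], model.predict, num_features=5)
    print(f"seed={seed}:", explanation.as_list())
```

If the two printouts disagree materially, the explanation reflects LIME's sampling as much as the model itself.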
In this context, this paper explores good practices for deploying explainability in AI-based systems for finance, emphasising the importance of data quality, audience-specific methods, consideration of data properties, and the stability of explanations. These practices aim to address the unique challenges and requirements of the financial industry and guide the development of effective XAI tools.
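
As one hypothetical way to operationalise the stability practice (our sketch, not a procedure prescribed by the paper): retrain the model on bootstrap resamples, recompute global SHAP importances each time, and check that the feature ranking barely moves. This continues from the first sketch's `X` and `y`.

```python
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
rankings = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    m = GradientBoostingRegressor(random_state=0).fit(X[idx], y[idx])
    sv = shap.TreeExplainer(m).shap_values(X)
    rankings.append(np.abs(sv).mean(axis=0))     # global importance: mean |SHAP|

# Pairwise Spearman correlations between the importance vectors; values near
# 1 mean the explanation ranking is stable under resampling.
corrs = [spearmanr(rankings[i], rankings[j])[0]
         for i in range(len(rankings)) for j in range(i + 1, len(rankings))]
print(f"mean rank correlation across retrainings: {np.mean(corrs):.3f}")
```

Low correlations would signal that the explanations, however plausible each one looks in isolation, are not stable enough to support decisions in a regulated financial setting.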