A Literature Review and Research Agenda on Explainable Artificial Intelligence (XAI)

Krishna Prakash Kalyanathaya, K. K
{"title":"A Literature Review and Research Agenda on Explainable Artificial Intelligence (XAI)","authors":"Krishna Prakash Kalyanathaya, K. K","doi":"10.47992/ijaeml.2581.7000.0119","DOIUrl":null,"url":null,"abstract":"Purpose: When Artificial Intelligence is penetrating every walk of our affairs and business, we face enormous challenges and opportunities to adopt this revolution. Machine learning models are used to make the important decisions in critical areas such as medical diagnosis, financial transactions. We need to know how they make decisions to trust the systems powered by these models. However, there are challenges in this area of explaining predictions or decisions made by machine learning model. Ensembles like Random Forest, Deep learning algorithms make the matter worst in terms of explaining the outcomes of decision even though these models produce more accurate results. We cannot accept the black box nature of AI models as we encounter the consequences of those decisions. In this paper, we would like to open this Pandora box and review the current challenges and opportunities to explain the decisions or outcome of AI model. There has been lot of debate on this topic with headlines as Explainable Artificial Intelligence (XAI), Interpreting ML models, Explainable ML models etc. This paper does the literature review of latest findings and surveys published in various reputed journals and publications. Towards the end, we try to bring some open research agenda in these findings and future directions.\nMethodology: The literature survey on the chosen topic has been exhaustively covered to include fundamental concepts of the research topic. Journals from multiple secondary data sources such as books and research papers published in various reputable publications which are relevant for the work were chosen in the methodology.\nFindings/Result: While there are no single approaches currently solve the explainable ML model challenges, some model algorithms such as Decision Trees, KNN algorithm provides built in interpretations. However there is no common approach and they cannot be used in all the problems. Developing model specific interpretations will be complex and difficult for the user to make them adopt. Model specific explanations may lead to multiple explanations on same predictions which will lead to ambiguity of the outcome. In this paper, we have conceptualized a common approach to build explainable models that may fulfill current challenges of XAI.\nOriginality: After the literature review, the knowledge gathered in the form of findings were used to model a theoretical framework for the research topic. Then concerted effort was made to develop a conceptual model to support the future research work.\nPaper Type: Literature Review.","PeriodicalId":184829,"journal":{"name":"International Journal of Applied Engineering and Management Letters","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Applied Engineering and Management Letters","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.47992/ijaeml.2581.7000.0119","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Purpose: As Artificial Intelligence penetrates every walk of life and business, we face enormous challenges and opportunities in adopting this revolution. Machine learning models are used to make important decisions in critical areas such as medical diagnosis and financial transactions. To trust the systems powered by these models, we need to know how they make their decisions. However, explaining the predictions or decisions made by a machine learning model remains challenging. Ensembles such as Random Forests and deep learning algorithms make matters worse in terms of explaining decision outcomes, even though these models produce more accurate results. We cannot accept the black-box nature of AI models when we must bear the consequences of their decisions. In this paper, we open this Pandora's box and review the current challenges and opportunities in explaining the decisions or outcomes of AI models. There has been much debate on this topic under headings such as Explainable Artificial Intelligence (XAI), interpreting ML models, and explainable ML models. This paper reviews the latest findings and surveys published in various reputed journals and publications. Towards the end, we distill an open research agenda and future directions from these findings.

Methodology: The literature survey exhaustively covers the chosen topic, including the fundamental concepts of the research area. Relevant secondary data sources, such as books and research papers published in various reputable publications, were selected for the review.

Findings/Result: While no single approach currently solves the challenges of explainable ML models, some algorithms, such as Decision Trees and KNN, provide built-in interpretations. However, there is no common approach, and these algorithms cannot be applied to every problem. Developing model-specific interpretations is complex and makes adoption difficult for users. Model-specific explanations may also yield multiple explanations for the same prediction, leading to ambiguity in the outcome. In this paper, we conceptualize a common approach to building explainable models that may address the current challenges of XAI.

Originality: After the literature review, the knowledge gathered in the form of findings was used to construct a theoretical framework for the research topic. A concerted effort was then made to develop a conceptual model to support future research work.

Paper Type: Literature Review.
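To make the "built-in interpretation" claim in the findings concrete: a trained decision tree can be rendered directly as human-readable if/else rules, so every prediction can be traced along a split path. Below is a minimal illustrative sketch using scikit-learn; the dataset, tree depth, and feature names are assumptions chosen for demonstration and are not part of the reviewed paper.

```python
# Minimal sketch (assumption: scikit-learn and the Iris dataset, chosen only
# for illustration) of a decision tree's built-in interpretability: the
# learned split rules can be printed as human-readable if/else paths.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# export_text renders the fitted tree as nested rules, e.g.
# |--- petal width (cm) <= 0.80
# |   |--- class: 0
print(export_text(clf, feature_names=list(iris.feature_names)))
```

By contrast, ensembles and deep networks offer no such direct rule trace, which is what motivates the search for a common explanation approach discussed in the findings.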