An eXplainable Artificial Intelligence Methodology on Big Data Architecture

IF 4.3 · CAS Region 3 (Computer Science) · Q2 Computer Science, Artificial Intelligence
Valerio La Gatta, Vincenzo Moscato, Marco Postiglione, Giancarlo Sperlì
{"title":"An eXplainable Artificial Intelligence Methodology on Big Data Architecture","authors":"Valerio La Gatta, Vincenzo Moscato, Marco Postiglione, Giancarlo Sperlì","doi":"10.1007/s12559-024-10272-6","DOIUrl":null,"url":null,"abstract":"<p>Although artificial intelligence has become part of everyone’s real life, a trust crisis against such systems is occurring, thus increasing the need to explain black-box predictions, especially in the military, medical, and financial domains. Modern eXplainable Artificial Intelligence (XAI) techniques focus on benchmark datasets, but the cognitive applicability of such solutions under big data settings is still unclear due to memory or computation constraints. In this paper, we extend a model-agnostic XAI methodology, named <i>Cluster-Aided Space Transformation for Local Explanation</i> (CASTLE), to be able to deal with high-volume datasets. CASTLE aims to explain the black-box behavior of predictive models by combining both <i>local</i> (i.e., based on the input sample) and <i>global</i> (i.e., based on the whole scope for action of the model) information. In particular, the local explanation provides a rule-based explanation for the prediction of a target instance as well as the directions to update the likelihood of the predicted class. Our extension leverages modern big data technologies (e.g., Apache Spark) to handle the high volume, variety, and velocity of huge datasets. We have evaluated the framework on five datasets, in terms of temporal efficiency, explanation quality, and model significance. Our results indicate that the proposed approach retains the high-quality explanations associated with CASTLE while efficiently handling large datasets. Importantly, it exhibits a sub-linear, rather than exponential, dependence on dataset size, making it a scalable solution for massive datasets or in any big data scenario.</p>","PeriodicalId":51243,"journal":{"name":"Cognitive Computation","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12559-024-10272-6","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Although artificial intelligence has become part of everyday life, a crisis of trust in such systems is underway, increasing the need to explain black-box predictions, especially in the military, medical, and financial domains. Modern eXplainable Artificial Intelligence (XAI) techniques focus on benchmark datasets, but the cognitive applicability of such solutions in big data settings remains unclear due to memory and computation constraints. In this paper, we extend a model-agnostic XAI methodology, named Cluster-Aided Space Transformation for Local Explanation (CASTLE), so that it can handle high-volume datasets. CASTLE explains the black-box behavior of predictive models by combining local information (i.e., based on the input sample) with global information (i.e., based on the whole scope for action of the model). In particular, the local explanation provides a rule-based explanation for the prediction of a target instance, together with directions for updating the likelihood of the predicted class. Our extension leverages modern big data technologies (e.g., Apache Spark) to handle the volume, variety, and velocity of huge datasets. We have evaluated the framework on five datasets in terms of temporal efficiency, explanation quality, and model significance. Our results indicate that the proposed approach retains the high-quality explanations associated with CASTLE while efficiently handling large datasets. Importantly, it exhibits a sub-linear, rather than exponential, dependence on dataset size, making it a scalable solution for massive datasets and big data scenarios in general.
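
To make the "cluster-aided" pattern described above concrete, the following is a minimal, hypothetical sketch (not the authors' CASTLE implementation) of how the global and local steps could be combined on Apache Spark: a distributed k-means pass summarizes the full dataset into centroids, and a single target instance is then re-expressed through its distances to those centroids, giving a compact space in which a rule-based local explainer could operate. The synthetic data, column names, and the choice of k = 8 are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
import numpy as np

spark = SparkSession.builder.appName("castle-style-sketch").getOrCreate()

# Synthetic stand-in for a large tabular dataset; in a real big data setting
# this would be a distributed read (e.g., spark.read.parquet on a huge file).
rows = [(float(i % 10), float((i * 7) % 13), float((i * 3) % 5)) for i in range(10_000)]
df = spark.createDataFrame(rows, ["f1", "f2", "f3"])

# Assemble the feature columns into the single vector column Spark ML expects.
vec = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features").transform(df)

# Global step: cluster the whole dataset distributedly; the centroids act as a
# coarse summary of the model's "scope for action" over the full data space.
model = KMeans(k=8, seed=42, featuresCol="features").fit(vec)
centers = model.clusterCenters()

# Local step: re-express one target instance as its distances to the centroids.
# A rule-based explainer can then be fit in this low-dimensional transformed
# space instead of over the raw, high-volume table.
x = np.array([4.0, 6.0, 2.0])          # hypothetical target instance
transformed = np.array([float(np.linalg.norm(x - c)) for c in centers])
print(transformed)

spark.stop()
```

Because only the clustering pass touches the full dataset, the per-instance explanation work stays local and small, which is consistent with the sub-linear scaling behavior reported in the abstract.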


Source journal: Cognitive Computation (Computer Science, Artificial Intelligence; Neurosciences)
CiteScore: 9.30
Self-citation rate: 3.70%
Articles per year: 116
Review time: >12 weeks
Journal description: Cognitive Computation is an international, peer-reviewed, interdisciplinary journal that publishes cutting-edge articles describing original basic and applied work involving biologically-inspired computational accounts of all aspects of natural and artificial cognitive systems. It provides a platform for the dissemination of research, current practices, and future trends in the emerging discipline of cognitive computation, bridging the gap between the life sciences, social sciences, engineering, the physical and mathematical sciences, and the humanities.