Towards More Stable LIME For Explainable AI

Ng Chung Hou, Hussain Sadiq Abuwala, C. H. Lim
DOI: 10.1109/ISPACS57703.2022.10082810
Published in: 2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)
Publication date: 2022-11-22
Citations: 1

Abstract

Although AI's development is remarkable, end users do not know how an AI system has come to a specific conclusion due to the black-box nature of AI algorithms like deep learning. This has given rise to the field of explainable AI (XAI), where techniques are being developed to explain AI algorithms. One such technique is Local Interpretable Model-Agnostic Explanations (LIME). LIME is popular because it is model-agnostic and works well with text, tabular, and image data. While it has many good features, there is still room for improvement in the original LIME algorithm, especially its stability. In this work, LIME's stability is reviewed and three different approaches are investigated for their effectiveness in improving it: 1) using a high sample size for stable ordering; 2) using an averaging method to reduce region flipping; and 3) evaluating different superpixel segmentation algorithms for generating stable LIME outcomes. The experimental results show a definite increase in the stability of the improved LIME compared to the baseline LIME, and thus in the reliability of using it in practice.
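The averaging idea in approach 2 can be sketched with a minimal toy version of LIME for tabular data: each run perturbs the instance, queries the black-box model, and fits a proximity-weighted linear surrogate; averaging the surrogate weights over several runs damps the sampling noise that causes instability. This is an illustrative assumption-laden sketch (the kernel, the weighted least-squares fit, and the toy `predict` model are all my choices), not the authors' implementation.

```python
import numpy as np

def lime_weights(predict_fn, x, num_samples, rng):
    """One LIME-style run: perturb x, fit a proximity-weighted linear surrogate."""
    perturbed = x + rng.normal(0.0, 1.0, size=(num_samples, x.size))
    y = predict_fn(perturbed)
    # Proximity kernel: perturbations closer to x get more weight.
    d = np.linalg.norm(perturbed - x, axis=1)
    w = np.exp(-(d ** 2) / 2.0)
    # Weighted least squares for the local linear model (intercept + features).
    X = np.hstack([np.ones((num_samples, 1)), perturbed])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[1:]  # drop the intercept, keep per-feature weights

def averaged_lime(predict_fn, x, num_samples, runs, seed=0):
    """Average the surrogate weights over several independent runs."""
    rng = np.random.default_rng(seed)
    all_w = np.stack([lime_weights(predict_fn, x, num_samples, rng)
                      for _ in range(runs)])
    return all_w.mean(axis=0)

# Toy black-box model (assumption): feature 0 matters, feature 1 barely does.
predict = lambda X: 3.0 * X[:, 0] + 0.1 * np.sin(X[:, 1])
x0 = np.array([1.0, 1.0])
w = averaged_lime(predict, x0, num_samples=500, runs=10)
```

With more runs averaged, the ranking of features by `|w|` changes less between repeated calls, which is exactly the region-flipping problem the paper targets for image explanations.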