AS-XAI: Self-Supervised Automatic Semantic Interpretation for CNN

IF 6.8 · Q1 · Automation & Control Systems
Changqi Sun, Hao Xu, Yuntian Chen, Dongxiao Zhang
{"title":"AS-XAI: CNN自监督自动语义解释","authors":"Changqi Sun,&nbsp;Hao Xu,&nbsp;Yuntian Chen,&nbsp;Dongxiao Zhang","doi":"10.1002/aisy.202400359","DOIUrl":null,"url":null,"abstract":"<p>Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black-box” deep learning models. However, it remains difficult for existing methods to achieve the trade-off of the three key criteria in interpretability, namely, reliability, understandability, and usability, which hinder their practical applications. In this article, we propose a self-supervised automatic semantic interpretable explainable artificial intelligence (AS-XAI) framework, which utilizes transparent orthogonal embedding semantic extraction spaces and row-centered principal component analysis (PCA) for global semantic interpretation of model decisions in the absence of human interference, without additional computational costs. In addition, the invariance of filter feature high-rank decomposition is used to evaluate model sensitivity to different semantic concepts. Extensive experiments demonstrate that robust and orthogonal semantic spaces can be automatically extracted by AS-XAI, providing more effective global interpretability for convolutional neural networks (CNNs) and generating human-comprehensible explanations. The proposed approach offers broad fine-grained extensible practical applications, including shared semantic interpretation under out-of-distribution (OOD) categories, auxiliary explanations for species that are challenging to distinguish, and classification explanations from various perspectives. In a systematic evaluation by users with varying levels of AI knowledge, AS-XAI demonstrated superior “glass box” characteristics.</p>","PeriodicalId":93858,"journal":{"name":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","volume":"6 12","pages":""},"PeriodicalIF":6.8000,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400359","citationCount":"0","resultStr":"{\"title\":\"AS-XAI: Self-Supervised Automatic Semantic Interpretation for CNN\",\"authors\":\"Changqi Sun,&nbsp;Hao Xu,&nbsp;Yuntian Chen,&nbsp;Dongxiao Zhang\",\"doi\":\"10.1002/aisy.202400359\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black-box” deep learning models. However, it remains difficult for existing methods to achieve the trade-off of the three key criteria in interpretability, namely, reliability, understandability, and usability, which hinder their practical applications. In this article, we propose a self-supervised automatic semantic interpretable explainable artificial intelligence (AS-XAI) framework, which utilizes transparent orthogonal embedding semantic extraction spaces and row-centered principal component analysis (PCA) for global semantic interpretation of model decisions in the absence of human interference, without additional computational costs. In addition, the invariance of filter feature high-rank decomposition is used to evaluate model sensitivity to different semantic concepts. Extensive experiments demonstrate that robust and orthogonal semantic spaces can be automatically extracted by AS-XAI, providing more effective global interpretability for convolutional neural networks (CNNs) and generating human-comprehensible explanations. 
The proposed approach offers broad fine-grained extensible practical applications, including shared semantic interpretation under out-of-distribution (OOD) categories, auxiliary explanations for species that are challenging to distinguish, and classification explanations from various perspectives. In a systematic evaluation by users with varying levels of AI knowledge, AS-XAI demonstrated superior “glass box” characteristics.</p>\",\"PeriodicalId\":93858,\"journal\":{\"name\":\"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)\",\"volume\":\"6 12\",\"pages\":\"\"},\"PeriodicalIF\":6.8000,\"publicationDate\":\"2024-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400359\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/aisy.202400359\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aisy.202400359","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black-box” deep learning models. However, existing methods still struggle to balance the three key criteria of interpretability, namely reliability, understandability, and usability, which hinders their practical application. In this article, we propose a self-supervised automatic semantic interpretable explainable artificial intelligence (AS-XAI) framework, which uses transparent orthogonal embedding semantic extraction spaces and row-centered principal component analysis (PCA) for global semantic interpretation of model decisions, without human intervention and without additional computational cost. In addition, the invariance of the high-rank decomposition of filter features is used to evaluate the model's sensitivity to different semantic concepts. Extensive experiments demonstrate that AS-XAI automatically extracts robust, orthogonal semantic spaces, providing more effective global interpretability for convolutional neural networks (CNNs) and generating human-comprehensible explanations. The proposed approach supports a broad range of fine-grained, extensible practical applications, including shared semantic interpretation under out-of-distribution (OOD) categories, auxiliary explanations for species that are difficult to distinguish, and classification explanations from multiple perspectives. In a systematic evaluation by users with varying levels of AI knowledge, AS-XAI demonstrated superior “glass-box” characteristics.
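To make the row-centered PCA step concrete, the following is a minimal sketch of extracting an orthogonal semantic basis from a matrix of CNN filter activations. The function name `row_centered_pca`, the activation shapes, and the global-average-pooling assumption are ours for illustration; this is not the authors' released code.

```python
# A minimal, illustrative sketch of row-centered PCA over CNN filter
# activations (assumed setup, not the paper's implementation).
import numpy as np

def row_centered_pca(activations: np.ndarray, n_components: int) -> np.ndarray:
    """Extract orthogonal semantic directions from a filter-activation matrix.

    activations: (n_filters, n_samples) matrix, one row per filter, where each
    entry is e.g. the global-average-pooled response of that filter to one image.
    Returns: (n_components, n_samples) orthonormal semantic basis.
    """
    # Row-centering: subtract each filter's mean response, so PCA acts on how
    # a filter deviates across inputs rather than on its overall scale.
    centered = activations - activations.mean(axis=1, keepdims=True)

    # SVD yields the principal directions; rows of vt are orthonormal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]

# Toy usage: 512 filters responding to 200 images of one class.
rng = np.random.default_rng(0)
acts = rng.normal(size=(512, 200))
basis = row_centered_pca(acts, n_components=8)

# The extracted semantic axes are orthogonal by construction.
assert np.allclose(basis @ basis.T, np.eye(8), atol=1e-8)
```

The orthonormality check at the end mirrors the abstract's claim that the extracted semantic spaces are orthogonal: it falls out of the SVD rather than requiring any extra regularization.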
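The sensitivity evaluation can be sketched in the same spirit: compare the dominant subspace of a decomposition of the filter features before and after a semantic concept is perturbed, and read invariance of that subspace as low sensitivity. The perturbation strategy, rank choice, and similarity score below are all assumptions on our part, not the paper's exact formulation.

```python
# A hedged sketch of a sensitivity probe based on the stability of the
# dominant subspace of a filter-feature decomposition (assumed formulation).
import numpy as np

def dominant_subspace(features: np.ndarray, rank: int) -> np.ndarray:
    """Top-`rank` left-singular vectors of a (n_filters, n_samples) matrix."""
    u, _, _ = np.linalg.svd(features, full_matrices=False)
    return u[:, :rank]

def subspace_similarity(u_a: np.ndarray, u_b: np.ndarray) -> float:
    """Mean squared cosine of the principal angles between two subspaces.
    1.0 means the decomposition is invariant under the perturbation; lower
    values mean the concept moved the dominant subspace, i.e., the model
    is sensitive to it."""
    s = np.linalg.svd(u_a.T @ u_b, compute_uv=False)
    return float(np.mean(s ** 2))

rng = np.random.default_rng(1)
base = rng.normal(size=(512, 200))                      # original filter features
perturbed = base + 0.1 * rng.normal(size=base.shape)    # concept-perturbed features (assumed)

u0 = dominant_subspace(base, rank=16)
u1 = dominant_subspace(perturbed, rank=16)
print(f"invariance score: {subspace_similarity(u0, u1):.3f}")
```

In practice the perturbation would come from masking or editing a semantic region in the input images rather than from additive noise; the noise here only keeps the toy example self-contained.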
