Knowledge-Aware Neuron Interpretation for Scene Classification

Yong Guan, Freddy Lecue, Jiaoyan Chen, Ru Li, Jeff Z. Pan
DOI: 10.48550/arXiv.2401.15820
Venue: AAAI Conference on Artificial Intelligence, pages 1950-1958
Publication date: 2024-01-29

Abstract

Although neural models have achieved remarkable performance, they are still met with skepticism due to their lack of transparency. To this end, explaining model predictions is attracting more and more attention. However, current methods rarely incorporate external knowledge and still suffer from three limitations: (1) Neglecting concept completeness: merely selecting concepts may not be sufficient to explain a prediction. (2) Lacking concept fusion: failing to merge semantically-equivalent concepts. (3) Difficulty in manipulating model behavior: lacking verification of explanations against the original model. To address these issues, we propose a novel knowledge-aware neuron interpretation framework to explain model predictions for image scene classification. Specifically, for concept completeness, we derive the core concepts of a scene from a knowledge graph, ConceptNet, to gauge the completeness of concepts. Our method, incorporating complete concepts, provides better prediction explanations than the baselines. Furthermore, for concept fusion, we introduce a knowledge graph-based method called Concept Filtering, which yields a gain of over 23 percentage points on neuron behaviors for neuron interpretation. Finally, we propose Model Manipulation, which studies whether the core concepts derived from ConceptNet can be employed to manipulate model behavior. The results show that core concepts can effectively improve the performance of the original model by over 26%.
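To make the Concept Filtering idea concrete, the following is a minimal, hypothetical sketch of merging semantically-equivalent neuron concepts via knowledge-graph relations. The edge list, function names, and neuron labels here are invented for illustration; the paper itself queries ConceptNet, whose relations include `Synonym` and `IsA`.

```python
# Hypothetical sketch of concept fusion: collapse concepts linked by a
# chosen relation (e.g. Synonym) into one canonical concept before
# interpreting neurons. All data below is toy data, not from the paper.

def build_merge_map(edges, relations=("Synonym",)):
    """Union-find over concepts linked by the given relation types."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, rel, b in edges:
        if rel in relations:
            parent[find(a)] = find(b)

    # Map every seen concept to its canonical representative.
    return {c: find(c) for c in parent}


def filter_concepts(neuron_concepts, merge_map):
    """Replace each neuron's concepts with their merged canonical forms."""
    return {
        neuron: sorted({merge_map.get(c, c) for c in concepts})
        for neuron, concepts in neuron_concepts.items()
    }


# Toy ConceptNet-style edges: (concept_a, relation, concept_b).
EDGES = [
    ("sofa", "Synonym", "couch"),
    ("tv", "Synonym", "television"),
    ("armchair", "IsA", "chair"),  # not merged: IsA is excluded here
]

neuron_concepts = {
    "unit_42": ["sofa", "couch", "tv"],
    "unit_7": ["television", "armchair"],
}

merge_map = build_merge_map(EDGES)
print(filter_concepts(neuron_concepts, merge_map))
# → {'unit_42': ['couch', 'television'], 'unit_7': ['armchair', 'television']}
```

After fusion, "sofa" and "couch" count as one concept, so a neuron firing on both is credited with a single coherent behavior rather than two fragmented ones.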