NAM: What Does a Neural Network See?

Katarzyna Filus, J. Domańska
{"title":"NAM:神经网络能看到什么?","authors":"Katarzyna Filus, J. Domańska","doi":"10.1109/IJCNN55064.2022.9892442","DOIUrl":null,"url":null,"abstract":"Deep Convolutional Neural Networks (CNNs) still lack interpretability and are often treated as miraculous blackbox machines. Therefore, when an intelligent system fails, it is usually difficult to troubleshoot the problems. Among others, these issues can be caused by incorrect decisions of the CNN classifier. The other reason can be selective “blindness” of the CNN - caused by an insufficient generalization of the convolutional feature extractor. To better understand the CNN decisions, methods from the Class Activation Mapping (CAM) family have been introduced. In contrast to CAM techniques, which focus on the model's predictions (thus a classifier), we propose a simple yet informative way to visualize network activation - Network Activation Mapping (NAM). Our method targets the most important part of the CNN - a convolutional feature extractor. Opposed to CAM methods, NAM is class-and classifier-independent and provides insight into what the neural network focuses on during the feature extraction process and what features it finds the most prominent in the examined image. Due to the classifier-independence, it can be used with all CNN models. In our experiments, we demonstrate how the performance of a convolutional feature extractor can be preliminarily evaluated using NAM. We also present results obtained for a simple NAM-based visual attention mechanism, which allows us to filter out less informative regions of the image and facilitates the decision making process.","PeriodicalId":106974,"journal":{"name":"2022 International Joint Conference on Neural Networks (IJCNN)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"NAM: What Does a Neural Network See?\",\"authors\":\"Katarzyna Filus, J. Domańska\",\"doi\":\"10.1109/IJCNN55064.2022.9892442\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Convolutional Neural Networks (CNNs) still lack interpretability and are often treated as miraculous blackbox machines. Therefore, when an intelligent system fails, it is usually difficult to troubleshoot the problems. Among others, these issues can be caused by incorrect decisions of the CNN classifier. The other reason can be selective “blindness” of the CNN - caused by an insufficient generalization of the convolutional feature extractor. To better understand the CNN decisions, methods from the Class Activation Mapping (CAM) family have been introduced. In contrast to CAM techniques, which focus on the model's predictions (thus a classifier), we propose a simple yet informative way to visualize network activation - Network Activation Mapping (NAM). Our method targets the most important part of the CNN - a convolutional feature extractor. Opposed to CAM methods, NAM is class-and classifier-independent and provides insight into what the neural network focuses on during the feature extraction process and what features it finds the most prominent in the examined image. Due to the classifier-independence, it can be used with all CNN models. In our experiments, we demonstrate how the performance of a convolutional feature extractor can be preliminarily evaluated using NAM. 
We also present results obtained for a simple NAM-based visual attention mechanism, which allows us to filter out less informative regions of the image and facilitates the decision making process.\",\"PeriodicalId\":106974,\"journal\":{\"name\":\"2022 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN55064.2022.9892442\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN55064.2022.9892442","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep Convolutional Neural Networks (CNNs) still lack interpretability and are often treated as miraculous black-box machines. Therefore, when an intelligent system fails, it is usually difficult to troubleshoot the problem. Among others, these issues can be caused by incorrect decisions of the CNN classifier. Another reason can be selective “blindness” of the CNN, caused by insufficient generalization of the convolutional feature extractor. To better understand CNN decisions, methods from the Class Activation Mapping (CAM) family have been introduced. In contrast to CAM techniques, which focus on the model's predictions (and thus on the classifier), we propose a simple yet informative way to visualize network activation: Network Activation Mapping (NAM). Our method targets the most important part of the CNN, the convolutional feature extractor. Unlike CAM methods, NAM is class- and classifier-independent and provides insight into what the neural network focuses on during feature extraction and which features it finds most prominent in the examined image. Due to this classifier-independence, it can be used with all CNN models. In our experiments, we demonstrate how the performance of a convolutional feature extractor can be preliminarily evaluated using NAM. We also present results obtained for a simple NAM-based visual attention mechanism, which allows us to filter out less informative regions of the image and facilitates the decision-making process.
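
To make the idea concrete, below is a minimal, hypothetical sketch of a NAM-style activation map and the masking step a NAM-based attention mechanism could use. It assumes a PyTorch/torchvision setup and a simple channel-mean aggregation of the last convolutional layer's activations; the paper's exact aggregation, normalization, and thresholding choices may differ, and the function names (nam_map, mask_less_informative) and the threshold value are illustrative only.

```python
# Hypothetical NAM-style activation map: aggregate the feature maps of the
# convolutional feature extractor (no class label or classifier involved).
# Assumes torch/torchvision; channel-mean aggregation is an illustrative
# choice, not necessarily the paper's exact formulation.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Load a pretrained CNN and keep only its convolutional feature extractor
# (for ResNet-50 this drops the average pooling and the fully connected head).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(model.children())[:-2])
feature_extractor.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def nam_map(image: Image.Image) -> torch.Tensor:
    """Return an activation map in [0, 1] at the input resolution."""
    x = preprocess(image).unsqueeze(0)                 # (1, 3, 224, 224)
    with torch.no_grad():
        feats = feature_extractor(x)                   # (1, C, h, w)
    amap = feats.mean(dim=1, keepdim=True)             # aggregate channels
    amap = F.interpolate(amap, size=x.shape[-2:],      # upsample to image size
                         mode="bilinear", align_corners=False)
    amap = amap - amap.min()
    amap = amap / (amap.max() + 1e-8)                  # normalize to [0, 1]
    return amap.squeeze()

def mask_less_informative(image: Image.Image, threshold: float = 0.3):
    """Toy NAM-based attention: zero out low-activation regions."""
    x = preprocess(image)                              # (3, 224, 224)
    amap = nam_map(image)                              # (224, 224)
    return x * (amap > threshold).float()              # masked input tensor

# Example usage (assumes "cat.jpg" exists):
#   img = Image.open("cat.jpg").convert("RGB")
#   heatmap = nam_map(img)                     # visualize e.g. with matplotlib
#   attended = mask_less_informative(img, 0.3) # less informative regions removed
```

Because the map is built only from the feature extractor's activations, no class label or classifier head enters the computation, which is what makes such a visualization class- and classifier-independent and applicable to any CNN backbone.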