NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning

M. Alzantot, Amy Widdicombe, S. Julier, M. Srivastava
{"title":"NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning","authors":"M. Alzantot, Amy Widdicombe, S. Julier, M. Srivastava","doi":"10.1109/SMARTCOMP.2019.00033","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black-boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of providing explanations of why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes in model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result and encourages producing an interpretable mask. Experiments using state-of-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image which are most relevant to the DNN decision. By showing a visual quality comparison between NeuroMask explanations and those of other methods, we find NeuroMask to be both accurate and interpretable.","PeriodicalId":253364,"journal":{"name":"2019 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"461 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP.2019.00033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. Despite this performance, these models are black boxes, and it is hard to understand how they reach their decisions. Over the past few years, researchers have studied the problem of explaining why DNNs produce the predictions they do. However, existing techniques are either obtrusive, requiring changes to model training, or suffer from low output quality. In this paper, we present NeuroMask, a novel method for generating interpretable explanations of classification results. Applied to an image classification model, NeuroMask identifies the image regions most important to the classifier's output by applying a mask that hides or reveals different parts of the image before the masked image is fed back into the model. The mask values are tuned by minimizing a cost function designed to preserve the classification result while encouraging an interpretable mask. Experiments with state-of-the-art Convolutional Neural Networks for image recognition on two datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image most relevant to the DNN's decision. A visual comparison between NeuroMask explanations and those of other methods shows NeuroMask to be both accurate and interpretable.
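The abstract describes the mechanism at a high level: a mask over the input is optimized so that the classifier's prediction is preserved while as much of the image as possible is hidden. Below is a minimal PyTorch sketch of that idea. It is not the authors' implementation; the specific loss terms (cross-entropy to preserve the prediction, an area/L1 penalty for sparsity, a total-variation penalty for smoothness), their weights, and the helper name learn_mask are illustrative assumptions.

import torch
import torch.nn.functional as F

def learn_mask(model, image, target_class, steps=300, lr=0.1,
               area_weight=0.05, tv_weight=0.1):
    """Optimize a per-pixel mask in (0, 1) that keeps the model's
    prediction while hiding as much of the input image as possible.
    Hyperparameter values here are illustrative, not tuned."""
    model.eval()  # freeze batch-norm/dropout; gradients still reach the mask
    # Unconstrained logits; a sigmoid keeps the mask values in (0, 1).
    mask_logits = torch.zeros(1, 1, *image.shape[-2:], requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)

    for _ in range(steps):
        mask = torch.sigmoid(mask_logits)
        # Multiplicative masking: pixels fade to zero where the mask is small.
        # (Blending with a blurred copy of the image is another common choice.)
        masked_image = image * mask
        logits = model(masked_image)
        # 1) Preserve the classification result on the masked input.
        class_loss = F.cross_entropy(logits, target_class)
        # 2) Sparsity: reveal as little of the image as possible.
        area_loss = mask.mean()
        # 3) Smoothness (total variation): favor contiguous, readable regions.
        tv_loss = ((mask[..., 1:, :] - mask[..., :-1, :]).abs().mean()
                   + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean())
        loss = class_loss + area_weight * area_loss + tv_weight * tv_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask_logits).detach()

Given a frozen classifier model and a preprocessed image tensor of shape (1, 3, H, W), calling learn_mask(model, image, torch.tensor([predicted_class])) returns a per-pixel mask in (0, 1) that can be overlaid on the image as a saliency map.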