Out-of-Distribution Detection through Relative Activation-Deactivation Abstractions

Zhen Zhang, Peng Wu, Yuhang Chen, Jing Su
{"title":"Out-of-Distribution Detection through Relative Activation-Deactivation Abstractions","authors":"Zhen Zhang, Peng Wu, Yuhang Chen, Jing Su","doi":"10.1109/ISSRE52982.2021.00027","DOIUrl":null,"url":null,"abstract":"A deep learning model always misclassifies an out-of-distribution input, which is not of any category that the deep learning model is trained for. Hence, out-of-distribution detection is practically an important task for ensuring the safety and reliability of a deep learning based system. We present in this paper the notion of relative activation and deactivation to interpret the inference behavior of the deep learning model. Then, we propose a relative activation-deactivation abstraction approach to characterize the decision logic of the deep learning model. The relative activation-deactivation abstractions enjoy close intra-class aggregation for each category under training, as well as diverse inter-class separation between various categories under training. We further propose an out-of-distribution detection algorithm based on the relative activation-deactivation abstraction approach, following the underlying principle that the relative activation-deactivation abstraction of a deep learning model under an out-of-distribution input is far away from the one for the predicted category the deep learning model outputs. Our detection algorithm does not require any designed perturbation to the input data, nor any hyperparameter tuning to the deep learning model with out-of-distribution data. We evaluate the detection algorithm with 8 typical benchmark datasets in literature. The experimental results show that our detection algorithm can achieve better and more stable performance than the state-of-the-art white-box abstraction based detection algorithms, with significantly more true positive and less false positive alerts for out-of-distribution detection.","PeriodicalId":162410,"journal":{"name":"2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSRE52982.2021.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

A deep learning model misclassifies every out-of-distribution input, i.e., an input that belongs to none of the categories the model was trained on. Out-of-distribution detection is therefore an important task for ensuring the safety and reliability of deep learning based systems. In this paper we present the notion of relative activation and deactivation to interpret the inference behavior of a deep learning model, and propose a relative activation-deactivation abstraction approach to characterize its decision logic. The relative activation-deactivation abstractions exhibit tight intra-class aggregation for each trained category, as well as clear inter-class separation between different trained categories. We further propose an out-of-distribution detection algorithm based on this abstraction approach, following the principle that the abstraction a deep learning model produces for an out-of-distribution input lies far from the abstraction of the category the model predicts. Our detection algorithm requires neither crafted perturbations of the input data nor hyperparameter tuning of the deep learning model with out-of-distribution data. We evaluate the algorithm on 8 typical benchmark datasets from the literature. The experimental results show that it achieves better and more stable performance than state-of-the-art white-box abstraction based detection algorithms, with significantly more true positive and fewer false positive alerts for out-of-distribution detection.
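To make the detection principle concrete, below is a minimal sketch of the idea the abstract describes, not the authors' exact algorithm. All names (relative_pattern, fit_class_abstractions, is_ood), the per-layer mean threshold for deciding "relative activation", the majority-vote class abstraction, and the normalized Hamming distance with cutoff tau are illustrative assumptions filled in for the sketch; the paper defines its own abstraction and distance.

```python
# Sketch: per-class relative activation-deactivation abstractions
# and a distance-based OOD check (assumed details, see lead-in).
import numpy as np

def relative_pattern(hidden: np.ndarray) -> np.ndarray:
    """Binarize one hidden-layer vector: a neuron counts as 'relatively
    activated' (1) if it exceeds the mean activation of its layer,
    'relatively deactivated' (0) otherwise."""
    return (hidden > hidden.mean()).astype(np.uint8)

def fit_class_abstractions(hiddens: list[np.ndarray],
                           labels: np.ndarray,
                           num_classes: int) -> dict[int, np.ndarray]:
    """Abstract each training category as the majority activation
    pattern over its training inputs (one bit per neuron)."""
    patterns = np.stack([relative_pattern(h) for h in hiddens])
    return {c: (patterns[labels == c].mean(axis=0) >= 0.5).astype(np.uint8)
            for c in range(num_classes)}

def is_ood(hidden: np.ndarray,
           predicted_class: int,
           abstractions: dict[int, np.ndarray],
           tau: float = 0.25) -> bool:
    """Flag the input as out-of-distribution when its pattern is far
    (normalized Hamming distance > tau) from the abstraction of the
    category the model actually predicts."""
    pattern = relative_pattern(hidden)
    reference = abstractions[predicted_class]
    return float(np.mean(pattern != reference)) > tau
```

In this sketch, tau would be calibrated only on held-in validation data, e.g., chosen so that a desired fraction of in-distribution inputs pass, which is consistent with the paper's claim that no out-of-distribution data is needed for tuning.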