Out-of-distribution detection based on multi-classifiers

IF 1.2 Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Weijie Jiang, Yuanlong Yu
{"title":"基于多分类器的分布外检测","authors":"Weijie Jiang,&nbsp;Yuanlong Yu","doi":"10.1049/ccs2.12079","DOIUrl":null,"url":null,"abstract":"<p>Existing out-of-distribution detection models rely on the prediction of a single classifier and are sensitive to classifier bias, making it difficult to discriminate similar feature out-of-distribution data. This article proposed a multi-classifier-based model and two strategies to enhance the performance of the model. The model first trains several different base classifiers and obtains the predictions of the test data on each base classifier, then uses cross-entropy to calculate the dispersion between these predictions, and finally uses the dispersion as a metric to identify the out-of-distribution data. A large scatter implies inconsistency in the predictions of the base classifier, and the greater the probability of belonging to the out-of-distribution data. The first strategy is applied in the training process of the model to increase the difference between base classifiers by using various scales of Label smoothing regularisation. The second strategy is applied to the inference process of the model by changing the mean and variance of the activations in the neural network to perturb the inference results of the test data. These two strategies can effectively amplify the discrepancy in the dispersion of the in-distribution and out-of-distribution data. The experimental results show that the method in this article can effectively improve the performance of the model in the detection of different types of out-of-distribution data, improve the robustness of deep neural networks (DNN) in the face of unknown classes, and promote the application of DNN in systems and engineering with high security requirements.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"5 2","pages":"95-108"},"PeriodicalIF":1.2000,"publicationDate":"2023-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12079","citationCount":"0","resultStr":"{\"title\":\"Out-of-distribution detection based on multi-classifiers\",\"authors\":\"Weijie Jiang,&nbsp;Yuanlong Yu\",\"doi\":\"10.1049/ccs2.12079\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Existing out-of-distribution detection models rely on the prediction of a single classifier and are sensitive to classifier bias, making it difficult to discriminate similar feature out-of-distribution data. This article proposed a multi-classifier-based model and two strategies to enhance the performance of the model. The model first trains several different base classifiers and obtains the predictions of the test data on each base classifier, then uses cross-entropy to calculate the dispersion between these predictions, and finally uses the dispersion as a metric to identify the out-of-distribution data. A large scatter implies inconsistency in the predictions of the base classifier, and the greater the probability of belonging to the out-of-distribution data. The first strategy is applied in the training process of the model to increase the difference between base classifiers by using various scales of Label smoothing regularisation. The second strategy is applied to the inference process of the model by changing the mean and variance of the activations in the neural network to perturb the inference results of the test data. 
These two strategies can effectively amplify the discrepancy in the dispersion of the in-distribution and out-of-distribution data. The experimental results show that the method in this article can effectively improve the performance of the model in the detection of different types of out-of-distribution data, improve the robustness of deep neural networks (DNN) in the face of unknown classes, and promote the application of DNN in systems and engineering with high security requirements.</p>\",\"PeriodicalId\":33652,\"journal\":{\"name\":\"Cognitive Computation and Systems\",\"volume\":\"5 2\",\"pages\":\"95-108\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2023-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12079\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Computation and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12079\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12079","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Existing out-of-distribution detection models rely on the prediction of a single classifier and are sensitive to classifier bias, which makes it difficult to discriminate out-of-distribution data whose features are similar to the training distribution. This article proposes a multi-classifier-based model and two strategies to enhance its performance. The model first trains several different base classifiers and obtains each base classifier's prediction for the test data, then uses cross-entropy to calculate the dispersion between these predictions, and finally uses the dispersion as a metric to identify out-of-distribution data. A large dispersion implies that the base classifiers' predictions are inconsistent and that the sample is more likely to be out-of-distribution. The first strategy is applied during training and increases the differences between base classifiers by using various scales of label smoothing regularisation. The second strategy is applied during inference and perturbs the predictions for the test data by changing the mean and variance of the activations in the neural network. These two strategies effectively amplify the gap between the dispersion of in-distribution and out-of-distribution data. The experimental results show that the proposed method improves the detection of different types of out-of-distribution data, improves the robustness of deep neural networks (DNNs) when facing unknown classes, and promotes the application of DNNs in systems and engineering with high security requirements.
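A minimal sketch of the dispersion computation described above (not the authors' released code). It assumes that the dispersion of a test sample is the average pairwise cross-entropy between the softmax outputs of the M base classifiers; larger values indicate likely out-of-distribution inputs.

import torch
import torch.nn.functional as F

def dispersion_score(logits_per_classifier: torch.Tensor) -> torch.Tensor:
    # logits_per_classifier: (M, B, C) logits from M base classifiers for B test samples.
    # Returns a (B,) dispersion score; larger values suggest out-of-distribution inputs.
    probs = F.softmax(logits_per_classifier, dim=-1)          # p_i, shape (M, B, C)
    log_probs = F.log_softmax(logits_per_classifier, dim=-1)  # log p_j, shape (M, B, C)
    m = probs.shape[0]
    score = logits_per_classifier.new_zeros(probs.shape[1])
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            # Cross-entropy H(p_i, p_j) = -sum_c p_i(c) * log p_j(c)
            score += -(probs[i] * log_probs[j]).sum(dim=-1)
    return score / (m * (m - 1))

# Hypothetical usage: stack logits from the trained base classifiers and threshold the score.
# logits = torch.stack([clf(x) for clf in classifiers])  # (M, B, C)
# is_ood = dispersion_score(logits) > threshold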

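The first strategy (training-time diversity) could be realised as below. This sketch assumes that "various scales of label smoothing regularisation" means giving each base classifier its own smoothing factor; the epsilon values are illustrative, not taken from the paper.

import torch.nn as nn

# Hypothetical per-classifier smoothing scales; the paper does not list its values here.
smoothing_per_classifier = [0.0, 0.05, 0.1, 0.2]

def train_step(classifiers, optimizers, x, y):
    # Each base classifier is updated with its own label-smoothing strength, so the
    # classifiers learn slightly different decision boundaries and disagree more on
    # inputs far from the training distribution.
    for clf, opt, eps in zip(classifiers, optimizers, smoothing_per_classifier):
        opt.zero_grad()
        # label_smoothing is supported by nn.CrossEntropyLoss in PyTorch >= 1.10.
        loss = nn.CrossEntropyLoss(label_smoothing=eps)(clf(x), y)
        loss.backward()
        opt.step()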


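The second strategy (inference-time perturbation) can be sketched as follows, assuming that the mean and variance being changed are the running statistics of the network's BatchNorm layers; the noise scale is an illustrative choice, not a value from the paper.

import copy
import torch
import torch.nn as nn

def perturb_activation_stats(model: nn.Module, noise_scale: float = 0.05) -> nn.Module:
    # Return a copy of the model whose BatchNorm running mean/variance are randomly
    # perturbed, so that predictions for test inputs are slightly shifted at inference.
    perturbed = copy.deepcopy(model)
    for module in perturbed.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)) \
                and module.running_mean is not None:
            with torch.no_grad():
                module.running_mean += noise_scale * torch.randn_like(module.running_mean)
                module.running_var *= (1.0 + noise_scale
                                       * torch.randn_like(module.running_var)).clamp(min=0.1)
    return perturbed

# Per the abstract, this perturbation widens the gap between the dispersion of
# in-distribution and out-of-distribution data when scoring with the base classifiers.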

Source journal: Cognitive Computation and Systems (Computer Science - Computer Science Applications)
CiteScore: 2.50 | Self-citation rate: 0.00% | Publication volume: 39 | Review time: 10 weeks