{"title":"基于多分类器的分布外检测","authors":"Weijie Jiang, Yuanlong Yu","doi":"10.1049/ccs2.12079","DOIUrl":null,"url":null,"abstract":"<p>Existing out-of-distribution detection models rely on the prediction of a single classifier and are sensitive to classifier bias, making it difficult to discriminate similar feature out-of-distribution data. This article proposed a multi-classifier-based model and two strategies to enhance the performance of the model. The model first trains several different base classifiers and obtains the predictions of the test data on each base classifier, then uses cross-entropy to calculate the dispersion between these predictions, and finally uses the dispersion as a metric to identify the out-of-distribution data. A large scatter implies inconsistency in the predictions of the base classifier, and the greater the probability of belonging to the out-of-distribution data. The first strategy is applied in the training process of the model to increase the difference between base classifiers by using various scales of Label smoothing regularisation. The second strategy is applied to the inference process of the model by changing the mean and variance of the activations in the neural network to perturb the inference results of the test data. These two strategies can effectively amplify the discrepancy in the dispersion of the in-distribution and out-of-distribution data. The experimental results show that the method in this article can effectively improve the performance of the model in the detection of different types of out-of-distribution data, improve the robustness of deep neural networks (DNN) in the face of unknown classes, and promote the application of DNN in systems and engineering with high security requirements.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"5 2","pages":"95-108"},"PeriodicalIF":1.2000,"publicationDate":"2023-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12079","citationCount":"0","resultStr":"{\"title\":\"Out-of-distribution detection based on multi-classifiers\",\"authors\":\"Weijie Jiang, Yuanlong Yu\",\"doi\":\"10.1049/ccs2.12079\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Existing out-of-distribution detection models rely on the prediction of a single classifier and are sensitive to classifier bias, making it difficult to discriminate similar feature out-of-distribution data. This article proposed a multi-classifier-based model and two strategies to enhance the performance of the model. The model first trains several different base classifiers and obtains the predictions of the test data on each base classifier, then uses cross-entropy to calculate the dispersion between these predictions, and finally uses the dispersion as a metric to identify the out-of-distribution data. A large scatter implies inconsistency in the predictions of the base classifier, and the greater the probability of belonging to the out-of-distribution data. The first strategy is applied in the training process of the model to increase the difference between base classifiers by using various scales of Label smoothing regularisation. The second strategy is applied to the inference process of the model by changing the mean and variance of the activations in the neural network to perturb the inference results of the test data. 
These two strategies can effectively amplify the discrepancy in the dispersion of the in-distribution and out-of-distribution data. The experimental results show that the method in this article can effectively improve the performance of the model in the detection of different types of out-of-distribution data, improve the robustness of deep neural networks (DNN) in the face of unknown classes, and promote the application of DNN in systems and engineering with high security requirements.</p>\",\"PeriodicalId\":33652,\"journal\":{\"name\":\"Cognitive Computation and Systems\",\"volume\":\"5 2\",\"pages\":\"95-108\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2023-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12079\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Computation and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12079\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12079","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Out-of-distribution detection based on multi-classifiers
Existing out-of-distribution detection models rely on the prediction of a single classifier and are sensitive to classifier bias, which makes it difficult to discriminate out-of-distribution data whose features resemble the in-distribution data. This article proposes a multi-classifier-based model and two strategies to enhance its performance. The model first trains several different base classifiers and obtains each base classifier's prediction for the test data, then uses cross-entropy to calculate the dispersion between these predictions, and finally uses the dispersion as a metric to identify out-of-distribution data. A large dispersion implies that the base classifiers' predictions are inconsistent and that the data are more likely to be out-of-distribution. The first strategy is applied during training: label smoothing regularisation at various scales is used to increase the differences between the base classifiers. The second strategy is applied during inference: the mean and variance of the activations in the neural network are changed to perturb the inference results for the test data. Together, these two strategies effectively amplify the gap between the dispersion of in-distribution and out-of-distribution data. The experimental results show that the proposed method effectively improves detection performance on different types of out-of-distribution data, improves the robustness of deep neural networks (DNNs) in the face of unknown classes, and promotes the application of DNNs in systems and engineering with high security requirements.
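The abstract does not give the exact formula for the dispersion, so the following is a minimal sketch of one plausible reading: the average pairwise cross-entropy between the softmax outputs of the base classifiers, with a larger value indicating disagreement and hence a higher likelihood that the input is out-of-distribution. The function names, the pairwise-averaging scheme, and the thresholding step are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch (assumed, not the paper's code): score a test sample by the
# dispersion of several base classifiers' softmax predictions, measured
# with cross-entropy; a large dispersion is treated as evidence of OOD data.
import numpy as np

def cross_entropy(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Cross-entropy H(p, q) between two categorical distributions."""
    return float(-np.sum(p * np.log(q + eps)))

def dispersion_score(predictions: list[np.ndarray]) -> float:
    """Average pairwise cross-entropy between base-classifier predictions.

    A large value means the base classifiers disagree, which the abstract
    treats as a sign that the input is out-of-distribution.
    """
    k = len(predictions)
    total, pairs = 0.0, 0
    for i in range(k):
        for j in range(k):
            if i != j:
                total += cross_entropy(predictions[i], predictions[j])
                pairs += 1
    return total / pairs

# Toy usage: three base classifiers on a 5-class problem.
rng = np.random.default_rng(0)
agree = [np.array([0.9, 0.05, 0.02, 0.02, 0.01])] * 3       # consistent predictions (in-distribution-like)
disagree = [rng.dirichlet(np.ones(5)) for _ in range(3)]     # inconsistent predictions (OOD-like)
print(dispersion_score(agree), dispersion_score(disagree))
# A sample would be flagged as OOD when its dispersion exceeds a threshold
# chosen on validation data (the threshold selection is an assumption here).
```

In this reading, the base classifiers only need to agree on in-distribution inputs, so the two strategies in the abstract (label smoothing at different scales during training, and perturbing activation statistics at inference) can be seen as ways of widening the disagreement on out-of-distribution inputs without destroying agreement on in-distribution ones.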