Improving the Robustness of a Convolutional Neural Network with Out-of-Distribution Data Fine-Tuning and Image Preprocessing

Shafinul Haque, A. Liu, Serena Liu, Jonathan H. Chan

The 12th International Conference on Advances in Information Technology, 29 June 2021. DOI: https://doi.org/10.1145/3468784.3470655

Deep convolutional neural networks trained on readily available datasets often suffer a drop in performance when applied to new data from a different domain. Making models generalize well to data in a new domain is the task of domain adaptation. Recently, a simple method, known as the Out-of-Distribution Image Detector for Neural Networks (ODIN), was proposed for identifying out-of-distribution (OOD) images in a dataset. This paper proposes fine-tuning an image classifier using OOD images detected in an ideal training set to improve the model's ability to classify real-life images. This work investigates the effectiveness of that technique, as well as image preprocessing methods such as background removal and image cropping, at increasing the robustness of a ResNet50V2 baseline image classifier on a multi-class classification task. Fine-tuning with OOD images identified by ODIN consistently improved the model's performance, and the combination of image cropping and fine-tuning with OOD images yielded the largest improvement.
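
The abstract references ODIN for flagging out-of-distribution images and a subsequent fine-tuning step. As a rough sketch only, and not the authors' exact pipeline, the snippet below illustrates ODIN-style scoring with a TensorFlow/Keras classifier such as ResNet50V2; the temperature, perturbation size, the 0.5 threshold, and the names base_model, train_images, and train_labels are illustrative assumptions rather than values or identifiers taken from the paper.

```python
# Hedged sketch of ODIN-style OOD scoring (after Liang et al.) for a Keras
# classifier. The hyperparameters below are illustrative, not the paper's values.
import tensorflow as tf

def odin_scores(model, images, temperature=1000.0, epsilon=0.0014):
    """Max temperature-scaled softmax probability after an ODIN-style input
    perturbation; lower scores suggest out-of-distribution images.
    Assumes `model` outputs raw logits (no softmax layer)."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        logits = model(images, training=False)
        log_probs = tf.nn.log_softmax(logits / temperature, axis=-1)
        # Negative log-probability of the predicted class
        # (temperature scaling does not change the argmax).
        loss = -tf.reduce_max(log_probs, axis=-1)
    grads = tape.gradient(loss, images)
    # Nudge each image toward higher predicted-class confidence, which
    # tends to separate in-distribution from OOD inputs.
    perturbed = images - epsilon * tf.sign(grads)
    probs = tf.nn.softmax(model(perturbed, training=False) / temperature, axis=-1)
    return tf.reduce_max(probs, axis=-1)

# Illustrative usage (hypothetical names and threshold): flag low-scoring
# training images as OOD, then fine-tune the classifier on them.
# scores = odin_scores(base_model, train_images)
# ood_mask = scores.numpy() < 0.5
# base_model.compile(
#     optimizer=tf.keras.optimizers.Adam(1e-5),
#     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# base_model.fit(train_images[ood_mask], train_labels[ood_mask], epochs=3)
```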