Adversarial Artificial Intelligence for Overhead Imagery Classification Models
Charles Rogers, John Bugg, C. Nyheim, Will Gebhardt, Brian Andris, Evan Heitman, C. Fleming
2019 Systems and Information Engineering Design Symposium (SIEDS), published 2019-04-26
DOI: 10.1109/SIEDS.2019.8735608 (https://doi.org/10.1109/SIEDS.2019.8735608)
Citations: 3
Abstract
In overhead object detection, computers are increasingly replacing humans at spotting and identifying specific items within images through the use of machine learning (ML). These ML systems must be both accurate and robust. Accuracy means the results can be trusted enough to substitute for the manual deduction process; robustness is the degree to which the network can tolerate discrepancies within the images. One way to gauge robustness is through adversarial methods: adversarial algorithms are trained to produce image perturbations that reduce the accuracy of an existing classification model, and the greater the perturbation a model can withstand, the more robust it is. In this paper, we compare existing deep neural network models and explore advances in adversarial AI. While there is some published research on AI and adversarial attacks, very little addresses their application to overhead imagery. This paper focuses on overhead imagery, specifically imagery of ships. Using a public Kaggle dataset, we developed multiple models to detect ships in overhead imagery, based on ResNet50, DenseNet201, and InceptionV3. The goal of the adversarial work is to manipulate an image so that its contents are misclassified. We focus specifically on producing perturbations that can be recreated in the physical world, in order to account for physical conditions, whether intentional or not, that could reduce the accuracy of our network. While there are military applications for this specific research, the general findings apply to AI-based overhead image classification more broadly. This work explores both the vulnerabilities of existing classifier neural network models and the visualization of these vulnerabilities.
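The abstract describes two technical steps: adapting pretrained classifiers (ResNet50, DenseNet201, InceptionV3) to detect ships in overhead imagery, and perturbing images so that an existing classifier misclassifies them. The sketch below is a minimal illustration of that idea, not the authors' implementation: it adapts a torchvision ResNet50 to a hypothetical two-class ship/no-ship task and applies a fast gradient sign method (FGSM) style perturbation, which stands in for whichever attack the paper actually uses. The model weights, epsilon value, and input tensor here are assumptions made only for illustration.

```python
# Minimal sketch (not the authors' code): a ResNet50 ship/no-ship classifier
# adapted via transfer learning, attacked with an FGSM-style perturbation.
# Assumes PyTorch + torchvision; the inputs below are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# 1. Adapt a pretrained ResNet50 to a binary ship / no-ship task.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: no-ship, ship
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    The larger the epsilon a model tolerates without misclassifying,
    the more robust it is -- the framing used in the abstract.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss (sign of the gradient),
    # then clamp back to the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: `chip` stands in for a 3x224x224 image chip in [0, 1],
# `label` for its true class index (1 = ship).
chip = torch.rand(1, 3, 224, 224)
label = torch.tensor([1])
adv_chip = fgsm_perturb(model, chip, label)
print(model(adv_chip).argmax(dim=1))  # may now report the wrong class
```

The same loop can be run at increasing epsilon values to trace how quickly each backbone's accuracy degrades, which is one simple way to compare the robustness of the ResNet50, DenseNet201, and InceptionV3 variants the paper trains. Physically realizable perturbations, the paper's stated focus, add further constraints (e.g., printable patches rather than per-pixel noise) that this digital-only sketch does not capture.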