Adversarial Artificial Intelligence for Overhead Imagery Classification Models

Charles Rogers, John Bugg, C. Nyheim, Will Gebhardt, Brian Andris, Evan Heitman, C. Fleming
{"title":"Adversarial Artificial Intelligence for Overhead Imagery Classification Models","authors":"Charles Rogers, John Bugg, C. Nyheim, Will Gebhardt, Brian Andris, Evan Heitman, C. Fleming","doi":"10.1109/SIEDS.2019.8735608","DOIUrl":null,"url":null,"abstract":"In overhead object detection, computers are increasingly replacing humans at spotting and identifying specific items within images through the use of machine learning (ML). These ML programs must be both accurate and robust. Accuracy means the results must be trusted enough to substitute for the manual deduction process. Robustness is the magnitude to which the network can handle discrepancies within the images. One way to gauge the robustness is through the use of adversarial networks. Adversarial algorithms are trained using perturbations of the image to reduce the accuracy of an existing classification model. The greater degree of perturbations a model can withstand, the more robust it is. In this paper, comparisons of existing deep neural network models and the advancement of adversarial AI are explored. While there is some published research about AI and adversarial networks, very little discusses this particular utilization for overhead imagery. This paper focuses on overhead imagery, specifically that of ships. Using a public Kaggle dataset, we developed multiple models to detect ships in overhead imagery, specifically ResNet50, DenseNet201, and InceptionV3. The goal of the adversarial works is to manipulate an image so that its contents are misclassified. This paper focuses specifically on producing perturbations that can be recreated in the physical world. This serves to account for physical conditions, whether intentional or not, that could reduce accuracy within our network. While there are military applications for this specific research, the general findings can be applied to all AI overhead image classification topics. This work will explore both the vulnerabilities of existing classifier neural net models and the visualization of these vulnerabilities.","PeriodicalId":265421,"journal":{"name":"2019 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Systems and Information Engineering Design Symposium (SIEDS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIEDS.2019.8735608","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

In overhead object detection, computers are increasingly replacing humans at spotting and identifying specific items within images through the use of machine learning (ML). These ML programs must be both accurate and robust. Accuracy means the results must be trusted enough to substitute for the manual deduction process. Robustness is the degree to which the network can handle discrepancies within the images. One way to gauge robustness is through the use of adversarial networks. Adversarial algorithms are trained to perturb images so as to reduce the accuracy of an existing classification model. The greater the degree of perturbation a model can withstand, the more robust it is. In this paper, comparisons of existing deep neural network models and the advancement of adversarial AI are explored. While there is some published research on AI and adversarial networks, very little discusses this particular application to overhead imagery. This paper focuses on overhead imagery, specifically that of ships. Using a public Kaggle dataset, we developed multiple models to detect ships in overhead imagery, specifically ResNet50, DenseNet201, and InceptionV3. The goal of the adversarial work is to manipulate an image so that its contents are misclassified. This paper focuses specifically on producing perturbations that can be recreated in the physical world. This serves to account for physical conditions, whether intentional or not, that could reduce the accuracy of our network. While there are military applications for this specific research, the general findings can be applied to all AI overhead image classification topics. This work explores both the vulnerabilities of existing classifier neural network models and the visualization of these vulnerabilities.
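The abstract does not reproduce the paper's training or attack code. As a rough illustration of the kind of pipeline it describes, the sketch below adapts a pretrained ResNet50 into a binary ship/no-ship classifier head and then crafts a fast-gradient-sign (FGSM) perturbation against it. FGSM, the file path, the label convention, and all hyperparameters are assumptions for illustration only; the paper does not state which attack or settings were used.

```python
# Hypothetical sketch: binary ship classifier + FGSM perturbation.
# The attack method (FGSM) and every hyperparameter here are assumptions;
# the paper does not specify how its perturbations were generated.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pretrained ResNet50 with its final layer replaced for ship / no-ship.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model = model.to(device).eval()  # assume the new head was fine-tuned elsewhere

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # keeps pixels in [0, 1]
])

def fgsm_perturb(image_path: str, true_label: int, epsilon: float = 0.03):
    """Return an adversarially perturbed copy of the image chip (FGSM)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    x.requires_grad_(True)

    # Loss of the current (correct) label; its gradient tells us which
    # pixel changes push the model toward a misclassification.
    logits = model(x)
    loss = F.cross_entropy(logits, torch.tensor([true_label], device=device))
    loss.backward()

    # Step in the direction that increases the loss, clipped to valid pixels.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv

# Example: perturb a chip labelled "ship" (label 1) and compare predictions.
# adv = fgsm_perturb("ship_chip.png", true_label=1, epsilon=0.03)
```

Physically realizable attacks of the kind the abstract mentions generally add further constraints on top of such a digital perturbation (for example printability and robustness to viewpoint and lighting), so this sketch illustrates only the basic digital step.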