Evasion Attacks on Deep Learning-Based Helicopter Recognition Systems

Jun Lee, Taewan Kim, Seungho Bang, Sehong Oh, Hyun Kwon
{"title":"对基于深度学习的直升机识别系统的规避攻击","authors":"Jun Lee, Taewan Kim, Seungho Bang, Sehong Oh, Hyun Kwon","doi":"10.1155/2024/1124598","DOIUrl":null,"url":null,"abstract":"Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of employing deep learning models for the recognition of enemy weapon systems. These systems, leveraging deep neural networks known for their strong performance in image recognition and classification, are currently under extensive research. However, it is crucial to acknowledge that surveillance and reconnaissance systems utilizing deep neural networks are susceptible to vulnerabilities posed by adversarial examples. While prior adversarial example research has mainly utilized publicly available internet data, there has been a significant absence of studies concerning adversarial attacks on data and models specific to real military scenarios. In this paper, we introduce an adversarial example designed for a binary classifier tasked with recognizing helicopters. Our approach generates an adversarial example that is misclassified by the model, despite appearing unproblematic to the human eye. To conduct our experiments, we gathered real attack and transport helicopters and employed TensorFlow as the machine learning library of choice. Our experimental findings demonstrate that the average attack success rate of the proposed method is 81.9%. Additionally, when epsilon is 0.4, the attack success rate is 90.1%. Before epsilon reaches 0.4, the attack success rate increases rapidly, and then we can see that epsilon increases little by little thereafter.","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evasion Attacks on Deep Learning-Based Helicopter Recognition Systems\",\"authors\":\"Jun Lee, Taewan Kim, Seungho Bang, Sehong Oh, Hyun Kwon\",\"doi\":\"10.1155/2024/1124598\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of employing deep learning models for the recognition of enemy weapon systems. These systems, leveraging deep neural networks known for their strong performance in image recognition and classification, are currently under extensive research. However, it is crucial to acknowledge that surveillance and reconnaissance systems utilizing deep neural networks are susceptible to vulnerabilities posed by adversarial examples. While prior adversarial example research has mainly utilized publicly available internet data, there has been a significant absence of studies concerning adversarial attacks on data and models specific to real military scenarios. In this paper, we introduce an adversarial example designed for a binary classifier tasked with recognizing helicopters. Our approach generates an adversarial example that is misclassified by the model, despite appearing unproblematic to the human eye. To conduct our experiments, we gathered real attack and transport helicopters and employed TensorFlow as the machine learning library of choice. Our experimental findings demonstrate that the average attack success rate of the proposed method is 81.9%. Additionally, when epsilon is 0.4, the attack success rate is 90.1%. 
Before epsilon reaches 0.4, the attack success rate increases rapidly, and then we can see that epsilon increases little by little thereafter.\",\"PeriodicalId\":1,\"journal\":{\"name\":\"Accounts of Chemical Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.4000,\"publicationDate\":\"2024-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accounts of Chemical Research\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1155/2024/1124598\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1155/2024/1124598","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of employing deep learning models for the recognition of enemy weapon systems. These systems, leveraging deep neural networks known for their strong performance in image recognition and classification, are currently under extensive research. However, it is crucial to acknowledge that surveillance and reconnaissance systems utilizing deep neural networks are susceptible to vulnerabilities posed by adversarial examples. While prior adversarial example research has mainly used publicly available internet data, studies of adversarial attacks on data and models specific to real military scenarios have been largely absent. In this paper, we introduce an adversarial example designed for a binary classifier tasked with recognizing helicopters. Our approach generates an adversarial example that the model misclassifies even though it appears unproblematic to the human eye. For our experiments, we gathered images of real attack and transport helicopters and used TensorFlow as the machine learning library. Our experimental findings show that the average attack success rate of the proposed method is 81.9%; when epsilon is 0.4, the attack success rate is 90.1%. The attack success rate rises rapidly as epsilon increases toward 0.4 and grows only gradually thereafter.
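The abstract does not name the attack algorithm, but the single epsilon parameter controlling perturbation strength is characteristic of the fast gradient sign method (FGSM). The sketch below is a minimal illustration under that assumption: it perturbs each image in the direction of the sign of the loss gradient and sweeps epsilon to measure the attack success rate. The `model` is a hypothetical Keras binary helicopter classifier with a sigmoid output, and pixel values are assumed scaled to [0, 1]; none of these details are confirmed by the paper itself.

```python
import tensorflow as tf

def fgsm_attack(model, images, labels, epsilon):
    """FGSM-style perturbation: x_adv = clip(x + epsilon * sign(dL/dx)).

    Assumes `model` outputs a sigmoid probability for a binary task
    (e.g., attack vs. transport helicopter) and that `images` are
    float tensors scaled to [0, 1]. These are illustrative assumptions.
    """
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    labels = tf.convert_to_tensor(labels, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        probs = model(images, training=False)
        loss = tf.keras.losses.binary_crossentropy(labels, probs)
    grad = tape.gradient(loss, images)
    adv = images + epsilon * tf.sign(grad)  # step in the direction that increases the loss
    return tf.clip_by_value(adv, 0.0, 1.0)  # keep pixels in the valid range

def attack_success_rate(model, images, labels, epsilon):
    """Fraction of adversarial examples the classifier gets wrong."""
    adv = fgsm_attack(model, images, labels, epsilon)
    preds = tf.cast(model(adv, training=False) > 0.5, tf.float32)
    return tf.reduce_mean(tf.cast(tf.not_equal(preds, labels), tf.float32))

# Sweeping epsilon would trace the reported trend: success climbs quickly
# up to about 0.4 (90.1% in the paper) and only gradually beyond that.
# for eps in (0.1, 0.2, 0.3, 0.4, 0.5):
#     rate = attack_success_rate(model, x_test, y_test, eps)
#     print(f"epsilon={eps:.1f}  success={rate:.1%}")
```

A larger epsilon makes misclassification more likely but also makes the perturbation more visible, which is the trade-off behind the paper's constraint that the adversarial image should still look unproblematic to the human eye.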