Jun Lee, Taewan Kim, Seungho Bang, Sehong Oh, Hyun Kwon
{"title":"对基于深度学习的直升机识别系统的规避攻击","authors":"Jun Lee, Taewan Kim, Seungho Bang, Sehong Oh, Hyun Kwon","doi":"10.1155/2024/1124598","DOIUrl":null,"url":null,"abstract":"Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of employing deep learning models for the recognition of enemy weapon systems. These systems, leveraging deep neural networks known for their strong performance in image recognition and classification, are currently under extensive research. However, it is crucial to acknowledge that surveillance and reconnaissance systems utilizing deep neural networks are susceptible to vulnerabilities posed by adversarial examples. While prior adversarial example research has mainly utilized publicly available internet data, there has been a significant absence of studies concerning adversarial attacks on data and models specific to real military scenarios. In this paper, we introduce an adversarial example designed for a binary classifier tasked with recognizing helicopters. Our approach generates an adversarial example that is misclassified by the model, despite appearing unproblematic to the human eye. To conduct our experiments, we gathered real attack and transport helicopters and employed TensorFlow as the machine learning library of choice. Our experimental findings demonstrate that the average attack success rate of the proposed method is 81.9%. Additionally, when epsilon is 0.4, the attack success rate is 90.1%. Before epsilon reaches 0.4, the attack success rate increases rapidly, and then we can see that epsilon increases little by little thereafter.","PeriodicalId":48792,"journal":{"name":"Journal of Sensors","volume":"283 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evasion Attacks on Deep Learning-Based Helicopter Recognition Systems\",\"authors\":\"Jun Lee, Taewan Kim, Seungho Bang, Sehong Oh, Hyun Kwon\",\"doi\":\"10.1155/2024/1124598\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of employing deep learning models for the recognition of enemy weapon systems. These systems, leveraging deep neural networks known for their strong performance in image recognition and classification, are currently under extensive research. However, it is crucial to acknowledge that surveillance and reconnaissance systems utilizing deep neural networks are susceptible to vulnerabilities posed by adversarial examples. While prior adversarial example research has mainly utilized publicly available internet data, there has been a significant absence of studies concerning adversarial attacks on data and models specific to real military scenarios. In this paper, we introduce an adversarial example designed for a binary classifier tasked with recognizing helicopters. Our approach generates an adversarial example that is misclassified by the model, despite appearing unproblematic to the human eye. To conduct our experiments, we gathered real attack and transport helicopters and employed TensorFlow as the machine learning library of choice. Our experimental findings demonstrate that the average attack success rate of the proposed method is 81.9%. Additionally, when epsilon is 0.4, the attack success rate is 90.1%. 
Before epsilon reaches 0.4, the attack success rate increases rapidly, and then we can see that epsilon increases little by little thereafter.\",\"PeriodicalId\":48792,\"journal\":{\"name\":\"Journal of Sensors\",\"volume\":\"283 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Sensors\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1155/2024/1124598\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Sensors","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1155/2024/1124598","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Evasion Attacks on Deep Learning-Based Helicopter Recognition Systems
Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of employing deep learning models for the recognition of enemy weapon systems. These systems, leveraging deep neural networks known for their strong performance in image recognition and classification, are currently under extensive research. However, it is crucial to acknowledge that surveillance and reconnaissance systems utilizing deep neural networks are vulnerable to adversarial examples. While prior adversarial example research has mainly utilized publicly available internet data, there has been a significant absence of studies concerning adversarial attacks on data and models specific to real military scenarios. In this paper, we introduce an adversarial example designed for a binary classifier tasked with recognizing helicopters. Our approach generates an adversarial example that is misclassified by the model, despite appearing unproblematic to the human eye. To conduct our experiments, we gathered images of real attack and transport helicopters and employed TensorFlow as the machine learning library. Our experimental findings demonstrate that the average attack success rate of the proposed method is 81.9%. Additionally, when epsilon is 0.4, the attack success rate is 90.1%. The attack success rate rises rapidly until epsilon reaches 0.4 and increases only gradually thereafter.
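The abstract describes an epsilon-scaled perturbation applied to a binary helicopter classifier built with TensorFlow. The paper does not publish its attack code, so the following is only a minimal sketch of a gradient-sign-style (FGSM) perturbation, assuming a Keras binary classifier with inputs normalized to [0, 1]; the function and variable names are illustrative, not taken from the paper.

```python
import tensorflow as tf

def generate_adversarial_example(model, image, label, epsilon=0.4):
    """Illustrative FGSM-style perturbation for a binary classifier.

    Assumption: the attack follows the fast gradient sign method, which
    matches the epsilon-based description in the abstract but is not
    confirmed as the authors' exact procedure.
    """
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    label = tf.convert_to_tensor(label, dtype=tf.float32)
    loss_fn = tf.keras.losses.BinaryCrossentropy()

    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)

    # Step in the direction that increases the classification loss.
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)

    # Keep pixel values in the valid [0, 1] range.
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```

Under this reading, larger epsilon values produce stronger perturbations and higher misclassification rates, consistent with the reported jump in attack success rate up to epsilon = 0.4.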
Journal of Sensors (ENGINEERING, ELECTRICAL & ELECTRONIC; INSTRUMENTS & INSTRUMENTATION)
CiteScore: 4.10
Self-citation rate: 5.30%
Articles published: 833
Review time: 18 weeks
About the journal:
Journal of Sensors publishes papers related to all aspects of sensors, from their theory and design, to the applications of complete sensing devices. All classes of sensor are covered, including acoustic, biological, chemical, electronic, electromagnetic (including optical), mechanical, proximity, and thermal. Submissions relating to wearable, implantable, and remote sensing devices are encouraged.
Envisaged applications include, but are not limited to:
-Medical, healthcare, and lifestyle monitoring
-Environmental and atmospheric monitoring
-Sensing for engineering, manufacturing and processing industries
-Transportation, navigation, and geolocation
-Vision, perception, and sensing for robots and UAVs
The journal welcomes articles that consider, in addition to the sensor technology itself, the practical aspects of modern sensor implementation, such as networking, communications, signal processing, and data management.
As well as original research, the Journal of Sensors also publishes focused review articles that examine the state of the art, identify emerging trends, and suggest future directions for developing fields.