Deep Learning Based Malicious Drone Detection Using Acoustic and Image Data
Juann Kim, Dong-Whan Lee, Youngseop Kim, Heeyeon Shin, Yeeun Heo, Yaqin Wang, E. Matson
2022 Sixth IEEE International Conference on Robotic Computing (IRC), December 2022
DOI: 10.1109/IRC55401.2022.00024
Citations: 0
Abstract
Drones have been studied in a variety of industries, and drone detection is one of the most important tasks. The goal of this paper is to detect a target drone using the microphone and camera of a detecting drone by training deep learning models. Three evaluation methods are compared: visual-based, audio-based, and decision fusion of both features. Image and audio data were collected from the detecting drone by flying the two drones at a fixed distance of 20 m. A CNN (Convolutional Neural Network) was used for the audio data, and YOLOv5 was used for computer vision. The decision fusion of audio- and vision-based features achieved the highest accuracy among the three evaluation methods.
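The abstract reports that decision fusion of the audio and vision outputs outperformed either modality alone, but does not state the fusion rule. The sketch below assumes a simple confidence-weighted average of the two per-frame scores as an illustration; the function name, weights, and threshold are hypothetical, not taken from the paper.

```python
# Hedged sketch of decision fusion for drone detection. Assumes the audio
# CNN emits a softmax probability and YOLOv5 emits a detection confidence
# (0.0 when no box is found); the weighted-average rule is an assumption,
# not the paper's documented method.

def fuse_decisions(audio_prob: float, vision_conf: float,
                   w_audio: float = 0.5, threshold: float = 0.5) -> bool:
    """Return True if the fused score indicates a drone is present."""
    fused = w_audio * audio_prob + (1.0 - w_audio) * vision_conf
    return fused >= threshold

# Example: a strong acoustic cue can compensate for a weak visual one.
print(fuse_decisions(0.9, 0.3))   # fused score 0.6 -> True
print(fuse_decisions(0.1, 0.2))   # fused score 0.15 -> False
```

A weighted average is one of the simplest fusion rules; alternatives such as a logical OR over per-modality thresholds, or a small learned classifier over both scores, are also common in multimodal detection.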