Title: Spatio-Temporal Semantic Segmentation for Drone Detection
Authors: Céline Craye, Salem Ardjoune
Published in: 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)
Publication date: 2019-09-01
DOI: 10.1109/AVSS.2019.8909854
Citations: 39
Abstract
The democratization of drones over the past decade has opened wide cracks in airspace security. Research on drone detection and neutralization for critical infrastructures is a very active area with a number of open issues, such as robust detection of drones from opto-electronic imaging. Indeed, a drone at some distance occupies only a few pixels in an image, even with a high-resolution camera, and can easily be mistaken for a bird or any other flying object in the airspace. In this context, we propose a spatio-temporal semantic segmentation approach based on convolutional neural networks. We handle the problem of detecting very small targets by using a U-Net architecture to identify areas of interest within the larger image. Then, we use a classification network, ResNet, to determine whether those areas contain a drone. To further help the localization and classification process, we provide spatio-temporal input patches to our networks. Drones are mostly moving targets, and birds do not follow the same kinds of trajectories; this additional feature therefore significantly increases overall performance. This work was carried out in the context of the 2019 Drone-vs-Bird Detection Challenge. The evaluation is conducted on the provided dataset under several configurations.
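The two-stage pipeline described above — build a spatio-temporal input from consecutive frames, run a segmentation stage to flag areas of interest, then crop fixed-size patches around those areas for a second-stage classifier — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the helper names (`stack_frames`, `candidate_patches`), the patch size of 32, the 5-frame window, and the 0.5 heatmap threshold are all assumptions for the example; the real system would replace the heatmap input with the U-Net's output and feed the patches to a ResNet.

```python
import numpy as np

def stack_frames(frames):
    """Stack T consecutive grayscale frames into one (T, H, W)
    spatio-temporal input, so motion is encoded across channels."""
    return np.stack(frames, axis=0)

def candidate_patches(heatmap, frames_st, patch=32, thresh=0.5):
    """Crop fixed-size spatio-temporal patches around above-threshold
    heatmap locations; each patch would go to the classification stage.

    heatmap   : (H, W) segmentation scores in [0, 1] (e.g. U-Net output)
    frames_st : (T, H, W) stacked frames from stack_frames()
    """
    T, H, W = frames_st.shape
    ys, xs = np.where(heatmap > thresh)
    patches = []
    for y, x in zip(ys, xs):
        # Center the crop on the detection, clamped to the image border.
        y0 = min(max(y - patch // 2, 0), H - patch)
        x0 = min(max(x - patch // 2, 0), W - patch)
        patches.append(frames_st[:, y0:y0 + patch, x0:x0 + patch])
    return patches
```

Stacking frames as channels is one simple way to expose trajectory cues (a drone's steady motion versus a bird's erratic path) to ordinary 2D convolutional networks without resorting to 3D convolutions.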