Improved Detection for WAMI using Background Contextual Information
Elena M. Vella, Anee Azim, H. Gaetjens, Boris Repasky, Timothy Payne
2019 Digital Image Computing: Techniques and Applications (DICTA), pp. 1-9, December 2019
DOI: 10.1109/DICTA47822.2019.8945924
Citations: 3
Abstract
Current vehicle detection and tracking in imagery characterised by large ground coverage, low resolution and low frame rate, such as Wide Area Motion Imagery (WAMI), does not reliably sustain vehicle tracks through stop-start movement profiles. This limits the continuity of tracks and their usefulness in higher-level analysis such as pattern-of-behaviour or activity analysis. We develop and implement a two-step registration method to create well-registered images, which are used to generate a novel low-noise representation of the static background context that is fed into our Context Convolutional Neural Network (C-CNN) detector. This network is unique in that the C-CNN learns changing features in the scene and thus produces reliable, sustained vehicle detection independent of motion. A quantitative evaluation against WAMI imagery is presented for a Region of Interest (ROI) of the WPAFB 2009 annotated dataset [1]. We apply a Kalman filter tracker with WAMI-specific adaptations to the single-frame C-CNN detections, and evaluate the results with respect to the tracking ground truth. We show improved detection and sustained tracking in WAMI using static background contextual information, and reliably detect all vehicles that move, including vehicles that become stationary for short periods of time as they pass through stop-start manoeuvres.
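The core idea of the abstract — building a low-noise static background from co-registered frames and pairing it with the current frame as contextual input to a detector — can be sketched as follows. This is a minimal illustration only: the function names and the use of a per-pixel temporal median as the background estimator are assumptions, not the paper's actual method.

```python
import numpy as np

def static_background(registered_frames):
    """Estimate a low-noise static background as the per-pixel median
    over a stack of co-registered frames. The median suppresses
    transient movers, leaving the static scene context.
    (Hypothetical stand-in for the paper's background representation.)"""
    stack = np.stack(registered_frames, axis=0)  # shape (T, H, W)
    return np.median(stack, axis=0)              # shape (H, W)

def context_input(frame, background):
    """Stack the current frame with the static background along the
    channel axis, so a detector can learn changes relative to the
    scene context rather than raw appearance alone."""
    return np.stack([frame, background], axis=-1)  # shape (H, W, 2)
```

In this sketch the two-channel array would be the input to a detector analogous to the C-CNN; the registration step that aligns the frames beforehand is assumed to have already been applied.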