Fusion of Multiple Sensor Data to Recognise Moving Objects in Wide Area Motion Imagery
S. Fehlmann, C. Pontecorvo, D. Booth, P. Janney, Robert Christie, N. Redding, Mike Royce, Merrilyn J. Fiebig
2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), November 2014
DOI: 10.1109/DICTA.2014.7008110
Citations: 4
Abstract
This work addresses the problem of extracting semantics from multiple, cooperatively managed motion imagery sensors to support indexing and search of large imagery collections. The extracted semantics relate to the motion and identity of vehicles within a scene, viewed from aircraft and the ground. Semantic extraction required three steps: Video Moving Target Indication (VMTI), imagery fusion, and object recognition. VMTI used a previously published algorithm, with some novel modifications allowing detection and tracking in low-frame-rate Wide Area Motion Imagery (WAMI) and in Full Motion Video (FMV). Following this, the data from multiple sensors were fused to identify the highest-resolution image corresponding to each moving object. A final recognition stage attempted to fit each delineated object to a database of 3D models to determine its type. A proof-of-concept was developed to process imagery collected during a recent experiment using a state-of-the-art airborne surveillance sensor providing WAMI, with coincident narrower-area FMV sensors and simultaneous collection by a ground-based camera. An indication of the potential utility of the system was obtained using ground-truthed examples.
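The fusion stage described above selects, for each moving object, the single highest-resolution view available across the WAMI, FMV, and ground sensors. A minimal sketch of that selection step is given below; all names, the `Detection` structure, and the use of ground sample distance as the resolution criterion are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's observation of a tracked moving object (hypothetical).

    resolution_m_per_px is the ground sample distance in metres per
    pixel; a smaller value means a finer (higher-resolution) image.
    """
    object_id: str
    sensor: str
    resolution_m_per_px: float

def fuse_highest_resolution(detections):
    """For each object ID, keep the observation with the finest
    resolution across all contributing sensors."""
    best = {}
    for d in detections:
        current = best.get(d.object_id)
        if current is None or d.resolution_m_per_px < current.resolution_m_per_px:
            best[d.object_id] = d
    return best

# Example: the same vehicle seen by WAMI, a narrower-area FMV sensor,
# and a ground-based camera; the ground view is the finest here.
dets = [
    Detection("veh1", "WAMI", 0.50),
    Detection("veh1", "FMV", 0.10),
    Detection("veh1", "ground", 0.02),
]
print(fuse_highest_resolution(dets)["veh1"].sensor)  # -> ground
```

The fused chip chosen this way would then be passed to the recognition stage for matching against the 3D model database.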