Erik Blasch, G. Seetharaman, K. Palaniappan, Haibin Ling, Genshe Chen
Title: Wide-area motion imagery (WAMI) exploitation tools for enhanced situation awareness
DOI: 10.1109/AIPR.2012.6528198 (https://doi.org/10.1109/AIPR.2012.6528198)
Published in: 2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)
Publication date: 2012-10-09
Cited by: 72
Abstract
The advent of streaming feeds of full-motion video (FMV) and wide-area motion imagery (WAMI) has overloaded an image analyst's capacity to detect patterns, movements, and patterns of life. To aid in the process of WAMI exploitation, we explore computer vision and pattern recognition methods to cue the user to salient information. For enhanced exploitation and analysis, there is a need to develop WAMI methods for situation awareness. Computer vision algorithms provide cues, contexts, and communication patterns to enhance exploitation capabilities. Multi-source data fusion using exploitation context from the video needs to be linked to semantically extracted elements for situation awareness to aid an operator in rapid image understanding. In this paper, we (1) identify opportunities from computer vision techniques to improve WAMI target tracking, (2) relate developments in clustering methods for activity-based intelligence and stochastic context-free grammars for accessing, indexing, and linking relevant information to assist processing and exploitation, and (3) address situation awareness methods of multi-intelligence collaboration for future automated video understanding techniques. Our example uses the open-source Columbus Large Image Format (CLIF) WAMI data to demonstrate the connection of video-based semantic labeling with other information fusion enterprise capabilities incorporating text-based semantic extraction.