{"title":"Vessel detection in video with dynamic maritime background","authors":"Michael T. Chan, C. Weed","doi":"10.1109/AIPR.2012.6528222","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528222","url":null,"abstract":"Automating the detection of non-cooperative vessels in surveillance video is challenging. First, the detection algorithm has to handle a large degree of appearance variation of vessels with respect to shape, size and viewing geometry. Second, a unique challenge in the maritime domain is the presence of sea clutter, which can cause a high number of false detections. While recent research in object detection has largely been focused on objects on the ground, we have developed a layered detection algorithm to address challenges in the maritime domain by fusing cues from (1) a discriminative detection algorithm that learns a vessel target model from hundreds of vessel images, and (2) a dynamic texture-based background model that adaptively learns the spatiotemporal dynamics of sea clutter. We present results on how each layer of the algorithms was individually optimized, and how their outputs were fused. Initial results were promising showing a significantly lower false alarm rate than when only the target model was applied. The proposed approach has applications in port, coastal and waterway surveillance.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125757401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Template matching localization for GPS denied environments","authors":"Steven C. Rowe, Pritpaul Mahal, Lucas Burkowski, G. Beach, C. Cohen","doi":"10.1109/AIPR.2012.6528208","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528208","url":null,"abstract":"This paper describes our solution to LADAR based robot localization in a GPS denied environment where there is only one consistent feature. We detail the type of data received, how we define a doorway and gap, the line extraction method, and doorway template matching with simulated annealing solution.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132975416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Obtaining accurate change detection results from high-resolution satellite sensors","authors":"N. Bryant, W. Bunch, R. Fretz, P. Kim, T. Logan, M. Smyth, A. Zobrist","doi":"10.1109/AIPR.2012.6528199","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528199","url":null,"abstract":"Multi-date acquisitions of high-resolution imaging satellites (e.g. GeoEye and WorldView), can display local changes of current economic interest. However, their large data volume precludes effective manual analysis, requiring image co-registration followed by image-to-image change detection, preferably with minimal analyst attention. We have recently developed an automatic change detection procedure that minimizes false-positives. The processing steps include: (a) Conversion of both the pre- and post- images to reflectance values (this step is of critical importance when different sensors are involved); reflectance values can be either top-of-atmosphere units or have full aerosol optical depth calibration applied using bi-directional reflectance knowledge. (b) Panchromatic band image-to-image co-registration, using an orthorectified base reference image (e.g. Digital Orthophoto Quadrangle) and a digital elevation model; this step can be improved if a stereo-pair of images have been acquired on one of the image dates. (c) Pan-sharpening of the multispectral data to assure recognition of change objects at the highest resolution. (d) Characterization of multispectral data in the post-image (i.e. the background) using unsupervised cluster analysis. (e) Band ratio selection in the post-image to separate surface materials of interest from the background. (f) Preparing a pre-to-post change image. (g) Identifying locations where change has occurred involving materials of interest.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126137636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new approach to graph analysis for activity based intelligence","authors":"William Raetz","doi":"10.1109/AIPR.2012.6528204","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528204","url":null,"abstract":"This paper investigates the potential of migrating activity metadata to graph space, applying graph analysis to activity metadata, and investigates a method of reducing the time derivative of activity to a small set of time values for use in comparison and activity pattern recognition.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114685382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Future image frame generation using Artificial Neural Network with selected features","authors":"N. Verma","doi":"10.1109/AIPR.2012.6528189","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528189","url":null,"abstract":"This paper presents a novel approach for the generation of Future image frames using Artificial Neural Network (ANN) on spatiotemporal framework. The input to this network are hyper-dimensional color and spatiotemporal features of every pixel of an image in an image sequence. Principal Component Analysis, Mutual Information, Interaction Information and Bhattacharyya Distance measure based feature selection techniques have been used to reduce the dimensionality of the feature set. The pixel values of an image frame are predicted using a simple ANN back propagation algorithm. The ANN network is trained for R, G and B values for each and every pixel in an image frame. The resulting model is successfully applied on an image sequence of a landing fighter plane. As Mentioned above four feature selection techniques are used to compare the performance of the proposed ANN model. The quality of the generated future image frames is assessed using, Canny edge detection based Image Comparison Metric(CIM) and Mean Structural Similarity Index Measure(MSSIM) image quality measures. The proposed approach is found to have generated six future image frames successfully with acceptable quality of images.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132423603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic ship hull inspection using fuzzy logic","authors":"J. Contreras, William Cuadrado, David Muñoz, George Archbold, Geraldine Delgado, V. Diaz","doi":"10.1109/AIPR.2012.6528214","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528214","url":null,"abstract":"This article presents the methodology to reconstruct images of ship hulls in turbid waters from an information gathered to a system composed of video camera, laser line-point and sonar scanning, which were incorporated into an underwater vehicle that has a navigation control. The acquired data is processed by diffuse algorithms that have demonstrated low computational complexity and high efficiency in image reconstruction. We present the results of image reconstruction of ship hulls in low visibility conditions.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127842384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Registration of region of interest for object tracking applications in wide area motion imagery","authors":"K. Jackovitz, V. Asari, E. Balster, J. Vasquez, P. Hytla","doi":"10.1109/AIPR.2012.6528202","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528202","url":null,"abstract":"Image registration or stabilization is a task that has been focused on in many fields in image processing. Many methods are available for registering two images, but when dealing with wide area motion imagery, registration of the full scene can be taxing computationally. The proposed algorithm is an application utilizing two existing registration methods to stabilize a specific region of interest in an aerial image database. The registration tools implemented during this application are a phase correlation method and the efficient second-order minimization method. The combination of both registration functions act as a coarse-to-fine image registration algorithm. The goal of this application is to output a set of registered smaller sized images from the larger dataset for the use in detection and tracking of objects in wide area imagery. Experiments performed on several image streams show that the proposed technique is effective and gives high registration accuracy.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129804043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image search system","authors":"P. Cho, Michael Yee","doi":"10.1109/AIPR.2012.6528193","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528193","url":null,"abstract":"We present a prototype system which enables users to explore the global structure for digital imagery archives as well as drill-down into individual pictures. Our search engine builds upon computer vision advances made over the past decade in low-level feature matching, large data handling and object recognition. We demonstrate hierarchical clustering among images semi-cooperatively shot around MIT, automatic linking of flickr photos and aerial frames from the Grand Canyon, and video segment identification for a TV broadcast. Moreover, our software tools incorporate visible vs infrared band selection, color content quantization and human face detection. Ongoing and future extensions of this image search system are discussed.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122615230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed adaptive spectral and spatial sensor fusion for super-resolution classification","authors":"T. Khuon, R. Rand, J. Greer, E. Truslow","doi":"10.1109/AIPR.2012.6528194","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528194","url":null,"abstract":"A distributed architecture for adaptive sensor fusion (a multisensor fusion neural net) is introduced for 3D imagery data that makes use of a super-resolution technique computed with a Bregman-Iteration deconvolution algorithm. This architecture is a cascaded neural network, which consists of two levels of neural networks. The first level consists of sensor networks: two independent sensor neural nets, namely, a spatial neural net and spectral neural net. The second level is a fusion neural net, which contains a single neural net that combines the information from the sensor level. The inputs to the sensor networks are obtained from unsupervised spatial and spectral segmentation algorithms that can be applied to the original imagery or imagery enhanced by a proposed super-resolution process. Spatial segmentation is obtained by a mean-shift method and spectral segmentation is obtained by a Stochastic Expectation Maximization method. The decision outputs from the sensor nets are used to train the fusion net to a specific overall decision. The overall approach is tested with an experiment involving a multi-sensor airborne collection of LIDAR and Hyperspectral data over a university campus in Gulfport MS. The success of the system in utilizing sensor synergism for an enhanced classification is clearly demonstrated. The final class map contains the geographical classes as well as the signature classes.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"515 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116216587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion of LIDAR data with hyperspectral and high-resolution imagery for automation of DIRSIG scene generation","authors":"Ryan N. Givens, K. Walli, M. Eismann","doi":"10.1109/AIPR.2012.6528191","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528191","url":null,"abstract":"Developing new remote sensing instruments is a costly and time consuming process. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model gives users the ability to create synthetic images for a proposed sensor before building it. However, to produce synthetic images, DIRSIG requires facetized, three-dimensional models attributed with spectral and texture information which can themselves be costly and time consuming to produce. Recent work by Walli has shown that coincident LIDAR data and high-resolution imagery can be registered and used to automatically generate the geometry and texture information needed for a DIRSIG scene. This method, called LIDAR Direct, greatly reduces the time and manpower needed to generate a scene, but still requires user interaction to attribute facets with either library or field measured spectral information. This paper builds upon that work and presents a method for autonomously generating the geometry, texture, and spectral content for a scene when coincident LIDAR data, high-resolution imagery, and HyperSpectral Imagery (HSI) of a site are available. Then the method is demonstrated on real data.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127173733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}