{"title":"Focal plane array folding for efficient information extraction and tracking","authors":"L. Hamilton, D. Parker, Chris Yu, P. Indyk","doi":"10.1109/AIPR.2012.6528209","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528209","url":null,"abstract":"We develop a novel compressive sensing based approach for detecting point sources in images and tracking of moving point sources across temporal images. One application is the muzzle flash detection and tracking problem. We pursue the concept of lower-dimension signal representation from structured sparse matrices, which is in contrast to the use of random sparse matrices described in common compressive sensing algorithms. The primary motivation is that an approach using structured sparse matrices can lead to efficient hardware implementations and a scheme that we term folding in the focal plane array. This method “bins” pixels modulo a pair of specified numbers across the pixel plane in both the horizontal and vertical directions. Under this paradigm, a significant reduction in the amount of pixel samples is required, which enable high speed target acquisition and tracking while reducing the number of A/D's. Folding is used to acquire a pair of significantly smaller images, in which two different folded images provide the necessary redundancy to uniquely extract location information. We detect the centroid of point sources in each of the two folded images and use the Chinese remainder theorem (CRT) to determine the location of the point sources in the original image. In our work, we successfully demonstrated the correctness of this algorithm through simulation and showed the algorithm is capable of detecting and tracking multiple muzzle flashes in multiple temporal frames. We present both initial results and improvements to the algorithm's robustness, based on robust Chinese remainder theorem (rCRT) in the presence of noise.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132767409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evidence filtering in a sequence of images for recognition","authors":"Sukhan Lee, M. Ilyas, Jaewoong Kim, A. Naguib","doi":"10.1109/AIPR.2012.6528203","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528203","url":null,"abstract":"In recognizing a target object/entity with its attribute such as pose from images, the evidences extracted initially may be uncertain and/or ambiguous as they can only be defined probabilistically and/or do not satisfy the sufficient condition for recognition. These uncertainties and ambiguities associated with evidences are often due as much to the external, uncontrollable causes, such as the variation of illumination and texture distributions in the scene, as to the quality of the imaging tools used. This paper presents a method of filtering the uncertain and ambiguous evidences obtained from a sequence of images in such a way as to reach a reliable decision level for recognition. First, at each of the image sequence, a number of weak evidences are generated using 3D line, 3D shape descriptor and SIFT which may be ambiguous and/or uncertain to decide recognition quickly and reliably. To reach a faithful recognition, we need to enrich these evidences by Appearance vector and generate multiple interpretations of the target object with higher weights. We incorporate prior established Bayesian Evidence structure which embodied sufficient condition for recognition, to generate such interpretations. Furthermore, when robot moves, we do active recognition using particle filter framework in sequence of images to produce interpretation with highest weight and lowest error covariance. This paper provides readers with the details of the implementation and experimental results of Evidence Filtering in image sequences using Particle filter in HomeMate Robot application.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125219734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization of scene structure uncertainty in multi-view reconstruction","authors":"S. Recker, Mauricio Hess-Flores, M. Duchaineau, K. Joy","doi":"10.1109/AIPR.2012.6528216","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528216","url":null,"abstract":"This paper presents an interactive visualization system, based upon previous work, that allows for the analysis of scene structure uncertainty and its sensitivity to parameters in different multi-view scene reconstruction stages. Given a set of input cameras and feature tracks, the volume rendering-based approach creates a scalar field from reprojection error measurements. The obtained statistical, visual, and isosurface information provides insight into the sensitivity of scene structure at the stages leading up to structure computation, such as frame decimation, feature tracking, and self-calibration. Furthermore, user interaction allows for such an analysis in ways that have traditionally been achieved mathematically, without any visual aid. Results are shown for different types of camera configurations for real and synthetic data as well as compared to prior work.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127531023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polarization Imaging for crystallographic orientation of large mercurous halide crystals","authors":"F. Jin, Joo-Soo Kim, S. Kutcher, Emir Y. Haskovic, D. Meyers, J. Soos, S. Trivedi, N. Gupta","doi":"10.1109/AIPR.2012.6528206","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528206","url":null,"abstract":"Polarization Imaging is a useful technique to optically determine the orientation of optic axis of birefringent crystals by examining the interference patterns produced in convergent polarized light by the crystal. We developed a polariscope, also known as a conoscope to characterize large mercurous bromide (Hg2Br2) crystals. Such crystals have large birefringence and they are transparent from 0.35 to 30 micron. They are very useful in designing Acousto-Optic Tunable Filters (AOTFs) for multi-spectral and hyperspectral imaging applications, especially in the strategic Long Wavelength Infrared (LWIR) atmospheric window covering 8 to 12 mm. Fabrication of an efficient LWIR AOTF in Hg2Br2 crystal requires knowledge of precise crystallographic orientation of the crystal. We have grown 2-inch in diameter and 2-inch long Hg2Br2 crystals, by vapor phase technique. The Laue x-ray diffraction technique is difficult in the case of this material, especially for large as grown crystals, due to absorption and x-ray induced fluorescence. Conoscopy is a good technique to verify optic and other axes directions and is complimentary to the x-ray diffraction method used for precise crystallographic orientation. We are reporting here, use of a combination of conoscopy, x-ray diffraction, and the birefringent property of Hg2Br2 to identify the optic and other axes directions in such crystals.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126587166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion of LIDAR data with hyperspectral and high-resolution imagery for automation of DIRSIG scene generation","authors":"Ryan N. Givens, K. Walli, M. Eismann","doi":"10.1109/AIPR.2012.6528215","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528215","url":null,"abstract":"Developing new remote sensing instruments is a costly and time consuming process. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model gives users the ability to create synthetic images for a proposed sensor before building it. However, to produce synthetic images, DIRSIG requires facetized, three-dimensional models attributed with spectral and texture information which can themselves be costly and time consuming to produce. Recent work by Walli has shown that coincident LIDAR data and high-resolution imagery can be registered and used to automatically generate the geometry and texture information needed for a DIRSIG scene. This method, called LIDAR Direct, greatly reduces the time and manpower needed to generate a scene, but still requires user interaction to attribute facets with either library or field measured spectral information. This paper builds upon that work and presents a method for autonomously generating the geometry, texture, and spectral content for a scene when coincident LIDAR data, high-resolution imagery, and HyperSpectral Imagery (HSI) of a site are available. Then the method is demonstrated on real data.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121351927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}