{"title":"Contourlet image preprocessing for enhanced control point selection in airborne image registration","authors":"Theodore Sobolewski, Neal Messer, Adam Lutz, Soundararajan Ezekiel, Erik Blasch, M. Alford, A. Bubalo","doi":"10.1109/AIPR.2015.7444529","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444529","url":null,"abstract":"In applications such as airborne imagery, target tracking, remote sensing, and medical imaging; it is helpful to have an image set where all of the images lie on one fixed coordinate system. However, frequently a set of images cannot be captured from a fixed perspective using the same sensor or different sensors at the same time. Image registration presents a solution by mapping points from one image to corresponding points in another image; however existing registration methods are computationally expensive and not completely accurate. Hence, continual investigation of image registration methods is needed such as those using feature-based or intensity-based approaches, transformation models, spatial and frequency domain methods, and single or multi-modality data. In this paper, we investigate these processes by focusing on the identification of control points, which play a vital role in the process of registering images. By using the multi-resolution contourlet transform for image preprocessing, control points are better identified, which provides us a more reliable image registration for applications such as image fusion.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116317099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised classification of SAR imagery using polarimetric decomposition to preserve scattering characteristics","authors":"R. Marapareddy, J. Aanstoos, N. Younan","doi":"10.1109/AIPR.2015.7444532","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444532","url":null,"abstract":"We propose an unsupervised classification method using polarimetric synthetic aperture radar data to detect anomalies on earthen levees. This process mainly involves two stages: 1. Apply the scattering model-based decomposition developed by Freeman and Durden to divide pixels into three scattering categories: surface scattering, volume scattering, and double-bounce scattering. A class initialization scheme is also performed to initially merge clusters from many small clusters in each scattering category by applying a merge criterion developed based on the Wishart distance measure. 2. The iterative Wishart classifier is applied, which is a maximum likelihood classifier based on the complex Wishart distribution. This method not only uses a statistical classification, but also preserves the purity of dominant polarimetric scattering properties, and is superior to the entropy/anisotropy/Wishart classifier. An automated color rendering scheme is applied, based on the classes' scattering category to code the pixels. The effectiveness of the algorithms is demonstrated using fully quad-polarimetric L-band SAR imagery from the NASA Jet Propulsion Laboratory's (JPL's) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The study area is a section of the lower Mississippi River valley in the southern USA, where earthen flood control levees are maintained by the US Army Corps of Engineers.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2017 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114873590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fine-grain uncommon object detection from satellite images","authors":"Lily Lee, Benjamin Smith, T. Chen","doi":"10.1109/AIPR.2015.7444538","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444538","url":null,"abstract":"The ever increasing amount of earth observing satellite images is a vast treasure trove of interesting objects. We address the topic of object detection from satellite images in cases where the object is rarely observed and hence there is very little availability of images to support training classifiers. Unlike objects observed on the ground, there is no equivalent ImageNet with labeled data for objects as seen from satellite or aerial platform sensors that could be used to train classifiers. In addition, we focus on specific uncommon objects with very limited observations. To overcome the lack of training data, we built a near-class object detector and verified the uncommon object detection using images from different domains. We demonstrate the performance of our uncommon object detector and show a high detection rate in satellite images.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125953946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated variability selection in time-domain imaging surveys using sparse representations with learned dictionaries","authors":"D. Moody, P. Wozniak, S. Brumby","doi":"10.1109/AIPR.2015.7444552","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444552","url":null,"abstract":"Exponential growth in data streams and discovery power delivered by modern time-domain imaging surveys creates a pressing need for variability extraction algorithms that are both fully automated and highly reliable. The current state of the art methods based on image differencing are limited by the fact that for every real variable source the algorithm returns a large number of bogus “detections” caused by atmospheric effects and instrumental signatures coupled with imperfect image processing. Here we present a new approach to this problem inspired by recent advances in computer vision and train the machine to learn new features directly from pixel data. The training data set comes from the Palomar Transient Factory survey and consists of small images centered around transient candidates with known real/bogus classification. This set of high-dimensional vectors (~1000 features) is then transformed into a linear representation using the so called dictionary, an overcomplete feature set constructed separately for each class. The data vectors are well approximated with a small number of dictionary elements, i.e. the dictionary representation is sparse. We show how sparse representations can be used to construct informative features for any suitable machine learning classifier. Our top level classifier is based on the random forest algorithm (collections of decision trees) with input data vectors consisting of up to 6 computer vision features and 20 additional context features designed by subject domain experts. Machine-learned features alone provide only an approximate classification with a 20% missed detection rate at a fixed false positive rate of 1%. When automatically extracted features are appended to those constructed by humans, the rate of missed detections is reduced from 8% to about 4% at 1% false positive rate.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115824740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating temporal and spectral features of astronomical data using wavelet analysis for source classification","authors":"T. Ukwatta, P. Wozniak","doi":"10.1109/AIPR.2015.7444533","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444533","url":null,"abstract":"Temporal and spectral information extracted from a stream of photons received from astronomical sources is the foundation on which we build understanding of various objects and processes in the Universe. Typically astronomers fit a number of models separately to light curves and spectra to extract relevant features. These features are then used to classify, identify, and understand the nature of the sources. However, these feature extraction methods may not be optimally sensitive to unknown properties of light curves and spectra. One can use the raw light curves and spectra as features to train classifiers, but this typically increases the dimensionality of the problem, often by several orders of magnitude. We overcome this problem by integrating light curves and spectra to create an abstract image and using wavelet analysis to extract important features from the image. Such features incorporate both temporal and spectral properties of the astronomical data. Classification is then performed on those abstract features. In order to demonstrate this technique, we have used gamma-ray burst (GRB) data from the NASA's Swift mission to classify GRBs into high- and low-redshift groups. Reliable selection of high-redshift GRBs is of considerable interest in astrophysics and cosmology.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128307503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AEGIS autonomous targeting for the Curiosity rover's ChemCam instrument","authors":"R. Francis, T. Estlin, D. Gaines, B. Bornstein, S. Schaffer, V. Verma, R. Anderson, M. Burl, Selina Chu, R. Castaño, D. Thompson, D. Blaney, L. D. Flores, G. Doran, T. Nelson, R. Wiens","doi":"10.1109/AIPR.2015.7444544","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444544","url":null,"abstract":"AEGIS (Autonomous Exploration for Gathering Increased Science) is a software suite that will imminently be operational aboard NASA's Curiosity Mars rover, allowing the rover to autonomously detect and prioritize targets in its surroundings, and acquire geochemical spectra using its ChemCam instrument. ChemCam, a Laser-Induced Breakdown Spectrometer (LIBS), is normally used to study targets selected by scientists using images taken by the rover on a previous sol and relayed by Mars orbiters to Earth. During certain mission phases, ground-based target selection entails significant delays and the use of limited communication bandwidth to send the images. AEGIS will allow the science team to define the properties of preferred targets, and obtain geochemical data more quickly, at lower data penalty, without the extra ground-inthe-loop step. The system uses advanced image analysis techniques to find targets in images taken by the rover's stereo navigation cameras (NavCam), and can rank, filter, and select targets based on properties selected by the science team. AEGIS can also be used to analyze images from ChemCam's Remote Micro Imager (RMI) context camera, allowing it to autonomously target very fine-scale features - such as veins in a rock outcrop - which are too small to detect with the range and resolution of NavCam. AEGIS allows science activities to be conducted in a greater range of mission conditions, and saves precious time and command cycles during the rover's surface mission. The system is currently undergoing initial tests and checkouts aboard the rover, and is expected to be operational by late 2015. Other current activities are focused on science team training and the development of target profiles for the environments in which AEGIS is expected to be used on Mars.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122375223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Error estimation for gridded bathymetry","authors":"Peter Doucette, J. Dolloff, A. Braun, Adam Gurson, C. Read, B. Shapo","doi":"10.1109/AIPR.2015.7444528","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444528","url":null,"abstract":"Estimating the uncertainty or predicted accuracy of gridded products that are generated from historical bathymetric survey data is of high interest to the maritime navigation community. Surface interpolation methods used for gridding survey data in practice are well established. This paper investigates error estimation methods for gridded bathymetry in terms of their practical utility. Of particular interest are: 1) assessing the quality of a prior uncertainty of random error in survey data; 2) the significance of autocorrelated random errors; 3) the relationship between survey point density and propagated or product uncertainty; 4) the computational feasibility of Monte Carlo (MC) methods over large regions; and 5) the value of cross-validation to estimate error in the absence of controlled truth. K-fold cross-validation is used as the basis for performance evaluation of our approach to propagate a priori random errors via MC perturbation with spline-in-tension surface interpolation. Experiments are conducted with test areas in the Norwegian archipelago of Svalbard.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126406600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Change detection analysis of tornado disaster using conditional copulas and Data Fusion for cost-effective disaster management","authors":"B. Gokaraju, A. Turlapaty, D. Doss, R. King, N. Younan","doi":"10.1109/AIPR.2015.7444537","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444537","url":null,"abstract":"The up-to-date results are presented from an ongoing study of the Data Fusion of multi-temporal and multi-sensor satellite datasets for near real time damage and debris assessment after a tornado disaster event. The space-borne sensor datasets comprising of: (i) C-band SAR dataset from RADARSAT-2; (ii) Multi-Spectral (MS) optical dataset including NIR from RapidEye; (iii) MS and panchromatic dataset of Advanced Linear Imaging (ALI), are studied for multi-sensor data fusion. A combined approach of multi-polarized radiometric and textural feature extraction, and statistical learning based feature classification is devised for fine tuning of the complex and generalized change detection model. We also investigated the use of multi-variate conditional copula as a classifier technique, by formulating the change and no-change as a binary-class classification problem in this study. The classification results from the above technique are used for assessment of damage and debris cover after the tornado disaster event. The performance of the above approach yields a very significant Kappa accuracy up to 75%. A 10-fold cross validation strategy is used for quantitative analysis of the performance of the classification model. This study will be further extended for modelling the effect of incidence angle discrepancies or climatic condition variances, which will address the heterogeneity factor in terms of local statistics of the dataset.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121456893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous on-board Near Earth Object detection","authors":"P. Rajan, P. Burlina, M. Chen, D. Edell, B. Jedynak, N. Mehta, Ayushi Sinha, Gregory Hager","doi":"10.1109/AIPR.2015.7444551","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444551","url":null,"abstract":"Most large asteroid population discovery has been accomplished to date by Earth-based telescopes. It is speculated that most of the smaller Near Earth Objects (NEOs) that are less than 100 meters in diameter, whose impact can create substantial city-size damage, have not yet been discovered. Many asteroids cannot be detected with an Earth-based telescope given their size and/or their location with respect to the Sun. We are investigating the feasibility of deploying asteroid detection algorithms on-board a spacecraft, thereby minimizing the expense and need to downlink large collection of images. Having autonomous on-board image analysis algorithms enables the deployment of a spacecraft at approximately 0.7 AU heliocentric or Earth-Sun L1/L2 halo orbits, removing some of the challenges associated with detecting asteroids with Earth-based telescopes. We describe an image analysis algorithmic pipeline developed and targeted for on-board asteroid detection and show that its performance is consistent with deployment on flight-qualified hardware.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125104984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyperspectral target detection using manifold learning and multiple target spectra","authors":"A. Ziemann, J. Theiler, D. Messinger","doi":"10.1109/AIPR.2015.7444547","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444547","url":null,"abstract":"Imagery collected from satellites and airborne platforms provides an important tool for remotely analyzing the content of a scene. In particular, the ability to remotely detect a specific material within a scene is of critical importance in nonproliferation and other applications. The sensor systems that process hyperspectral images collect the high-dimensional spectral information necessary to perform these detection analyses. For a d-dimensional hyperspectral image, however, where d is the number of spectral bands, it is common for the data to inherently occupy an m-dimensional space with m ≪ d. In the remote sensing community, this has led to recent interest in the use of manifold learning, which seeks to characterize the embedded lower-dimensional, nonlinear manifold that the data discretely approximate. The research presented here focuses on a graph theory and manifold learning approach to target detection, using an adaptive version of locally linear embedding that is biased to separate target pixels from background pixels. This approach incorporates multiple target signatures for a particular material, accounting for the spectral variability that is often present within a solid material of interest.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125784386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}