{"title":"Detection of slump slides on earthen levees using polarimetric SAR imagery","authors":"J. Aanstoos, K. Hasan, C. O'Hara, Lalitha Dabbiru, Majid Mahrooghy, R. Nóbrega, Matthew A. Lee","doi":"10.1109/AIPR.2012.6528207","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528207","url":null,"abstract":"Key results are presented of an extensive project studying the use of synthetic aperture radar (SAR) as an aid to the levee screening process. SAR sensors used are: (1) The NASA UAVSAR (Uninhabited Aerial Vehicle SAR), a fully polarimetric L-band SAR capable of sub-meter ground sample distance; and (2) The German TerraSAR-X radar satellite, also multi-polarized and featuring 1-meter GSD, but using an X-band carrier. The study area is a stretch of 230 km of levees along the lower Mississippi River. The L-band measurements can penetrate vegetation and soil somewhat, thus carrying some information on soil texture and moisture which are relevant features to identifying levee vulnerability to slump slides. While X-band does not penetrate as much, its ready availability via satellite makes multitemporal algorithms practical. Various feature types and classification algorithms were applied to the polarimetry data in the project; this paper reports the results of using the Support Vector Machine (SVM) and back-propagation Artificial Neural Network (ANN) classifiers with a combination of the polarimetric backscatter magnitudes and texture features based on the wavelet transform. Ground reference data used to assess classifier performance is based on soil moisture measurements, soil sample tests, and on site visual inspections.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134265658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attribute analyses of GPR data for heavy minerals exploration","authors":"Aycan Catakli, Hanan Mahdi, Haydar Al Shukri","doi":"10.1109/AIPR.2012.6528192","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528192","url":null,"abstract":"This study is a continuation for our previous work [1] depicting soil mineralogy using Texture Analysis (TA) of Ground Penetrating Radar (GPR) data. In addition to TA, Complex Trace Analysis (CTA), and Center Frequency Destitution (CFD) were applied to GPR data to predict the existence of buried heavy mineral deposits. CFD and CTA attribute were also used to determine the concentration of the buried heavy mineral deposits. The features of CTA are useful in showing changes of the potential energy components such as instantaneous energy. τ-parameter and Normal Distribution of Amplitude Spectra (NDoAS) were calculated from CTA to inspect the concentration of the buried samples and CFD was used to reveal energy allocations using spectral content of GPR data in time and frequency domain. GPR data collected from laboratory experiments using 1.5 GHz antenna were used in the study. The experiments were conducted using various heavy mineral samples with different concentrations. Our previous study showed that buried minerals produced high entropy, contrast, correlation, standard deviation, and cluster, but these samples produced low energy, and homogeneity. Variance measure signifies edges of buried samples within host material. This study indicates that first and second derivatives of the envelope calculated from CTA emphasize the variation of the reflected energy and sharpen the reflection boundaries in the data. Instantaneous measures (energy and power) of envelope data reveal the existence of buried samples, while the frequency distribution of the data enables locating the contact of buried mineral. We found τ-parameter, NDoAS, and center-frequency proportionally increase with increased concentration of the mineral samples. The results from the three analyses, although in agreement with the previous work, they substantially improve the detection as well as quantifying the mineral concentration.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131599381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the uses of fingerprint patterns and ridge-counts in biographical associations","authors":"Dale Herdegen, M. Loew","doi":"10.1109/AIPR.2012.6528220","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528220","url":null,"abstract":"Fingerprint characteristics describe people and groups. A person can be described approximately by fingerprint patterns and ridge-counts, and described uniquely by fingerprint minutiae. A group can be described by overall pattern frequencies and average ridge-counts (per-person or per-finger). When a group comprises related persons, the genetic basis of fingerprints produces common qualitative and quantitative fingerprint characteristics within the group. Those common characteristics allow for differentiation between endogamous (related by marriage) groups; forensic anthropology is replete with examples detailing the dermatoglyphic differences between endogamous groups based on race, religion, geography, or caste. This paper examines the degree of differentiation between discrete endogamous groups and explores the ability to associate an individual to a group by comparing individual-to-group fingerprint characteristics. Data from dermatoglyphic anthropologic studies cited herein are used to illustrate the ability to differentiate groups based on fingerprint pattern and/or ridge-counts. In some instances, the degree of differentiation between groups suggests that it is possible to identify associations of individuals to a group. A case study is presented that illustrates the association of a person to one of two endogamous groups by comparing individual per-finger patterns and ridge-counts to composited group pattern and ridge-count information. In this case, approximately 80 percent association accuracy was achieved using a decision tree classifier. The success achieved in this limited case suggests further study in associating persons to groups using fingerprint characteristics.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131147974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial feature evaluation for aerial scene analysis","authors":"Thomas Swearingen, A. Cheriyadat","doi":"10.1109/AIPR.2012.6528212","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528212","url":null,"abstract":"High-resolution aerial images are becoming more readily available, which drives the demand for robust, intelligent and efficient systems to process increasingly large amounts of image data. However, automated image interpretation still remains a challenging problem. Robust techniques to extract and represent features to uniquely characterize various aerial scene categories is key for automated image analysis. In this paper we examined the role of spatial features to uniquely characterize various aerial scene categories. We studied low-level features such as colors, edge orientations, and textures, and examined their local spatial arrangements. We computed correlograms representing the spatial correlation of features at various distances, then measured the distance between correlograms to identify similar scenes. We evaluated the proposed technique on several aerial image databases containing challenging aerial scene categories. We report detailed evaluation of various low-level features by quantitatively measuring accuracy and parameter sensitivity. To demonstrate the feature performance, we present a simple query-based aerial scene retrieval system.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"63 S15","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113973085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wide-area motion imagery (WAMI) exploitation tools for enhanced situation awareness","authors":"Erik Blasch, G. Seetharaman, K. Palaniappan, Haibin Ling, Genshe Chen","doi":"10.1109/AIPR.2012.6528198","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528198","url":null,"abstract":"The advent of streaming feeds of full-motion video (FMV) and wide-area motion imagery (WAMI) have overloaded an image analyst's capacity to detect patterns, movements, and patterns of life. To aid in the process of WAMI exploitation, we explore computer vision and pattern recognition methods to cue the user to salient information. For enhanced exploitation and analysis, there is a need to develop WAMI methods for situation awareness. Computer vision algorithms provide cues, contexts, and communication patterns to enhance exploitation capabilities. Multi-source data fusion using exploitation context from the video needs to be linked to semantically extracted elements for situation awareness to aid an operator in rapid image understanding. In this paper, we identify: (1) opportunities from computer vision techniques to improve WAMI target tracking, (2) relate developments of clustering methods for activity-based intelligence and stochastic context-free grammars for accessing, indexing, and linking relevant information to assist processing and exploitation, and (3) address situation awareness methods of multi-intelligence collaboration for future automated video understanding techniques. Our example uses the open-source Columbus Large Image Format (CLIF) WAMI data to demonstrate connection of video-based semantic labeling with other information fusion enterprise capabilities incorporating text-based semantic extraction.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132163282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online biometric authentication using facial thermograms","authors":"M. Hanmandlu, S. Vasikarla","doi":"10.1109/AIPR.2012.6528223","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528223","url":null,"abstract":"The requirement for a reliable personal identification in computerized access control, security applications, human machine interaction etc has led to an unprecedented interest in biometrics. The usefulness of face as a primary modality for biometric authentication is on the rise in the recent years because of it's non-intrusiveness and uniqueness. Visual Face recognition is successful only in the controlled environment but fails in the case of disguised faces and under varying lighting conditions. As an alternative to the visual recognition this paper presents the Long Wave Infra Red (LWIR) for face recognition. In this we make of the facial thermograms that are the images formed by the capturing the heat radiated by the face. It is observed that it's performance falls drastically with varying temperature conditions. To overcome this drawback simplified Blood perfusion model is proposed to convert thermograms into Blood perfusion data. If a person wears spectacles, the glasses obstruct the radiated and hence thermograms loses the information. An efficient algorithm is developed to detect the eyeglasses and to remove it's effect.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131106459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced event recognition in video using image quality assessment","authors":"J. Irvine, M. Young, Owen Deutsch, Erik Antelman, S. Guler, Ashutosh Morde, Xiang Ma, Ian Pushee","doi":"10.1109/AIPR.2012.6528211","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528211","url":null,"abstract":"Extensive growing repositories of multimedia present significant challenges for storage, indexing, retrieval, and analysis. The ability to recognize events based on automated analysis of the video content would facilitate tagging and retrieval of relevant data from large repositories. The unconstrained nature of multi-media data means that metadata often associated with a video is not known. In addition, many clips exhibit poor quality due to lighting, camera motion, compression artifacts, and other factors. The variable and frequently poor quality of video data challenges the state of the art in computer vision. In the absence of sensor metadata, we present an approach that estimates various attributes of video quality based on the content and incorporates this information into the event classification. Using a set of canonical content detectors, we establish a baseline level of event classification performance. Guided by the quality assessment into the classification process, we can identify data quality problems automatically. This analysis is a first step in tailored processing that would adapt the content extraction method to the estimated quality level. We present the formulation of the image quality measures and a quantitative assessment of the methods.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130869548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of Future image frames using Adaptive Network Based Fuzzy Inference System on spatiotemporal framework","authors":"N. Verma, Shimaila","doi":"10.1109/AIPR.2012.6528197","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528197","url":null,"abstract":"This paper presents an algorithm for Future image frames generation using Adaptive Network Based Fuzzy Inference System (ANFIS) on spatiotemporal framework. The input to the network is a hyper-dimensional color and spatiotemporal feature of a pixel in an image sequence. The ANFIS is trained for R, G and B values separately for each and every pixel in image frame. Principal Component Analysis, Interaction Information and Bhattacharyya Distance measure have been used to reduce the dimensionality of the feature set. The resulting scheme has successfully been applied on satellite image sequence of a tropical cyclone. Two image quality assessment techniques, Canny edge detection based Image Comparison Metric (CIM) and Mean Structural Similarity Index Measure (MSSIM) have been used to evaluate future image frames quality. The proposed approach is found to have generated nine future image frames successfully.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126530260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextual video clip classification","authors":"S. Guler, Ashutosh Morde, Ian A. Pushee, Xiang Ma, Jason A. Silverstein, S. McAuliffe","doi":"10.1109/AIPR.2012.6528196","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528196","url":null,"abstract":"Content based classification of unrestricted video clips from various sources plays an important role in video analysis and search. Thus far automated video understanding research focused on videos from sources such as aerial, broadcast, meeting room etc. For each of these video sources certain assumptions are made which constrain the problem of content analysis. None of these assumptions hold for analyzing the contents of unrestricted videos. We present a top down approach to content based video classification by first understanding the overall scene structure and then detecting the actors, actions and objects along with the context they interact in as well as the global motion information from the scene. A scene in a video clip is used as a semantic unit providing the visual context and the location characteristics such as indoor, outdoor and type of each associated with the scene. The location context is tied with the video shooting style of zooming in and out to create a scene description hierarchy. Actors are considered as detected people and faces, certain poses of people help define the action and activities, while objects relevant to certain types of events provide additional context. Summary features are created for the scene semantic units based on the actors, actions, object detections and the context. These features were successfully used to train an asymmetric Random Forest classifier for video event classification. The top down approach we present here has the inherent advantage of being able to describe the video in addition to providing content based classification. The approach was tested on the Multimedia Event Detection (MED) 2011 dataset with promising results.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123885954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust needle recognition using Artificial Neural Network (ANN) and Random Sample Consensus (RANSAC)","authors":"Jaewon Chang","doi":"10.1109/AIPR.2012.6528219","DOIUrl":"https://doi.org/10.1109/AIPR.2012.6528219","url":null,"abstract":"In this paper, we suggest an algorithm for a half-circle-like surgical needle recognition in stereo image. The recognition starts from segmentation of needle in both stereo images using Artificial Neural Network (ANN). Next, the points in the segments are being matched to each other stereo image through intensity based matching, and then re-projected to 3D space which will be fitted to 3D circle. Finally, estimate the circle of the needle using RANdom SAmple Consensus (RANSAC) and known specification of the needle.","PeriodicalId":406942,"journal":{"name":"2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124061430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}