{"title":"Do you see what i see?","authors":"P. S. Cerkez","doi":"10.1109/AIPR.2013.6749313","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749313","url":null,"abstract":"Semagrams are a subset of steganography. When a message is transmitted in a non-textual format (i.e., in the visual content of an image), it is referred to as a semagram. While semagrams are relatively easy to create (as shown in published papers covering hiding techniques), detecting a hidden message in, or embedded as, an image-based semagram is far more difficult than detecting typical digital steganography. US patents issued for semagram technology show that this feature has been exploited in the copyright/watermarking world to increase protection. In a semagram, the image is the message; semagrams work well for simple messages and dead drops. Attacks on semagrams are primarily visual examinations of artifacts. In the counter-espionage world, the rule of thumb is that there is always a message hidden in an image or graphic; it is simply up to the steganalyst to find it. In short, detecting semagrams is a matter of recognizing patterns of patterns that represent a hidden message within an image. This presentation provides a brief summary of the technology underlying semagrams, presents a short non-technical discussion of the technology used in attacks on semagrams, and concludes with a discussion of current work and planned future implementations of the proven semagram detection ANN. 
It will focus on extending the ANN to other domains (e.g., non-visual spectra, multi-/cross-spectrum correlation, scene identification, image classification) and efforts to improve processing speed and throughput via parallel/distributed methods.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114996699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-perspective anomaly prediction using neural networks","authors":"A. Waibel, A. Alshehri, Soundararajan Ezekiel","doi":"10.1109/AIPR.2013.6749341","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749341","url":null,"abstract":"In this paper, we introduce a technique for predicting anomalies in a signal by observing relationships between multiple meaningful transformations of the signal, called perspectives. In particular, we use the Fourier transform to provide a holistic view of the frequencies present in a signal, along with a wavelet-denoised signal that is filtered to locate anomalous peaks. We then input these perspectives of the signal into a feedforward neural network to recognize patterns in the relationship between the perspectives and the presence of anomalies. The neural network is trained using a supervised learning algorithm for a given data set. Once trained, the neural network outputs the probability of a significant event occurring later in the signal, based on anomalies occurring in its early part. A large collection of seismic signals was used in this study to illustrate the underlying methodology. Using this method we were able to achieve 54.7% accuracy in predicting anomalies later in a seismic signal. 
The techniques we present in this paper, with some refinement, can readily be applied to detect anomalies in seismic, electrocardiogram, electroencephalogram, and other non-stationary signals.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125867051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Edge grouping around a fixation point","authors":"Toshiro Kubota","doi":"10.1109/AIPR.2013.6749328","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749328","url":null,"abstract":"The paper presents three edge grouping algorithms for finding a closed contour that starts from a selected edge point and encloses a fixation point. The algorithms search for a shortest simple cycle in a graph derived from an edge image, where a vertex is an end point of a contour fragment and an undirected arc is drawn between every pair of end points whose visual angle from the fixation point is less than a threshold value (set to π/2 in our experiments). The first algorithm restricts the search space to shapes in which no contour point seen from the fixation point is occluded by other contour points, and finds the shortest simple cycle. The second algorithm restricts the search space to shapes in which the starting edge point neither occludes nor is occluded by other contour points, and finds a shortest simple cycle. The third algorithm is free from any constraints, but does not guarantee that the solution is a shortest cycle; it does, however, guarantee a solution no worse than that of the second algorithm. The paper demonstrates the effectiveness of these algorithms on a number of natural images. 
Finally, the paper proposes a way to automate the placement of the fixation point and the starting point so that the procedure runs in a fully automated manner.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125524303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision systems and the lives of people with disabilities","authors":"J. Peters, Vincent Collin","doi":"10.1109/AIPR.2013.6749333","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749333","url":null,"abstract":"Many of the difficulties faced by disabled persons can be alleviated using imagery. This is true not only for the visually impaired, but also for other kinds of disability. We review the major contributions of image processing in these different contexts, showing at the same time the benefit of this technology for everybody. The gap between able-bodied and impaired persons is not so large. This leads us to the concept of “Universal Design”.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"298 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132002607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear cyclic pursuit based prediction of personal space violation in surveillance video","authors":"Neha Bhargava, S. Chaudhuri, G. Seetharaman","doi":"10.1109/AIPR.2013.6749324","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749324","url":null,"abstract":"Analysis of human interaction in a social gathering is of high interest in security and surveillance applications. It is also of psychological interest to study such interaction to gain a better understanding of participant behavior. This paper is an attempt to explore and analyze interactions among individuals from a single calibrated camera. We are particularly interested in trajectory prediction; the predicted trajectories of individuals are then used to predict personal space violations. Each individual, represented by a feature point in a 2.5D coordinate system, is tracked using the Lucas-Kanade tracking algorithm. We use the linear cyclic pursuit framework to model this point motion, and the model is used for short-term prediction of individual trajectories. We demonstrate these ideas on different types of datasets.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130116189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Electroencephelograph based brain machine interface for controlling a robotic arm","authors":"Wenjia Ouyang, K. Cashion, V. Asari","doi":"10.1109/AIPR.2013.6749312","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749312","url":null,"abstract":"A brain machine interface (BMI) facilitates the control of machines through the analysis and classification of signals directly from the human brain. Using an electroencephalograph (EEG) to detect neurological activity permits the collection of data representing brain signals without the need for invasive technology or procedures. A 14-electrode EPOC headset produced by the Emotiv Company is used to capture live data, which can then be classified and encoded into control signals for a 7-degree-of-freedom robotic arm. The collected data is analyzed using independent component analysis (ICA) based feature extraction and a neural network classifier. The collected EEG data is classified into one of four control signals: lift, lower, rotate clockwise, and rotate counter-clockwise. Additionally, the system monitors the collected data for electromyography (EMG) signals indicative of movement of the facial muscles. These detections are used to incorporate two additional control signals: open and close. A personal set of EEG data patterns is trained for each individual, with each control signal requiring only a few minutes of initial training. EMG signal detections are measured against a generic threshold for all users. Once a user has trained their personal data into the system, any positive detection triggers a signal to the interfaced robotic arm to perform a corresponding discrete action. Currently, subjects are able to repeatedly and accurately execute two EEG commands within a short period of time. As the number of EEG-based commands increases, the training time required for accurate control increases significantly. EMG-based control is almost always immediately responsive. 
In order to extend the range of available controls beyond a few discrete actions, this research aims to refine the classification and detection algorithms so as to shift more of the training burden onto the computer.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114248045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensor fusion framework for robust occupancy grid mapping","authors":"K. S. Nagla, Dilbag Singh, M. Uddin","doi":"10.1109/AIPR.2013.6749330","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749330","url":null,"abstract":"Sensor-based perception of the environment is an emerging area of research in which sensors play a pivotal role in enabling mobile robots to map the environment. For autonomous mobile robot mapping, information from different range sensors, such as vision sensors, laser range finders, and ultrasonic and infrared sensors, is fused to obtain a better perception. Despite significant progress in this area, attaining robust and reliable maps still poses great challenges. In this paper, a new sensor fusion architecture is proposed to make the map robust and reliable. The proposed architecture consists of three main segments: (a) pre-processing of sensory information, (b) fusion of information from heterogeneous sensors, and (c) post-processing of the map. As reported in the literature, specular reflection of the sonar sensor is considered the fundamental cause of error in map making. To overcome this problem, pre-processing of the sonar information is proposed, in which a fuzzy logic algorithm is used to discard the specular information. The proposed fuzzy technique increases the average performance of the resultant grid by 6.6%. The last part of the paper deals with post-processing of the grid with a newly proposed dedicated filter (DF). The updated results using the proposed framework show an average improvement of 8.4% in the occupancy grid. 
The qualitative comparisons show the improvement in the results, where the overall occupied and empty area of the resultant map is very close to that of the reference map.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122591506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an efficient perception system and a path planning algorithm for autonomous mobile robots","authors":"Sherif M. A. Matta, N. Chalhoub","doi":"10.1109/AIPR.2013.6749329","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749329","url":null,"abstract":"The current work deals with the development of enabling technologies for autonomous mobile robots. A perception system, based on a laser range finder sensor, has been developed to enable robots to construct 3-D maps of their surroundings, from which projected free configuration space maps are generated. These maps are then used by a computationally efficient path planning algorithm to yield the shortest possible route between the robot's current position and a user-defined final position.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124279070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multimodal approach to high resolution image classification","authors":"Ryan N. Givens, K. Walli, M. Eismann","doi":"10.1109/AIPR.2013.6749322","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749322","url":null,"abstract":"As the collection of multiple modalities over a single region of interest becomes more common, users are provided with the capability to overcome the limitations of one data type by using the strengths of another. Often, when working only with hyperspectral imagery, scene classification is limited both by the generally lower spatial resolution of the hyperspectral imagery and by the inability to distinguish classes that are spectrally similar, such as asphalt roofing material and road asphalt. This paper presents and demonstrates a method to determine pure pixels in hyperspectral imagery by taking advantage of the higher spatial resolution information available in color imagery fused with LIDAR return strength and elevation data. In turn, the spectral information gained from the hyperspectral imagery is then used to perform image classification at the higher resolution of the color image. The result is a fully automated process for pure pixel determination and high resolution image classification.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130519289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of commercial remote sensing to issues in human geography","authors":"J. Irvine, J. Kimball, J. Regan, J. Lepanto","doi":"10.1109/AIPR.2013.6749327","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749327","url":null,"abstract":"Characterizing attributes of a society is fundamental to human geography. Cultural, social, and economic factors that are critical to understanding societal attitudes are associated with specific phenomena that are observable from overhead imagery. The application of remote sensing to specific issues, such as population estimation, agricultural analysis, and environmental monitoring, has shown great promise. Extending these concepts, we explore the potential for assessing aspects of governance, well-being, and social capital. Social science theory indicates the relationships among physical structures, institutional features, and social structures. Motivated by this underlying theory, we explore the relationship between observable physical phenomena and attributes of the society. Using imagery data from two study regions, sub-Saharan Africa and rural Afghanistan, we present an initial exploration of the direct and indirect indicators derived from the imagery. We demonstrate a methodology for extracting relevant measures from the imagery, using a combination of human-guided and machine learning methods. 
Our comparison of results for the two regions demonstrates the degree to which methods can generalize or must be tailored to a specific study area.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123991878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}