{"title":"Detection of Abnormal behavior in Dynamic Crowded Gatherings","authors":"Hiba H. Alqaysi, S. Sasi","doi":"10.1109/AIPR.2013.6749309","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749309","url":null,"abstract":"People gather for parades, sports, musical events, and mass gatherings for pilgrimage at religious places like Mecca, Jerusalem, Vatican, etc. Most often, these mass gatherings lead to crowd disasters. In this research, a new automated algorithm for the Detection of Abnormal behavior in Dynamic Crowded Gatherings (DADCG) is proposed that reduces processing time and sensitivity to noise while improving accuracy. Initially, the temporal features of the scenes are extracted using the Motion History Image (MHI) technique. Then the Optical Flow (OF) vectors are calculated for each MHI image using the Lucas-Kanade method to obtain the spatial features. This optical flow image is segmented into four equal-sized blocks. Finally, a two-dimensional histogram of motion direction and motion magnitude is generated for each block. Stampede and congestion areas can be detected by comparing the mean value of the histogram of each segmented optical flow image. Based on this result, an alarm may be generated for the security personnel to take appropriate actions.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121232820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video-based activity analysis using the L1 tracker on VIRAT data","authors":"E. Blasch, Zhonghai Wang, Haibin Ling, K. Palaniappan, Genshe Chen, D. Shen, Alexander J. Aved, G. Seetharaman","doi":"10.1109/AIPR.2013.6749311","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749311","url":null,"abstract":"Developments in video tracking have addressed various aspects such as target detection, tracking accuracy, algorithm comparison, and implementation methods which are briefly reviewed. However, there are other attributes of full motion video (FMV) tracking that require further investigation for situation awareness of event and activity analysis. Key aspects of activity and behavior analysis include interaction between individuals, groups, and crowds as well as with objects in the environment like vehicles and buildings over a specified time duration, as it is typically assumed that the activities of interest include people. In this paper, we explore activity analysis using the L1 tracker over various scenarios in the VIRAT data. Activity analysis extends event detection from tracking accuracy to characterizing number, types, and relationships between actors in analyzing human activities of interest. Relationships include correlation in space and time of actors with other people, objects, vehicles, and facilities (POVF). Event detection is more mature (e.g., based on image exploitation and tracking techniques), while activity analysis (as a higher level fusion function) requires innovative techniques for relationship understanding.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116328480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Indoor location with Wi-Fi fingerprinting","authors":"Noah Pritt","doi":"10.1109/AIPR.2013.6749334","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749334","url":null,"abstract":"There are many applications for indoor location determination, from the navigation of hospitals, airports, parking garages and shopping malls, for example, to navigational aids for the blind and visually impaired, targeted advertising, mining, and disaster response. GPS signals are too weak for indoor use, however, making it necessary to investigate other means of navigation. Most approaches such as ultrasound and RFID tags require special hardware to be installed and remain expensive and inconvenient. The solution proposed in this paper makes use of commonly available Wi-Fi networks and runs on ordinary smart phones and tablets without the need to install special hardware. It comprises a calibration stage and a navigation stage. The calibration stage creates a “Wi-Fi fingerprint” for each room of a building. It minimizes the calibration time through the use of waypoints. The navigation stage matches Wi-Fi signals to the fingerprints to determine the user's most likely location. It uses maximum likelihood classification for this matching and takes the building's topology into account through the use of Bayes' Theorem. The system is implemented as a mobile Android app and is easy to use. In testing, it took only an hour to calibrate a home or shopping mall, and the navigation stage yielded the correct location 97.5% of the time in a home and 100% of the time in a mall.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127375436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear dimensionality reduction for structural discovery in image processing","authors":"D. Floyd, R. Cloutier, Teresa Zigh","doi":"10.1109/AIPR.2013.6749319","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749319","url":null,"abstract":"Nonlinear dimensionality reduction techniques are a thriving area of research in many fields, including pattern recognition, statistical learning, medical imaging, and statistics. This is largely driven by our need to collect, represent, manipulate, and understand high-dimensional data in practically all areas of science. Here we define “high-dimensional” to be where dimension d > 10, and in many applications d ≫ 10. In this paper we discuss several nonlinear dimensionality reduction techniques and compare their characteristics, with a focus on applications to improve tractability and provide low-dimensional structural discovery for image processing.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122759435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust feature vector for efficient human detection","authors":"A. Bell","doi":"10.1109/AIPR.2013.6749310","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749310","url":null,"abstract":"This research presents a method for the automatic detection of a dismounted human at long range from a single, highly compressed image. The histogram of oriented gradients (HOG) method provides the feature vector, a support vector machine performs the classification, and the JPEG2000 standard compresses the image. This work presents an understanding of how HOG for human detection holds up as range and compression increases. The results indicate that HOG remains effective even at long distances: the average miss rate and false alarm rate are both kept to 5% for humans only 12 pixels tall and 4-5 pixels wide in uncompressed images. Next, classification performance for humans at close range (100 pixels tall) is evaluated for compressed and uncompressed versions of the same test images. Using a compression ratio of 32:1 (97% of each image's data is discarded and the image is reconstructed from only the 3% retained), the miss rates for the compressed and uncompressed images are equivalent at 0.5%, while the 1.0% false alarm rate for the compressed images is only slightly higher than the 0.5% rate for the uncompressed images. Finally, this work depicts good detection performance for humans at long ranges in highly compressed images. Insights into important design issues (for example, the impact of the amount and type of training data needed to achieve this performance) are also discussed.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":" 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120933917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image georegistration methods: A framework for application guidelines","authors":"Peter Doucette, Jim Antonisse, A. Braun, M. Lenihan, Michelle Brennan","doi":"10.1109/AIPR.2013.6749317","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749317","url":null,"abstract":"With the rapid growth of sensor platforms for imagery collection, from micro-unmanned aerial systems (UAS) to smart phones, an ability to geo-register image data is a fundamental need for many downstream applications. Approaches to georegistration for sensor imagery have deep roots in photogrammetry, and more recently in the integration of computer vision techniques. Georegistration solutions are increasingly sought for inexpensive and non-metric quality sensors and/or those that may lack the metadata needed to support rigorous coordinate transfer with error estimation. This implies a range of solution quality, with situational awareness at one end and rigorous accuracy at the other. There are a variety of correspondence and transformation models from which to select, with tradeoffs among simplicity, accuracy, and error estimation. The continually expanding vernacular of terms and methods can lead to confusion of application among the broader community of users. A sorting of representative terminology, processes, and techniques is proposed as a framework. The goal is to motivate discussion for application guidelines.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134265284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Plenoptic camera range finding","authors":"Robert Raynor, K. Walli","doi":"10.1109/AIPR.2013.6749336","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749336","url":null,"abstract":"The ability of the plenoptic camera to perform ranging has been recognized since the camera's inception. It is possible to think of the range finding operation performed by the camera alternatively in terms of “depth from parallax,” “depth from defocus,” or even “depth through refocusing.” Each of these conceptions of the problem leads to a different approach, often yielding varying results in terms of performance and efficiency. However, each is subject to the same fundamental limitations. This research attempts to formulate this theoretical limit on ranging performance. In the process, it also provides a spatial domain explanation of “light field spreading,” a sampling phenomenon of importance for both image formation and ranging, which has elsewhere been explained in the frequency domain under the assumption of band limitedness. Finally, the research describes implementations of rangefinding procedures, and provides some results for a sample plenoptic camera.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129198634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anomaly prediction in seismic signals using neural networks","authors":"A. Waibel, A. Alshehri, Soundararajan Ezekiel","doi":"10.1109/AIPR.2013.6749340","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749340","url":null,"abstract":"In this paper, we present a robust technique for predicting anomalies in the near future of an observed signal. First, wavelet de-noising is applied to the signal. Next, peak-finding algorithms search for smaller anomalies that appear frequently throughout the signal. Then the data from the peak-finding algorithm is fed into a feed-forward neural network, which predicts the likelihood of an anomalous event occurring later in the signal. The neural network is trained using supervised learning techniques with data sets consisting of a mix of signals known to precede anomalous events and signals known to be free of significant anomalies. Our approach provides a means of predicting large events in signals such as seismograms, EKGs, EEGs, and other non-stationary signals. The proposed technique yielded 83% accuracy when used to predict earthquakes using seismic signals, and so is an effective strategy for predicting seismic events.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133567195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature track summary visualization for sequential multi-view reconstruction","authors":"S. Recker, Mauricio Hess-Flores, K. Joy","doi":"10.1109/AIPR.2013.6749337","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749337","url":null,"abstract":"Analyzing sources and causes of error in multi-view scene reconstruction is difficult. In the absence of any ground-truth information, reprojection error is the only valid metric to assess error. Unfortunately, inspecting reprojection error values does not allow computer vision researchers to attribute a cause to the error. A visualization technique to analyze errors in sequential multi-view reconstruction is presented. By computing feature track summaries, researchers can easily observe the progression of feature tracks through a set of frames over time. These summaries easily isolate poor feature tracks and allow the observer to infer the cause of a delinquent track. This visualization technique allows computer vision researchers to analyze errors in ways previously unachieved. It allows for a visual performance analysis and comparison between feature trackers, a previously unachieved result in the computer vision literature. This framework also provides the foundation to a number of novel error detection and correction algorithms.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131172672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face recognition using Elastic bunch graph matching","authors":"M. Hanmandlu, Divya Gupta, S. Vasikarla","doi":"10.1109/AIPR.2013.6749338","DOIUrl":"https://doi.org/10.1109/AIPR.2013.6749338","url":null,"abstract":"A closed-set identification is implemented using the Elastic bunch graph matching (EBGM) algorithm. It uses cosine similarity as its matching criterion instead of a classifier for recognition. The proposed method makes use of facial features like fiducial points to differentiate between faces. It is insensitive to variation in facial expressions, illumination, and poses on frontal and ¾ frontal images. Experimental results show that the proposed method can achieve a recognition accuracy of 96.67% for a training-to-test ratio of 7:3 on face images. This method can be extended to provide profile face recognition.","PeriodicalId":435620,"journal":{"name":"2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131472513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}