{"title":"Tracking and handoff between multiple perspective camera views","authors":"S. Guler, John M. Griffith, Ian A. Pushee","doi":"10.1109/AIPR.2003.1284284","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284284","url":null,"abstract":"We present a system for tracking objects between multiple uncalibrated widely varying perspective view cameras. The spatial relationships between multiple perspective views are established using a simple setup by using tracks of objects moving in and out of individual camera views. A parameterized Edge of Field of View (EoFOV) map augmented with internal overlap region boundaries is generated based on the detected object trajectories in each view. This EoFOV map is then used to associate multiple objects entering and leaving a particular camera's FOV into and out of another camera view providing uninterrupted object tracking between multiple cameras. The main focus of the paper is robust tracking and handoff of objects between omni-directional and regular narrow FOV surveillance video cameras without the need for formal camera calibration. The system tracks objects in both omni-directional and narrow field camera views employing adaptive background subtraction followed by foreground object segmentation using gradient and region correspondence.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126746048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Photo-realistic representation of anatomical structures for medical education by fusion of volumetric and surface image data","authors":"Arthur W. Wetzel, G. L. Nieder, Geri Durka-Pelok, T. Gest, S. Pomerantz, Démian Nave, S. Czanner, Lynn Wagner, Ethan Shirey, D. Deerfield","doi":"10.1109/AIPR.2003.1284261","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284261","url":null,"abstract":"We have produced improved photo-realistic views of anatomical structures for medical education combining data from photographic images of anatomical surfaces with optical, CT and MRI volumetric data such as provided by the NLM Visible Human Project. Volumetric data contains the information needed to construct 3D geometrical models of anatomical structures, but cannot provide a realistic appearance for surfaces. Nieder has captured high quality photographic sequences of anatomy specimens over a range of rotational angles. These have been assembled into QuickTime VR Object movies that can be viewed statically or dynamically. We reuse this surface imagery to produce textures and surface reflectance maps for 3D anatomy models to allow viewing from any orientation and lighting condition. Because the volumetric data comes from different individuals than the surface images, we have to warp these data into alignment. Currently we do not use structured lighting or other direct 3D surface information, so surface shape is recovered from rotational sequences using silhouettes and texture correlations. The results of this work improves the appearance and generality of models, used for anatomy instruction with the PSC Volume Browser.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127179649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Associative memory based on ratio learning for real time skin color detection","authors":"Ming-Jung Seow, V. Asari","doi":"10.1109/AIPR.2003.1284264","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284264","url":null,"abstract":"A novel approach for skin color modeling using ratio rule learning algorithm is proposed in this paper. The learning algorithm is applied to a real time skin color detection application. The neural network learn, based on the degree of similarity between the relative magnitudes of the output of each neuron with respect to that of all other neurons. The activation/threshold function of the network is determined by the statistical characteristic of the input patterns. Theoretical analysis has shown that the network is able to learn and recall the trained patterns without much problem. It is shown mathematically that the network system is stable and converges in all circumstances for the trained patterns. The network utilizes the ratio-learning algorithm for modeling the characteristic of skin color in the RGB space as a linear attractor. The skin color will converge to a line of attraction. The new technique is applied to images captured by a surveillance camera and it is observed that the skin color model is capable of processing 420/spl times/315 resolution images of 24-bit color at 30 frames per second in a dual Xeon 2.2 GHz CPU workstation running Windows 2000.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130708380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imaging of moving targets using a Doppler compensated multiresolution method","authors":"R. Bonneau","doi":"10.1109/AIPR.2003.1284251","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284251","url":null,"abstract":"Traditional radar imaging has difficulties in imaging moving targets due to Doppler shifts induced in the imagery and limited spatial resolution of the target. We propose a method that uses a multiresolution processing technique that sharpens the ambiguity function of moving objects to remove Doppler induced imaging errors and improves instantaneous resolution. This method allows for instantaneous imaging of both static an moving objects in a computationally efficient manner thereby allowing more real time radar imagery generation.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133508835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image formation through walls using a distributed radar sensor array","authors":"A. Hunt","doi":"10.1109/AIPR.2003.1284277","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284277","url":null,"abstract":"Through the wall surveillance is a difficult but important problem for both law enforcement and military personnel. Getting information on both the internal features of a structure and the location of people inside improves the operational effectiveness in search and rescue, hostage, and barricade situations. However, the electromagnetic properties of walls constrain the choices available as sensor candidates. We have demonstrated that a high range resolution radar operating between 450 MHz and 2 GHz can be used with a fixed linear array of antennas to produce images and detect motion through both interior and exterior walls. While the experimental results are good, it has been shown that the linear array causes signal processing artifacts that appear as ghosts in the resultant images. By moving toward a sensor concept where the antennas in the array are randomly spaced, the effect of ghost images can be reduced and operational and performance benefits gained.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133368585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eigenviews for object recognition in multispectral imaging systems","authors":"R. Ramanath, W. Snyder, H. Qi","doi":"10.1109/AIPR.2003.1284245","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284245","url":null,"abstract":"We address the problem of representing multispectral images of objects using eigenviews for recognition purposes. Eigenviews have long been used for object recognition and pose estimation purposes in the grayscale and color image settings. The purpose of this paper is two-fold: firstly to extend the idealogies of eigenviews to multispectral images and secondly to propose the use of dimensionality reduction techniques other than those popularly used. Principal Component Analysis (PCA) and its various kernel-based flavors are popularly used to extract eigenviews. We propose the use of Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF) as possible candidates for eigenview extraction. Multispectral images of a collection of 3D objects captured under different viewpoint locations are used to obtain representative views (eigenviews) that encode the information in these images. The idea is illustrated with a collection of eight synthetic objects imaged in both reflection and emission bands. A Nearest Neighbor classifier is used to perform the classification of an arbitrary view of an object. Classifier performance under additive white Gaussian noise is also tested. The results demonstrate that this system holds promise for use in object recognition under the multispectral imaging setting and also for novel dimensionality reduction techniques. The number of eigenviews needed by various techniques to obtain a given classifier accuracy is also calculated as a measure of the performance of the dimensionality reduction technique.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"18 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116091850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A survey of recent developments in theoretical neuroscience and machine vision","authors":"J. Colombe","doi":"10.1109/AIPR.2003.1284273","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284273","url":null,"abstract":"Efforts to explain human and animal vision, and to automate visual function in machines, have found it difficult to account for the view-invariant perception of universals such as environmental objects or processes, and the explicit perception of featural parts and wholes in visual scenes. A handful of unsupervised learning methods, many of which relate directly to independent components analysis (ICA), have been used to make predictive perceptual models of the spatial and temporal statistical structure in natural visual scenes, and to develop principled explanations for several important properties of the architecture and dynamics of mammalian visual cortex. Emerging principles include a new understanding of invariances and part-whole compositions in terms of the hierarchical analysis of covariation in feature subspaces, reminiscent of the processing across layers and areas of visual cortex, and the analysis of view manifolds, which relate to the topologically ordered feature maps in cortex.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"433 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123049896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A real-time wide field of view passive millimeter-wave imaging camera","authors":"S. Clark, C. Martin, Peter J. Costianes, V. Kolinko, J. Lovberg","doi":"10.1109/AIPR.2003.1284280","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284280","url":null,"abstract":"With the current upsurge in domestic terrorism, suicide bombings and the like, there is an increased interest in high technology sensors that can provide true stand-off detection of concealed articles such as guns and, in particular, explosives in both controlled and uncontrolled areas. The camera discussed in this paper is based upon passive millimeter-wave imaging (75.5-93.5 GHz) and is intrinsically safe as it uses only the natural thermal (blackbody) emissions from living beings and inanimate objects to form images with. The camera consists of four subsystems which are interfaced to complete the final camera. The subsystems are Trex's patented flat panel frequency scanned phased array antenna, a front end receiver, and phase and frequency processors to convert the antenna output (in phase and frequency space) into image space and in doing so form a readily recognizable image. The phase and frequency processors are based upon variants of a Rotman lens.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117269551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereo mosaics with slanting parallel projections from many cameras or a moving camera","authors":"Zhigang Zhu","doi":"10.1109/AIPR.2003.1284282","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284282","url":null,"abstract":"This paper presents an approach of fusing images from many video cameras or a moving video camera with external orientation data (e.g. GPS and INS data) into a few mosaicked images that preserve 3D information. In both cases, a virtual 2D array of cameras with FOV overlaps is formed to generate the whole coverage of a scene (or an object). We propose a representation that can re-organize the original perspective images into a set of parallel projections with different slanting viewing angles. In addition to providing a wide field of view, there are two more benefits of such a representation. First, mosaics with different slanting views represent occlusions encountered in a usual nadir view. Second, stereo pair can be formed from a pair of slanting parallel mosaics thus image-based 3D viewing can be achieved. This representation can be used as both an advanced video interface for surveillance or a pre-processing for 3D reconstruction.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125379132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual literacy: an overview","authors":"J. Aanstoos","doi":"10.1109/AIPR.2003.1284270","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284270","url":null,"abstract":"Visual literacy may be defined as the ability to recognize and understand ideas conveyed through visible actions or images, as well as to be able to convey ideas or messages through imagery. Based on the idea that visual images are a language, some authors consider visual literacy to be more of a metaphor, relating imagery interpretation to conventional literacy, than a well-defined and teachable skill. However, the field is credited with the development of educational programs that enhance students' abilities to interpret and create visual messages, as well as improvement of reading and writing skills through the use of visual imagery. This paper presents a broad overview of the concept of field literacy, focusing on its interdisciplinary nature and varied points of view.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125503016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}