{"title":"3-D real-time gesture recognition using proximity spaces","authors":"E. Huber","doi":"10.1109/ACV.1996.572020","DOIUrl":"https://doi.org/10.1109/ACV.1996.572020","url":null,"abstract":"A 3D segment tracking approach to recognition of human pose and gestures is presented. The author has previously developed and refined a stereo based method, called the proximity space method, for acquiring and maintaining the track of object surfaces in 3-space. This method uses LoG filtered images and relies solely on stereo measurement to spatially distinguish between objects in 3D. The objective of the work is to obtain useful state information about the shape, size, and pose of natural (unadorned) objects in their naturally cluttered environments. Thus, the system does not require or benefit from special markers, colors, or other tailored artifacts. Recently he has extended this method in order to track multiple regions and segments of complex objects. The paper describes techniques for applying the proximity space method to a particularly interesting system: the human. Specifically, he discusses the use of simple models for constraining proximity space behavior in order to track gestures as a person moves through a cluttered environment. It is demonstrated that by observing the behavior of the model, used to tract the human's pose through time, different gestures can be easily recognized. The approach is illustrated through a discussion of gestures used to provide logical and spatial commands to a mobile robot.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114591228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast range image segmentation using high-level segmentation primitives","authors":"Xiaoyi Jiang, Urs Meier, H. Bunke","doi":"10.1109/ACV.1996.572006","DOIUrl":"https://doi.org/10.1109/ACV.1996.572006","url":null,"abstract":"In this paper we present a novel algorithm for very fast segmentation of range images into both planar and curved surface patches. In contrast to other known segmentation methods our approach makes use of high-level features (curve segments) as segmentation primitives instead of individual pixels. This way the amount of data can be significantly reduced and a very fast segmentation algorithm is obtained. The proposed algorithm has been tested on a large number of real range images and demonstrated good results. With an optimized implementation our method has the potential to operate in quasi real-time (a few range images per second).","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129450706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonparametric correction of distortion","authors":"D. Stevenson, Margaret M. Fleck","doi":"10.1109/ACV.1996.572058","DOIUrl":"https://doi.org/10.1109/ACV.1996.572058","url":null,"abstract":"Images taken with wide angle and inexpensive medium angle lenses show substantial distortion, which will cause many computer vision applications to malfunction. Parametric calibration algorithms cannot handle the wide range of possible distortion functions and they require a known image center, expensive equipment, and/or estimation of unneeded parameters. We present a new algorithm to remove distortion. It is similar to the plumb line method but it: (a) uses images of spheres rather than lines, (b) corrects to stereographic rather than perspective projection, (c) calibrates aspect ratio, and (d) uses a nonparametric distortion model.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122835010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance evaluation of people tracking systems","authors":"G. Pingali, J. Segen","doi":"10.1109/ACV.1996.571994","DOIUrl":"https://doi.org/10.1109/ACV.1996.571994","url":null,"abstract":"A tracking system outputs a separate motion trajectory for each moving object in a scene. The paper presents a problem of performance evaluation and performance metrics for real time systems that track people, or moving objects, in video sequences, and it proposes performance measurement methodology for such systems. Two approaches to measuring performance are presented. The first approach compares the computed motion trajectories to the reference trajectories. It enables a complete evaluation of tracking results, but reference trajectories it requires are difficult to get. The second, more practical approach identifies in the computed trajectories specific discrete events, such as line crossings, and compares sequences of these events to sequences of reference events, which are much easier to obtain than reference trajectories. These events can usually be chosen such that they reflect the application goal of a tracking system, e.g. counting people in an area. Precision of evaluation increases with density of events. Short event sequences measure the sensitivity and selectivity of a tracking method, i.e. how well it satisfies the \"one person one trajectory\" objective. Long sequences measure continuity of trajectories: how long a method can keep track of one person. The paper shows performance measurement results for a real time people tracking system developed by the authors.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"26 2-4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114028995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust automatic target recognition in second generation FLIR images","authors":"D. Nair, J. Aggarwal","doi":"10.1109/ACV.1996.572055","DOIUrl":"https://doi.org/10.1109/ACV.1996.572055","url":null,"abstract":"In this paper we present a system for the detection and recognition of targets in second generation forward looking infrared (FLIR) images. The system uses new algorithms for target detection and segmentation of the targets. Recognition is based on a methodology far target recognition by parts. A diffusion based approach for determining the parts of a target is also presented here. Experimental results on a large database of FLIR images validate the robustness of the system, and its applicability to FLIR imagery obtained from real scenes.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123127498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A machine vision system for the automated classification and counting of neurons in 3-D brain tissue samples","authors":"D. Slater, G. Healey, P. Sheu, C. Cotman, Joseph H. Su, A. Wasserman, W. Shankle","doi":"10.1109/ACV.1996.572059","DOIUrl":"https://doi.org/10.1109/ACV.1996.572059","url":null,"abstract":"Neuron count in various brain structures is an important factor in many neurobiological studies. We describe a machine vision system which uses color images for the automated classification and counting of neurons in tissue samples. Samples are sliced into registered sections whose thickness is on the order of the diameter of a neuronal nucleus. Sections are stained so that the spectral transmission functions of the neuronal nuclei differ from the surrounding tissue. Each section is imaged using a light microscope. A Bayesian classifier is used for pixel labeling and a geometric analysis routine is employed to segment neuron regions in each section. The 3D tissue sample is reconstructed using registered neuron regions from each section. An object oriented database management system provides an experimental framework for cataloging neuron classes. Experimental results are presented and compared with results obtained by a histologist.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"531 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123575919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic measurement of vertebral shape using active shape models","authors":"P. Smyth, C. Taylor, J. Adams","doi":"10.1109/ACV.1996.572050","DOIUrl":"https://doi.org/10.1109/ACV.1996.572050","url":null,"abstract":"The authors describe how active shape models (ASMs) have been used to accurately and robustly locate vertebrae in lateral dual energy X-ray absorptiometry (DXA) images of the spine. DXA images are of low spatial resolution, and contain significant random and structural noise, providing a difficult challenge for object location methods. All vertebrae in the image were searched for simultaneously, improving robustness in location of individual vertebrae by making use of constraints on shape provided by the position of other vertebrae. They show that the use of ASMs with minimal user interaction allows accuracy to be obtained which is as good as that achievable by human operators using a standard manual method.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116353638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A computer vision system to detect 3-D rectangular solids","authors":"Kashi Rao","doi":"10.1109/ACV.1996.571992","DOIUrl":"https://doi.org/10.1109/ACV.1996.571992","url":null,"abstract":"We present a computer vision system to detect 3D rectangular objects. We first describe the method to detect rectangular solids in real video/images in arbitrary orientations, positions, distances from the camera and lighting. The method works by detecting junctions and adjacent edges of rectangular solids. If a rough reference image of the background is available, that can also be used. We have tested our system on several hundreds of real images and video sequences. In particular, we evaluated the system performance by plotting receiver operating characteristic carves (probability of detection versus probability of false alarm). These curves were plotted for results on 500 images and video sequences acquired an a scene with rich background structure; that is, the scene had a large number of background lines and rectangles. In such an environment, we achieved 93% detection at a 13% false alarm rate. Potential applications of this system include detection of packing boxes, trailers of trucks and rectangular buildings. This system could be used for video indexing or for video surveillance in a security monitoring system.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126928817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pose estimation of artificial knee implants in fluoroscopy images using a template matching technique","authors":"W. Hoff, R. Komistek, D. Dennis, S. Walker, E. Northcut, K. Spargo","doi":"10.1109/ACV.1996.572053","DOIUrl":"https://doi.org/10.1109/ACV.1996.572053","url":null,"abstract":"The paper describes an algorithm to estimate the position and orientation (pose) of artificial knee implants from X-ray fluoroscopy images using computer vision. The resulting information is used to determine the kinematics of bone motion in implanted knees. This determination can be used to support the development of improved prosthetic knee implants, which currently have a limited life span due to premature wear of the polyethylene material at the joint surface. The algorithm determines the full 6 degree of freedom translation and rotation of knee components. This is necessary for artificial knees which have shown significant rotation out of the sagittal plane, in particular internal/external rotations. By creating a library of images of components at known orientation and performing a template matching technique, the 3D pose of the femoral and tibial components are determined. The entire process, when used at certain knee angles, gives a representation of the positions in contact during normal knee motion.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130737586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Passive navigation using focus of expansion","authors":"A. Branca, E. Stella, A. Distante","doi":"10.1109/ACV.1996.572001","DOIUrl":"https://doi.org/10.1109/ACV.1996.572001","url":null,"abstract":"The goal of this work is to propose a method to solve the problem of passive navigation with visual means. The method is a two stage approach: matching of feature extracted from 2D images of a sequence at different times and egomotion parameter computation. Both algorithms are based on a least-square-error technique to minimize appropriate energy functions. Experimental results obtained in real context show the robustness of the method.","PeriodicalId":222106,"journal":{"name":"Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV'96","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116422734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}