{"title":"Classification trees for fast segmentation of DTI brain fiber tracts","authors":"Gali Zimmerman-Moreno, Arnaldo Mayer, H. Greenspan","doi":"10.1109/CVPRW.2008.4562998","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562998","url":null,"abstract":"A method is proposed for modeling and classification of White Matter fiber tracts in the brain. The presented scheme uses classification trees in conjunction with a spatial representation of the individual fibers in order to capture the characteristic behavior of fibers belonging to a specific anatomical structure. The method is characterized by high classification speed: under 3 seconds for all the fibers in a typical brain DTI scan. The model is able to represent complex geometric structures and has an intuitive interpretation. Encouraging results are demonstrated for tract classification on real data from ten different subjects.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124926682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face verification on color images using local features","authors":"M. Villegas, Roberto Paredes Palacios, Alfons Juan-Císcar, E. Vidal","doi":"10.1109/CVPRW.2008.4563123","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563123","url":null,"abstract":"In this paper we propose a probabilistic model for the local features technique, which provides a methodology for improving this approach. In addition, a method for compensating for color variability in images is adapted to the local feature model. Finally, an experimental study is conducted to evaluate the performance of the local features approach in challenging situations, such as partially occluded images and only one training image per user. The experimental results are competitive with state-of-the-art algorithms even under these extreme conditions.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121904409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliable detection of core and delta in fingerprints by using singular candidate method","authors":"T. Ohtsuka, Daisuke Watanabe, Daisuke Tomizawa, Yuta Hasegawa, H. Aoki","doi":"10.1109/CVPRW.2008.4563119","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563119","url":null,"abstract":"The singular points of fingerprints, namely, the core and delta, are important reference points for fingerprint classification. Several conventional approaches, such as the Poincare index method, have been proposed; however, these approaches cannot reliably detect singular points in poor-quality fingerprints. In this paper, we propose a new core and delta detection method based on singular candidate analysis using an extended relational graph. In order to use both the local and global features of the ridge direction patterns and to realize a method with high tolerance to local image noise, singular candidate analysis is adopted in the detection process; this analysis extracts locations where the probability of a singular point existing is high. Experimental results on the fingerprint image databases FVC2000 and FVC2002, which contain several poor-quality images, show that the success rate of this approach is 10% higher than that of the Poincare index method, although the average computation time is 15%-30% greater.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"2676 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122646448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking objects in 6D for reconstructing static scenes","authors":"Agnes Swadzba, Niklas Beuter, Joachim Schmidt, G. Sagerer","doi":"10.1109/CVPRW.2008.4563155","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563155","url":null,"abstract":"This paper focuses on two aspects of a human-robot interaction scenario: detection and tracking of moving objects, e.g., persons, which is necessary for localizing possible interaction partners, and reconstruction of the surroundings, which can be used for navigation and room categorization. Although these processes can be addressed independently of each other, we show that exchanging the available data between them enables a more exact reconstruction of the static scene. A 6D data representation consisting of 3D Time-of-Flight (ToF) sensor data and computed 3D velocities allows segmenting the scene into clusters with consistent velocities. A weak object model is applied to localize and track objects within a particle filter framework. As a consequence, points originating from moving objects can be neglected during reconstruction. Experiments demonstrate enhanced reconstruction results in comparison to pure bottom-up methods, especially for very short image sequences.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122083428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-based perceptual grouping and shape abstraction","authors":"Pablo Sala, Sven J. Dickinson","doi":"10.1109/CVPRW.2008.4562979","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562979","url":null,"abstract":"Contour features are re-emerging in the categorization community as it moves from appearance back to shape. However, the classical assumption of one-to-one correspondence between an extracted image contour and a model contour constrains category models to be highly brittle, offering little abstraction between image and model. Moreover, today's contour-based models are category-specific, offering no mechanism for contour grouping and abstraction in the absence of an object prior. We present a novel framework for recovering a set of abstract parts from a multi-scale contour image. Given a user-specified part vocabulary and an image to be analyzed, the system covers the image with abstract part models drawn from the vocabulary. More importantly, correspondence between image contours and part contours is many-to-one, yielding a powerful shape abstraction mechanism. We illustrate the strengths and weaknesses of this work in progress on a set of anecdotal scenes.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123810472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Particle filtering with rendered models: A two pass approach to multi-object 3D tracking with the GPU","authors":"E. Murphy-Chutorian, M. Trivedi","doi":"10.1109/CVPRW.2008.4563102","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563102","url":null,"abstract":"We describe a new approach to vision-based 3D object tracking, using appearance-based particle filters to follow 3D model reconstructions. This method is targeted towards modern graphics processors, which are optimized for 3D reconstruction and are capable of highly parallel computation. We discuss an OpenGL implementation of this approach, which uses two rendering passes to update the particle filter weights. In the first pass, the system renders the previous object state estimates to an off-screen framebuffer. In the second pass, the system uses a programmable vertex shader to compute the mean normalized cross-correlation between each sample and the subsequent video frame. The particle filters are updated using the correlation scores and provide a full 3D track of the objects. We provide examples for tracking human heads in both single and multi-camera scenarios.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131572046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multivariate analysis of thalamo-cortical connectivity loss in TBI","authors":"J. Duda, B. Avants, Junghoon J. Kim, Hui Zhang, Sunil Patel, J. Whyte, J. Gee","doi":"10.1109/CVPRW.2008.4562992","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562992","url":null,"abstract":"Diffusion tensor (DT) images quantify connectivity patterns in the brain, while the T1 modality provides high-resolution images of tissue interfaces. Our objective is to use both modalities to build subject-specific, quantitative models of fiber connections in order to discover effects specific to a neural system. The health of this thalamo-cortical network is compromised by traumatic brain injury, and we hypothesize that these effects are due to a primary injury to the thalamus which results in subsequent compromise of radiating fibers. We first use a population-specific average T1 and DT template to label the thalamus and Brodmann areas (BA) 9, 10, and 11 in each subject. We also build an expected connection model within this template space that is transferred to subject space in order to provide a prior restriction on probabilistic tracking performed in subject space. We evaluate the effect of traumatic brain injury on this prefrontal-thalamus network by quantifying, in 10 subjects and 8 controls, the mean diffusion and fractional anisotropy along fiber tracts, along with the mean diffusion within the thalamus and cortical regions. We contrast results gained by a template-based tract definition with those gained by performing analysis in the subject space. Both approaches reveal connectivity effects of TBI, specifically a region of reduced FA in the white matter connecting the thalamus to BA 10.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126268144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Spatio-Temporal face recognition using dynamic range model sequences","authors":"Yi Sun, L. Yin","doi":"10.1109/CVPRW.2008.4563125","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563125","url":null,"abstract":"Research on 3D face recognition has intensified in recent years. However, most research has focused on static 3D data analysis. In this paper, we investigate the face recognition problem using dynamic 3D face model sequences. Based on our newly created 3D dynamic face database, we propose to use a spatio-temporal hidden Markov model (HMM) which incorporates 3D surface feature characterization to learn the spatial and temporal information of faces. The advantage of using 3D dynamic data for face recognition is evaluated by comparing our approach to three conventional approaches: a 2D video-based temporal HMM, a conventional 2D-texture-based approach (e.g., a Gabor wavelet based approach), and static 3D-model-based approaches.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123137803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TOF imaging in Smart room environments towards improved people tracking","authors":"Sigurjón Árni Guðmundsson, R. Larsen, H. Aanæs, M. Pardàs, J. Casas","doi":"10.1109/CVPRW.2008.4563154","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563154","url":null,"abstract":"In this paper we present the use of time-of-flight (TOF) cameras in smart rooms and show how this leads to improved results in segmenting the people in the room from the background and, consequently, better 3D reconstruction of the people. A calibrated rig of one Swissranger SR3100 time-of-flight range camera and a high-resolution standard camera is set up in a smart room containing 5 other standard cameras. A probabilistic background model is used to segment each view, and a shape-from-silhouette 3D volume is constructed. It is shown that the presence of the range camera provides ways of eliminating regional artifacts and therefore a more robust input for higher-level applications such as people tracking or human motion analysis.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114939563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-based mapping of a nonrigid image registration algorithm to heterogeneous architectures","authors":"Y. Hemaraj, M. Sen, W. Plishker, R. Shekhar, S. Bhattacharyya","doi":"10.1109/CVPRW.2008.4563151","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563151","url":null,"abstract":"This work targets the design of customized accelerators for image registration algorithms, which are required for many important computer vision applications. By capturing key, domain-specific characteristics of application structure, signal-processing-oriented models of computation provide a valuable foundation for structured development of efficient image registration accelerators. Building upon the meta-modeling framework of homogeneous parameterized dataflow, we develop in this paper an approach for automatically generating streamlined implementations of image registration algorithms according to metrics such as image size, area, and overall processing speed. Results from hardware synthesis demonstrate the efficiency of our methods. Our approach provides designers an effective way to explore different architectures and systematically provides acceleration for high-performance nonrigid image registration based on a variety of requirements. Our dataflow-based framework can be adapted to explore different architectures for other kinds of image processing algorithms as well.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115066765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}