{"title":"Robust real-time 3D modeling of static scenes using solely a Time-of-Flight sensor","authors":"J. Feulner, J. Penne, E. Kollorz, J. Hornegger","doi":"10.1109/CVPRW.2009.5205204","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5205204","url":null,"abstract":"An algorithm is proposed for the 3D modeling of static scenes solely based on the range and intensity data acquired by a time-of-flight camera during an arbitrary movement. No additional scene acquisition devices, like inertia sensor, positioning robots or intensity based cameras are incorporated. The current pose is estimated by maximizing the uncentered correlation coefficient between edges detected in the current and a preceding frame at a minimum frame rate of four fps and an average accuracy of 45 mm. The paper also describes several extensions for robust registration like multiresolution hierarchies and projection Iterative Closest Point algorithm. The basic registration algorithm and its extensions were intensively evaluated against ground truth data to validate the accuracy, robustness and real-time-capability.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115035570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparent watermarking using bidirectional imaging","authors":"Kristin J. Dana, G. Livescu, R. Makonahalli","doi":"10.1109/CVPRW.2009.5204321","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204321","url":null,"abstract":"We present a method for transparent watermarking using a custom bidirectional imaging device. The two innovative concepts of our approach are reflectance coding and multiview imaging. In reflectance coding, information is embedded in the angular space of the bidirectional reflectance distribution function (BRDF) and this information can vary at each surface point. In order to achieve a transparent watermark, reflectance coding is implemented using a spatial variation of the Brewster angle. The novel multiview imaging method measures the reflectance over a range of viewing and illumination angles in order to instantly reveal the unknown Brewster angle. Unlike typical in-lab measurements of the Brewster angle or the refractive index, this method does not require accurate prior knowledge of the surface normal so that imaging in non-lab conditions is feasible. Furthermore, a range of incident angles are examined simultaneously, eliminating the need for scanning incidence angles. The approach is well-suited for transparent watermarking where the observer cannot see the watermark because it is comprised of spatial variations of refractive index. The transparency and angular coding of the watermark has great utility in deterring counterfeit attempts. In this paper, we present the imaging device and demonstrate it's effectiveness in detecting and measuring changes in refractive index. This device acts as the decoder in a transparent watermark system.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117172727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Auditory dialog analysis and understanding by generative modelling of interactional dynamics","authors":"M. Cristani, Anna Pesarin, C. Drioli, A. Tavano, A. Perina, Vittorio Murino","doi":"10.1109/CVPRW.2009.5204265","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204265","url":null,"abstract":"In the last few years, the interest in the analysis of human behavioral schemes has dramatically grown, in particular for the interpretation of the communication modalities called social signals. They represent well defined interaction patterns, possibly unconscious, characterizing different conversational situations and behaviors in general. In this paper, we illustrate an automatic system based on a generative structure able to analyze conversational scenarios. The generative model is composed by integrating a Gaussian mixture model and the (observed) influence model, and it is fed with a novel kind of simple low-level auditory social signals, which are termed steady conversational periods (SCPs). These are built on duration of continuous slots of silence or speech, taking also into account conversational turn-taking. The interactional dynamics built upon the transitions among SCPs provide a behavioral blueprint of conversational settings without relying on segmental or continuous phonetic features. Our contribution here is to show the effectiveness of our model when applied on dialogs classification and clustering tasks, considering dialogs between adults and between children and adults, in both flat and arguing discussions, and showing excellent performances also in comparison with state-of-the-art frameworks.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123268718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Posture invariant gender classification for 3D human models","authors":"S. Wuhrer, Chang Shu, M. Rioux","doi":"10.1109/CVPRW.2009.5204295","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204295","url":null,"abstract":"We study the behaviorally important task of gender classification based on the human body shape. We propose a new technique to classify by gender human bodies represented by possibly incomplete triangular meshes obtained using laser range scanners. The classification algorithm is invariant of the posture of the human body. Geodesic distances on the mesh are used for classification. Our results indicate that the geodesic distances between the chest and the wrists and the geodesic distances between the lower back and the face are the most important ones for gender classification. The classification is shown to perform well for different postures of the human subjects. We model the geodesic distance distributions as Gaussian distributions and compute the quality of the classification for three standard methods in pattern recognition: linear discriminant functions, Bayesian discriminant functions, and support vector machines. All of the experiments yield high classification accuracy. For instance, when support vector machines are used, the classification accuracy is at least 93% for all of our experiments. This shows that geodesic distances are suitable to discriminate humans by gender.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117007756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tunable tensor voting improves grouping of membrane-bound macromolecules","authors":"Leandro A. Loss, G. Bebis, B. Parvin","doi":"10.1109/CVPRW.2009.5204047","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204047","url":null,"abstract":"Membrane-bound macromolecules are responsible for structural support and mediation of cell-cell adhesion in tissues. Quantitative analysis of these macromolecules provides morphological indices for damage or loss of tissue, for example as a result of exogenous stimuli. From an optical point of view, a membrane signal may have nonuniform intensity around the cell boundary, be punctate or diffused, and may even be perceptual at certain locations along the boundary. In this paper, a method for the detection and grouping of punctate, diffuse curvilinear signals is proposed. Our work builds upon the tensor voting and the iterative voting frameworks to propose an efficient method to detect and refine perceptually interesting curvilinear structures in images. The novelty of our method lies on the idea of iteratively tuning the tensor voting fields, which allows the concentration of the votes only over areas of interest. We validate the utility of our system with synthetic and annotated real data. The effectiveness of the tunable tensor voting is demonstrated on complex phenotypic signals that are representative of membrane-bound macromolecular structures.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128465438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High level activity recognition using low resolution wearable vision","authors":"Sudeep Sundaram, W. Mayol-Cuevas","doi":"10.1109/CVPRW.2009.5204355","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204355","url":null,"abstract":"This paper presents a system aimed to serve as the enabling platform for a wearable assistant. The method observes manipulations from a wearable camera and classifies activities from roughly stabilized low resolution images (160 × 120 pixels) with the help of a 3-level Dynamic Bayesian Network and adapted temporal templates. Our motivation is to explore robust but computationally inexpensive visual methods to perform as much activity inference as possible without resorting to more complex object or hand detectors. The description of the method and results obtained are presented, as well as the motivation for further work in the area of wearable visual sensing.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126969137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A user-friendly method to geometrically calibrate projector-camera systems","authors":"S. Audet, M. Okutomi","doi":"10.1109/CVPRW.2009.5204319","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204319","url":null,"abstract":"Projector-camera systems drive applications in many fields such as measurement and spatial augmented reality. When needed, we can find their internal and external parameters via geometric calibration. For this process, we have to use both a printed pattern and a projector pattern, but they can easily interfere with each other. Current methods compensate by decoupling their calibrations or by leveraging structured light and color channels, but the required manipulations are not user-friendly. Therefore, we cannot expect normal users to execute the procedure, which can also become a burden for researchers. Although not always required, knowledge of the geometric parameters can often facilitate development of new systems. To make the calibration process easier, we propose a method that uses fiducial markers, from which we can easily derive a prewarp that, once applied to the projector calibration pattern, prevents its interference. Using our method, we confirmed that users can easily calibrate a projector-camera system in less than one minute, which we consider to be user-friendly, while still achieving typical subpixel accuracy.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126973018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illustrating motion through DLP photography","authors":"S. Koppal, S. Narasimhan","doi":"10.1109/CVPRW.2009.5204315","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204315","url":null,"abstract":"Strobe-light photography creates beautiful high-frequency effects by capturing multiple object copies. Single-chip DLP projectors produce a similar effect, with two important distinctions. Firstly, strobing occurs at different frequencies: at 10000 Hz, due to the DMD chip, and at 120 Hz, due to the colorwheel. Secondly, DLP illumination lacks the perception of `on-off' flashing that characterizes a strobe-light, since these frequencies are beyond human perception. While higher spatial frequencies are preserved in a motion-blurred DLP photograph, deblurring is still difficult, especially for articulated and deformable objects, since the deconvolution kernel can be different at each pixel. Instead we process DLP photographs to create new images that either summarize a dynamic scene or illustrate its motion. We conclude by discussing the frequencies present in DLP photographs, comparing them to images taken under skylight and fluorescent light.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121107289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image registration by minimization of Mapping Complexity","authors":"A. Myronenko, Xubo B. Song","doi":"10.1109/CVPRW.2009.5204345","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204345","url":null,"abstract":"The criterion for the correct spatial alignment is a key component in image registration. We formulate the registration problem as one that finds the spatial and intensity mappings of minimal complexity that make images exactly equal. We do not assume any parametric forms of these functions, and estimate them within variational calculus. We analytically solve for non-stationary intensity mapping, eliminate it from the objective function and arrive with a new similarity measure. We name it the mapping complexity (MC) similarity measure, because it achieves the optimum when intensity and spatial mappings are of minimal complexity. Due to its general formulation, the similarity measure works both for complex intensity relationships (e.g. multimodal registration) and for spatially-varying intensity distortions. Our similarity measure can be interpreted as the one that favors one image to lie mostly within a span of the leading eigenvectors of the kernel matrix, where the kernel matrix is constructed from the second image. We introduce a fast algorithm to compute the similarity measure. In particular, we introduce a fast kernel vector product (FKVP) algorithm, which is of general interest in computer vision. We demonstrate the accuracy of the new similarity measure on several mono- and multi-modal examples with complex intensity non-uniformities.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124416014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring changes in face appearance through aging","authors":"M. Ortega, L. Brodo, M. Bicego, M. Tistarelli","doi":"10.1109/CVPRW.2009.5204302","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204302","url":null,"abstract":"Facial aging has been only partially studied in the past and mostly in a qualitative way. This paper presents a novel approach to the estimation of facial aging aimed to the quantitative evaluation of the changes in facial appearance over time. In particular, the changes both in face shape and texture, due to short-time aging, are considered. The developed framework exploits the concept of “distinctiveness” of facial features and the temporal evolution of such measure. The analysis is performed both at a global and local level to define the features which are more stable over time. Several experiments are performed on publicly available databases with image sequences densely sampled over a time span of several years. The reported results clearly show the potential of the methodology to a number of applications in biometric identification from human faces.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126346310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}