2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops: Latest Publications

Robust real-time 3D modeling of static scenes using solely a Time-of-Flight sensor
J. Feulner, J. Penne, E. Kollorz, J. Hornegger
{"title":"Robust real-time 3D modeling of static scenes using solely a Time-of-Flight sensor","authors":"J. Feulner, J. Penne, E. Kollorz, J. Hornegger","doi":"10.1109/CVPRW.2009.5205204","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5205204","url":null,"abstract":"An algorithm is proposed for the 3D modeling of static scenes solely based on the range and intensity data acquired by a time-of-flight camera during an arbitrary movement. No additional scene acquisition devices, like inertia sensor, positioning robots or intensity based cameras are incorporated. The current pose is estimated by maximizing the uncentered correlation coefficient between edges detected in the current and a preceding frame at a minimum frame rate of four fps and an average accuracy of 45 mm. The paper also describes several extensions for robust registration like multiresolution hierarchies and projection Iterative Closest Point algorithm. The basic registration algorithm and its extensions were intensively evaluated against ground truth data to validate the accuracy, robustness and real-time-capability.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115035570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
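The pose criterion quoted above, the uncentered correlation coefficient between edge images, is simply the cosine similarity of the flattened edge maps. Below is a minimal sketch of that criterion under assumed inputs (synthetic frames, numpy gradient magnitudes as a stand-in for edge detection); the authors' edge extraction, pose search, and ICP extensions are not reproduced.

```python
import numpy as np

def uncentered_correlation(edges_a: np.ndarray, edges_b: np.ndarray) -> float:
    """Cosine (uncentered correlation) between two edge-magnitude images.

    Returns a value in [0, 1] for non-negative edge maps; 1 means the
    edge patterns coincide up to a positive scale factor.
    """
    a = edges_a.ravel().astype(np.float64)
    b = edges_b.ravel().astype(np.float64)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float(np.dot(a, b) / denom)

# Toy usage: compare gradient-magnitude "edges" of two synthetic frames.
rng = np.random.default_rng(0)
frame_prev = rng.random((120, 160))
frame_curr = np.roll(frame_prev, shift=1, axis=1)   # simulated small camera motion
gy, gx = np.gradient(frame_prev)
edges_prev = np.hypot(gx, gy)
gy, gx = np.gradient(frame_curr)
edges_curr = np.hypot(gx, gy)
print(uncentered_correlation(edges_prev, edges_curr))
```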
Transparent watermarking using bidirectional imaging
Kristin J. Dana, G. Livescu, R. Makonahalli
{"title":"Transparent watermarking using bidirectional imaging","authors":"Kristin J. Dana, G. Livescu, R. Makonahalli","doi":"10.1109/CVPRW.2009.5204321","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204321","url":null,"abstract":"We present a method for transparent watermarking using a custom bidirectional imaging device. The two innovative concepts of our approach are reflectance coding and multiview imaging. In reflectance coding, information is embedded in the angular space of the bidirectional reflectance distribution function (BRDF) and this information can vary at each surface point. In order to achieve a transparent watermark, reflectance coding is implemented using a spatial variation of the Brewster angle. The novel multiview imaging method measures the reflectance over a range of viewing and illumination angles in order to instantly reveal the unknown Brewster angle. Unlike typical in-lab measurements of the Brewster angle or the refractive index, this method does not require accurate prior knowledge of the surface normal so that imaging in non-lab conditions is feasible. Furthermore, a range of incident angles are examined simultaneously, eliminating the need for scanning incidence angles. The approach is well-suited for transparent watermarking where the observer cannot see the watermark because it is comprised of spatial variations of refractive index. The transparency and angular coding of the watermark has great utility in deterring counterfeit attempts. In this paper, we present the imaging device and demonstrate it's effectiveness in detecting and measuring changes in refractive index. This device acts as the decoder in a transparent watermark system.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117172727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
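The decoding step described above rests on the Brewster relation theta_B = arctan(n2 / n1): p-polarized reflection vanishes at that incidence angle, so locating it reveals the local refractive index. The sketch below only illustrates this textbook relation with hypothetical index values; the bidirectional imaging device and per-point reflectance measurement are not modeled.

```python
import numpy as np

def brewster_angle_deg(n2: float, n1: float = 1.0) -> float:
    """Brewster angle (degrees) for light travelling from medium n1 into n2."""
    return float(np.degrees(np.arctan2(n2, n1)))

def refractive_index_from_brewster(theta_b_deg: float, n1: float = 1.0) -> float:
    """Invert the relation: n2 = n1 * tan(theta_B)."""
    return float(n1 * np.tan(np.radians(theta_b_deg)))

# A spatial variation of refractive index maps to a spatial variation of the
# Brewster angle -- the quantity the multiview device is said to read out.
for n2 in (1.49, 1.52, 1.56):   # hypothetical coating indices
    theta = brewster_angle_deg(n2)
    print(f"n2={n2:.2f}  theta_B={theta:.2f} deg  "
          f"recovered n2={refractive_index_from_brewster(theta):.2f}")
```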
Auditory dialog analysis and understanding by generative modelling of interactional dynamics
M. Cristani, Anna Pesarin, C. Drioli, A. Tavano, A. Perina, Vittorio Murino
{"title":"Auditory dialog analysis and understanding by generative modelling of interactional dynamics","authors":"M. Cristani, Anna Pesarin, C. Drioli, A. Tavano, A. Perina, Vittorio Murino","doi":"10.1109/CVPRW.2009.5204265","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204265","url":null,"abstract":"In the last few years, the interest in the analysis of human behavioral schemes has dramatically grown, in particular for the interpretation of the communication modalities called social signals. They represent well defined interaction patterns, possibly unconscious, characterizing different conversational situations and behaviors in general. In this paper, we illustrate an automatic system based on a generative structure able to analyze conversational scenarios. The generative model is composed by integrating a Gaussian mixture model and the (observed) influence model, and it is fed with a novel kind of simple low-level auditory social signals, which are termed steady conversational periods (SCPs). These are built on duration of continuous slots of silence or speech, taking also into account conversational turn-taking. The interactional dynamics built upon the transitions among SCPs provide a behavioral blueprint of conversational settings without relying on segmental or continuous phonetic features. Our contribution here is to show the effectiveness of our model when applied on dialogs classification and clustering tasks, considering dialogs between adults and between children and adults, in both flat and arguing discussions, and showing excellent performances also in comparison with state-of-the-art frameworks.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123268718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
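As a rough illustration of the low-level feature, the sketch below collapses a per-frame speech/silence labeling into contiguous runs and their durations, which is one plausible reading of steady conversational periods; the authors' exact SCP definition (including turn-taking handling) and the GMM plus influence model are not reproduced.

```python
from itertools import groupby

def steady_periods(labels, frame_len_s=0.01):
    """Collapse a per-frame speech(1)/silence(0) sequence into runs.

    Returns a list of (label, duration_in_seconds) tuples -- a crude
    stand-in for steady conversational periods (SCPs).
    """
    return [(lab, sum(1 for _ in run) * frame_len_s)
            for lab, run in groupby(labels)]

# Toy usage: 1 = somebody is speaking, 0 = silence, 10 ms frames.
frames = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
for label, dur in steady_periods(frames):
    print(("speech" if label else "silence"), f"{dur:.2f} s")
```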
Posture invariant gender classification for 3D human models
S. Wuhrer, Chang Shu, M. Rioux
{"title":"Posture invariant gender classification for 3D human models","authors":"S. Wuhrer, Chang Shu, M. Rioux","doi":"10.1109/CVPRW.2009.5204295","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204295","url":null,"abstract":"We study the behaviorally important task of gender classification based on the human body shape. We propose a new technique to classify by gender human bodies represented by possibly incomplete triangular meshes obtained using laser range scanners. The classification algorithm is invariant of the posture of the human body. Geodesic distances on the mesh are used for classification. Our results indicate that the geodesic distances between the chest and the wrists and the geodesic distances between the lower back and the face are the most important ones for gender classification. The classification is shown to perform well for different postures of the human subjects. We model the geodesic distance distributions as Gaussian distributions and compute the quality of the classification for three standard methods in pattern recognition: linear discriminant functions, Bayesian discriminant functions, and support vector machines. All of the experiments yield high classification accuracy. For instance, when support vector machines are used, the classification accuracy is at least 93% for all of our experiments. This shows that geodesic distances are suitable to discriminate humans by gender.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117007756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
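To make the classification pipeline concrete, here is a hedged sketch in which each body is reduced to a short vector of geodesic distances (chest to wrist, lower back to face, as highlighted above) and fed to a linear SVM with cross-validation. The geodesic distances on the mesh are assumed to be computed elsewhere; the feature values below are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for per-subject geodesic distances (metres):
# column 0: chest -> wrist, column 1: lower back -> face.
n_per_class = 50
female = rng.normal(loc=[0.78, 0.62], scale=0.03, size=(n_per_class, 2))
male = rng.normal(loc=[0.86, 0.70], scale=0.03, size=(n_per_class, 2))

X = np.vstack([female, male])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = female, 1 = male

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy on synthetic features: {scores.mean():.2%}")
```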
Tunable tensor voting improves grouping of membrane-bound macromolecules
Leandro A. Loss, G. Bebis, B. Parvin
{"title":"Tunable tensor voting improves grouping of membrane-bound macromolecules","authors":"Leandro A. Loss, G. Bebis, B. Parvin","doi":"10.1109/CVPRW.2009.5204047","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204047","url":null,"abstract":"Membrane-bound macromolecules are responsible for structural support and mediation of cell-cell adhesion in tissues. Quantitative analysis of these macromolecules provides morphological indices for damage or loss of tissue, for example as a result of exogenous stimuli. From an optical point of view, a membrane signal may have nonuniform intensity around the cell boundary, be punctate or diffused, and may even be perceptual at certain locations along the boundary. In this paper, a method for the detection and grouping of punctate, diffuse curvilinear signals is proposed. Our work builds upon the tensor voting and the iterative voting frameworks to propose an efficient method to detect and refine perceptually interesting curvilinear structures in images. The novelty of our method lies on the idea of iteratively tuning the tensor voting fields, which allows the concentration of the votes only over areas of interest. We validate the utility of our system with synthetic and annotated real data. The effectiveness of the tunable tensor voting is demonstrated on complex phenotypic signals that are representative of membrane-bound macromolecular structures.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128465438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
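For intuition about the tensor representation this entry builds on, the sketch below assembles a smoothed 2x2 tensor field from image gradients and extracts the stick saliency lambda1 - lambda2, which highlights curvilinear structure. This is only the saliency decomposition on an assumed structure-tensor field; the voting fields, their iterative tuning, and the grouping step of the paper are not implemented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stick_saliency(tensor_field: np.ndarray) -> np.ndarray:
    """Per-pixel stick saliency (lambda1 - lambda2) of a 2x2 tensor field.

    tensor_field has shape (H, W, 2, 2) with symmetric positive
    semi-definite tensors; larger values indicate stronger oriented
    (curve-like) evidence.
    """
    eigvals = np.linalg.eigvalsh(tensor_field)   # ascending eigenvalues
    return eigvals[..., 1] - eigvals[..., 0]

# Toy field: locally averaged structure tensors from image gradients.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
img[30:34, :] = 1.0                              # a horizontal bar
gy, gx = np.gradient(img.astype(np.float64))
T = np.empty(img.shape + (2, 2))
T[..., 0, 0] = gx * gx
T[..., 0, 1] = T[..., 1, 0] = gx * gy
T[..., 1, 1] = gy * gy
for i in range(2):                               # smooth each tensor component
    for j in range(2):
        T[..., i, j] = uniform_filter(T[..., i, j], size=5)
sal = stick_saliency(T)
print("max stick saliency:", float(sal.max()))
```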
High level activity recognition using low resolution wearable vision
Sudeep Sundaram, W. Mayol-Cuevas
{"title":"High level activity recognition using low resolution wearable vision","authors":"Sudeep Sundaram, W. Mayol-Cuevas","doi":"10.1109/CVPRW.2009.5204355","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204355","url":null,"abstract":"This paper presents a system aimed to serve as the enabling platform for a wearable assistant. The method observes manipulations from a wearable camera and classifies activities from roughly stabilized low resolution images (160 × 120 pixels) with the help of a 3-level Dynamic Bayesian Network and adapted temporal templates. Our motivation is to explore robust but computationally inexpensive visual methods to perform as much activity inference as possible without resorting to more complex object or hand detectors. The description of the method and results obtained are presented, as well as the motivation for further work in the area of wearable visual sensing.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126969137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 60
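The temporal templates mentioned above are commonly realized as motion history images; below is a generic, hedged sketch of that representation on a synthetic sequence. The paper's adapted variant, the image stabilization, and the 3-level Dynamic Bayesian Network are not reproduced.

```python
import numpy as np

def motion_history_image(frames, tau=10, threshold=0.05):
    """Standard motion history image over a list of grayscale frames.

    Pixels where inter-frame change exceeds `threshold` are set to `tau`;
    elsewhere the previous value decays by 1. Recent motion appears bright.
    """
    mhi = np.zeros_like(frames[0], dtype=np.float64)
    prev = frames[0].astype(np.float64)
    for frame in frames[1:]:
        cur = frame.astype(np.float64)
        moving = np.abs(cur - prev) > threshold
        mhi = np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))
        prev = cur
    return mhi

# Toy usage on a 160x120 sequence with a drifting bright square.
frames = []
for t in range(12):
    f = np.zeros((120, 160))
    f[40:60, 20 + 5 * t: 40 + 5 * t] = 1.0
    frames.append(f)
print("nonzero MHI pixels:", int(np.count_nonzero(motion_history_image(frames))))
```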
A user-friendly method to geometrically calibrate projector-camera systems
S. Audet, M. Okutomi
{"title":"A user-friendly method to geometrically calibrate projector-camera systems","authors":"S. Audet, M. Okutomi","doi":"10.1109/CVPRW.2009.5204319","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204319","url":null,"abstract":"Projector-camera systems drive applications in many fields such as measurement and spatial augmented reality. When needed, we can find their internal and external parameters via geometric calibration. For this process, we have to use both a printed pattern and a projector pattern, but they can easily interfere with each other. Current methods compensate by decoupling their calibrations or by leveraging structured light and color channels, but the required manipulations are not user-friendly. Therefore, we cannot expect normal users to execute the procedure, which can also become a burden for researchers. Although not always required, knowledge of the geometric parameters can often facilitate development of new systems. To make the calibration process easier, we propose a method that uses fiducial markers, from which we can easily derive a prewarp that, once applied to the projector calibration pattern, prevents its interference. Using our method, we confirmed that users can easily calibrate a projector-camera system in less than one minute, which we consider to be user-friendly, while still achieving typical subpixel accuracy.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126973018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 109
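A hedged sketch of the prewarp idea: once the mapping between projector-pattern coordinates and their desired placement on the calibration board is known as a homography (estimated here from four hypothetical corner correspondences), the projector pattern can be warped before projection. This is only an illustration with OpenCV; the paper's fiducial-marker detection and full calibration pipeline are not reproduced.

```python
import numpy as np
import cv2

# Hypothetical correspondences: where the four corners of the projector
# pattern should land so that the projected pattern does not interfere
# with the printed board (values here are made up).
src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
dst = np.float32([[40, 30], [600, 10], [620, 470], [20, 450]])
H = cv2.getPerspectiveTransform(src, dst)        # 3x3 prewarp homography

# Stand-in projector calibration pattern: a 40 px checkerboard.
yy, xx = np.indices((480, 640))
pattern = (((yy // 40 + xx // 40) % 2) * 255).astype(np.uint8)

prewarped = cv2.warpPerspective(pattern, H, (640, 480))
print("prewarp homography:\n", np.round(H, 3))
print("prewarped pattern shape:", prewarped.shape)
```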
Illustrating motion through DLP photography
S. Koppal, S. Narasimhan
{"title":"Illustrating motion through DLP photography","authors":"S. Koppal, S. Narasimhan","doi":"10.1109/CVPRW.2009.5204315","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204315","url":null,"abstract":"Strobe-light photography creates beautiful high-frequency effects by capturing multiple object copies. Single-chip DLP projectors produce a similar effect, with two important distinctions. Firstly, strobing occurs at different frequencies: at 10000 Hz, due to the DMD chip, and at 120 Hz, due to the colorwheel. Secondly, DLP illumination lacks the perception of `on-off' flashing that characterizes a strobe-light, since these frequencies are beyond human perception. While higher spatial frequencies are preserved in a motion-blurred DLP photograph, deblurring is still difficult, especially for articulated and deformable objects, since the deconvolution kernel can be different at each pixel. Instead we process DLP photographs to create new images that either summarize a dynamic scene or illustrate its motion. We conclude by discussing the frequencies present in DLP photographs, comparing them to images taken under skylight and fluorescent light.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121107289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
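The two strobing frequencies quoted above directly determine how many illuminated object copies accumulate during one exposure: roughly exposure time times strobe frequency. A small, idealized calculation (ignoring duty cycle and color-wheel segment layout):

```python
def flash_count(exposure_s: float, strobe_hz: float) -> int:
    """Approximate number of illumination pulses within one exposure."""
    return int(exposure_s * strobe_hz)

exposure_s = 1 / 15          # a 1/15 s photograph
for label, hz in (("color wheel", 120.0), ("DMD mirror flips", 10_000.0)):
    print(f"{label}: ~{flash_count(exposure_s, hz)} pulses per exposure")
```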
Image registration by minimization of Mapping Complexity
A. Myronenko, Xubo B. Song
{"title":"Image registration by minimization of Mapping Complexity","authors":"A. Myronenko, Xubo B. Song","doi":"10.1109/CVPRW.2009.5204345","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204345","url":null,"abstract":"The criterion for the correct spatial alignment is a key component in image registration. We formulate the registration problem as one that finds the spatial and intensity mappings of minimal complexity that make images exactly equal. We do not assume any parametric forms of these functions, and estimate them within variational calculus. We analytically solve for non-stationary intensity mapping, eliminate it from the objective function and arrive with a new similarity measure. We name it the mapping complexity (MC) similarity measure, because it achieves the optimum when intensity and spatial mappings are of minimal complexity. Due to its general formulation, the similarity measure works both for complex intensity relationships (e.g. multimodal registration) and for spatially-varying intensity distortions. Our similarity measure can be interpreted as the one that favors one image to lie mostly within a span of the leading eigenvectors of the kernel matrix, where the kernel matrix is constructed from the second image. We introduce a fast algorithm to compute the similarity measure. In particular, we introduce a fast kernel vector product (FKVP) algorithm, which is of general interest in computer vision. We demonstrate the accuracy of the new similarity measure on several mono- and multi-modal examples with complex intensity non-uniformities.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124416014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
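One way to read the eigenvector interpretation above: build a kernel matrix from the second image, keep its leading eigenvectors, and score how much of the first image lies in their span. The sketch below does this on tiny images with a Gaussian kernel over intensities; it is an illustrative interpretation only, not the authors' MC measure or their FKVP algorithm.

```python
import numpy as np

def span_energy(img_a: np.ndarray, img_b: np.ndarray, k: int = 20,
                sigma: float = 0.1) -> float:
    """Fraction of img_a's energy inside the span of the top-k eigenvectors
    of a Gaussian kernel matrix built from img_b's intensities.

    Values near 1 suggest img_a is well explained by img_b's structure.
    Small images only: the kernel matrix is (H*W) x (H*W).
    """
    a = img_a.ravel().astype(np.float64)
    b = img_b.ravel().astype(np.float64)
    diff = b[:, None] - b[None, :]
    K = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))   # kernel from image b
    _, eigvecs = np.linalg.eigh(K)                  # ascending eigenvalues
    U = eigvecs[:, -k:]                             # leading eigenvectors
    proj = U @ (U.T @ a)                            # projection onto span(U)
    return float(np.dot(proj, proj) / np.dot(a, a))

rng = np.random.default_rng(3)
img2 = rng.random((16, 16))
img1_aligned = 0.5 * img2 + 0.2                     # intensity-remapped, aligned copy
img1_misaligned = np.roll(img1_aligned, 5, axis=0)  # same intensities, shifted
print("aligned   :", round(span_energy(img1_aligned, img2), 3))
print("misaligned:", round(span_energy(img1_misaligned, img2), 3))
```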
Measuring changes in face appearance through aging
M. Ortega, L. Brodo, M. Bicego, M. Tistarelli
{"title":"Measuring changes in face appearance through aging","authors":"M. Ortega, L. Brodo, M. Bicego, M. Tistarelli","doi":"10.1109/CVPRW.2009.5204302","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204302","url":null,"abstract":"Facial aging has been only partially studied in the past and mostly in a qualitative way. This paper presents a novel approach to the estimation of facial aging aimed to the quantitative evaluation of the changes in facial appearance over time. In particular, the changes both in face shape and texture, due to short-time aging, are considered. The developed framework exploits the concept of “distinctiveness” of facial features and the temporal evolution of such measure. The analysis is performed both at a global and local level to define the features which are more stable over time. Several experiments are performed on publicly available databases with image sequences densely sampled over a time span of several years. The reported results clearly show the potential of the methodology to a number of applications in biometric identification from human faces.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126346310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6