Latest publications from the 2009 IEEE Conference on Computer Vision and Pattern Recognition

Disambiguating the recognition of 3D objects
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206683
Gutemberg Guerra-Filho
Abstract: We propose novel algorithms for the detection, segmentation, recognition, and pose estimation of three-dimensional objects. Our approach initially infers geometric primitives to describe the set of 3D objects. A hierarchical structure is constructed to organize the objects in terms of shared primitives and relations between different primitives in the same object. This structure is shown to disambiguate the object models and to improve recognition rates. The primitives are obtained through our new Invariant Hough Transform. This algorithm uses geometric invariants to compute relations for subsets of points in a specific object. Each relation is stored in a hash table according to the invariant value. The hash table is used to find potential corresponding points between objects. With point matches, pose estimation is achieved by building a probability distribution of transformations. We evaluate our methods with experiments using synthetic and real 3D objects.
Citations: 4
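The abstract's Invariant Hough Transform stores a geometric invariant for each subset of object points in a hash table and then looks up matching invariants to propose point correspondences. Below is a minimal sketch of that hashing idea, assuming a simple stand-in invariant (quantized, sorted side lengths of point triples, which is invariant to rigid motion); the paper's actual invariants and relation structure are not reproduced.

```python
# Minimal sketch of invariant-based hashing for point correspondence.
# The pairwise-distance invariant over point triples is an illustrative
# stand-in, not the paper's exact geometric invariants.
import itertools
from collections import defaultdict
import numpy as np

def triple_invariant(p, q, r, quant=0.05):
    """Sorted, quantized side lengths of a triangle: invariant to rigid motion."""
    d = sorted([np.linalg.norm(p - q), np.linalg.norm(q - r), np.linalg.norm(r - p)])
    return tuple(np.round(np.array(d) / quant).astype(int))

def build_table(points):
    """Hash every point triple of a model object by its invariant value."""
    table = defaultdict(list)
    for i, j, k in itertools.combinations(range(len(points)), 3):
        table[triple_invariant(points[i], points[j], points[k])].append((i, j, k))
    return table

def candidate_matches(model_table, query_points):
    """Vote for model triples whose invariant matches a query triple."""
    votes = defaultdict(int)
    for i, j, k in itertools.combinations(range(len(query_points)), 3):
        key = triple_invariant(query_points[i], query_points[j], query_points[k])
        for model_triple in model_table.get(key, []):
            votes[model_triple] += 1
    return votes
```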
Towards high-resolution large-scale multi-view stereo
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206617
Hoang-Hiep Vu, R. Keriven, Patrick Labatut, Jean-Philippe Pons
Abstract: Boosted by the Middlebury challenge, the precision of dense multi-view stereovision methods has increased drastically in the past few years. Yet, most methods, although they perform well on this benchmark, are still inapplicable to large-scale data sets taken under uncontrolled conditions. In this paper, we propose a multi-view stereo pipeline that can handle very large scenes while still producing highly detailed reconstructions within reasonable time. The keys to these benefits are twofold: (i) a minimum s-t cut based global optimization that transforms a dense point cloud into a visibility consistent mesh, followed by (ii) a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization and adaptive resolution. Our method has been tested on numerous large-scale outdoor scenes. The accuracy of our reconstructions is also measured on the recent dense multi-view benchmark proposed by Strecha et al., showing our results to compare more than favorably with the current state-of-the-art.
Citations: 279
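Stage (ii) of the pipeline balances photo-consistency against regularization during mesh refinement. As a small illustration of the kind of photo-consistency score such a refinement typically maximizes between the projections of a surface patch into two images, here is zero-mean normalized cross-correlation (ZNCC); ZNCC is an assumed, commonly used choice, and the warping and gradient machinery of the actual method are omitted.

```python
# Zero-mean normalized cross-correlation between two image patches.
# High values mean the patches are photo-consistent; the score is an
# assumed, common measure, not necessarily the paper's exact functional.
import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    """Return ZNCC in [-1, 1] between two equally sized patches."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```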
Learning rotational features for filament detection
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206511
Germán González, F. Fleuret, P. Fua
Abstract: State-of-the-art approaches for detecting filament-like structures in noisy images rely on filters optimized for signals of a particular shape, such as an ideal edge or ridge. While these approaches are optimal when the image conforms to these ideal shapes, their performance quickly degrades on many types of real data where the image deviates from the ideal model, and when noise processes violate a Gaussian assumption. In this paper, we show that by learning rotational features, we can outperform state-of-the-art filament detection techniques on many different kinds of imagery. More specifically, we demonstrate superior performance for the detection of blood vessels in retinal scans, neurons in brightfield microscopy imagery, and streets in satellite imagery.
Citations: 42
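A minimal sketch of what rotational features can look like in practice: responses of a base ridge-like kernel rotated to several orientations, stacked into a per-pixel feature vector for a learned classifier. The kernel shape, the number of angles, and the downstream classifier are illustrative assumptions, not the features learned in the paper.

```python
# Per-pixel "rotational features": responses of a rotated ridge kernel.
# The kernel and angle set are illustrative assumptions.
import numpy as np
from scipy import ndimage

def ridge_kernel(size=15, sigma=2.0):
    """Simple elongated kernel that responds to a vertical bright line."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(-y**2 / (2 * (4 * sigma)**2))

def rotational_features(image, n_angles=8):
    """Return an H x W x n_angles volume of oriented filter responses."""
    base = ridge_kernel()
    base -= base.mean()                       # zero-mean: flat regions respond ~0
    feats = []
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        k = ndimage.rotate(base, angle, reshape=False)
        feats.append(ndimage.convolve(image.astype(float), k, mode="nearest"))
    return np.stack(feats, axis=-1)

# These per-pixel feature vectors would then feed a classifier (e.g. boosting
# or an SVM) trained on labeled filament / non-filament pixels.
```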
A similarity measure between vector sequences with application to handwritten word image retrieval
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206783
José A. Rodríguez-Serrano, F. Perronnin, J. Lladós, Gemma Sánchez
Abstract: This article proposes a novel similarity measure between vector sequences. Recently, a model-based approach was introduced to address this issue. It consists of modeling each sequence with a continuous Hidden Markov Model (C-HMM) and computing a probabilistic measure of similarity between C-HMMs. In this paper we propose to model sequences with semi-continuous HMMs (SC-HMMs): the Gaussians of the SC-HMMs are constrained to belong to a shared pool of Gaussians. This constraint provides two major benefits. First, the a priori information contained in the common set of Gaussians leads to a more accurate estimate of the HMM parameters. Second, the computation of a probabilistic similarity between two SC-HMMs can be simplified to a Dynamic Time Warping (DTW) between their mixture weight vectors, which significantly reduces the computational cost. Experimental results on a handwritten word retrieval task show that the proposed similarity outperforms the traditional DTW between the original sequences, and the model-based approach which uses C-HMMs. We also show that this increase in accuracy can be traded against a significant reduction in computational cost (up to 100 times).
Citations: 35
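Once each word image is encoded as a sequence of SC-HMM mixture-weight vectors, the proposed similarity reduces to a standard DTW between those vector sequences. The sketch below shows only that DTW step, with Euclidean distance as an assumed local cost; the SC-HMM encoding itself is omitted.

```python
# Standard DTW over two sequences of vectors (e.g. mixture-weight vectors).
import numpy as np

def dtw(seq_a, seq_b, dist=lambda a, b: np.linalg.norm(a - b)):
    """Return the DTW alignment cost between two sequences of vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: two short sequences of 3-dimensional mixture-weight vectors.
a = [np.array(w) for w in [[0.7, 0.2, 0.1], [0.5, 0.4, 0.1], [0.1, 0.8, 0.1]]]
b = [np.array(w) for w in [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]]
print(dtw(a, b))
```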
Coded exposure deblurring: Optimized codes for PSF estimation and invertibility
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206685
Amit K. Agrawal, Yi Xu
Abstract: We consider the problem of single image object motion deblurring from a static camera. It is well-known that deblurring of moving objects using a traditional camera is ill-posed, due to the loss of high spatial frequencies in the captured blurred image. A coded exposure camera modulates the integration pattern of light by opening and closing the shutter within the exposure time using a binary code. The code is chosen to make the resulting point spread function (PSF) invertible, for best deconvolution performance. However, for a successful deconvolution algorithm, PSF estimation is as important as PSF invertibility. We show that PSF estimation is easier if the resulting motion blur is smooth and the optimal code for PSF invertibility could worsen PSF estimation, since it leads to non-smooth blur. We show that both criteria of PSF invertibility and PSF estimation can be met simultaneously, albeit with a slight increase in the deconvolution noise. We propose design rules for a code to have good PSF estimation capability and outline two search criteria for finding the optimal code for a given length. We present theoretical analysis comparing the performance of the proposed code with the code optimized solely for PSF invertibility. We also show how to easily implement coded exposure on a consumer grade machine vision camera with no additional hardware. Real experimental results demonstrate the effectiveness of the proposed codes for motion deblurring.
Citations: 89
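The invertibility of a coded-exposure PSF is tied to how far the code's frequency response stays from zero: a plain box exposure has deep spectral nulls, while a well-chosen binary code does not. The sketch below scores codes by the minimum magnitude of their zero-padded DFT; both the scoring proxy and the 52-chip code shown are illustrative assumptions, not the optimized codes or search criteria from the paper.

```python
# Score a binary exposure code by the minimum magnitude of its DFT:
# larger minima mean a better-conditioned (more invertible) blur PSF.
import numpy as np

def invertibility_score(code, n_freq=256):
    """Minimum magnitude of the code's zero-padded DFT; larger is better."""
    spectrum = np.abs(np.fft.fft(np.asarray(code, dtype=float), n=n_freq))
    return spectrum.min()

box_code = [1] * 52                                       # traditional open shutter
coded_shutter = [1,0,1,0,0,1,1,1,0,0,0,1,0,1,1,0,1,1,0,0,  # arbitrary 52-chip binary
                 1,1,1,0,1,0,0,1,1,0,1,0,0,1,1,1,0,0,1,0,  # code, for illustration
                 1,0,1,1,0,1,0,0,1,1,0,1]                   # only
print(invertibility_score(box_code), invertibility_score(coded_shutter))
```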
Motion pattern interpretation and detection for tracking moving vehicles in airborne video
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206541
Qian Yu, G. Medioni
Abstract: Detection and tracking of moving vehicles in airborne videos is a challenging problem. Many approaches have been proposed to improve motion segmentation on frame-by-frame and pixel-by-pixel bases; however, little attention has been paid to analyzing the long-term motion pattern, which is a distinctive property of moving vehicles in airborne videos. In this paper, we provide a straightforward geometric interpretation of a general motion pattern in 4D space (x, y, vx, vy). We propose to use the tensor voting computational framework to detect and segment such motion patterns in 4D space. Specifically, in airborne videos, we analyze the essential difference in motion patterns caused by parallax and independent moving objects, which leads to a practical method for segmenting motion patterns (flows) created by moving vehicles in stabilized airborne videos. The flows are used in turn to facilitate detection and tracking of each individual object in the flow. Conceptually, this approach is similar to “track-before-detect” techniques, which incorporate temporal information into the process as early as possible. As shown in the experiments, many difficult cases in airborne videos, such as parallax, noisy background modeling and long-term occlusions, can be addressed by our approach.
Citations: 58
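The paper groups samples in the 4D space (x, y, vx, vy) using tensor voting. As a rough stand-in for that grouping step, the sketch below builds 4D samples from point tracks and clusters them with DBSCAN; this is an assumed substitute technique, not the tensor-voting framework itself.

```python
# Rough sketch: segment motion flows by clustering samples in (x, y, vx, vy).
# DBSCAN is an assumed stand-in for the paper's tensor-voting grouping.
import numpy as np
from sklearn.cluster import DBSCAN

def motion_samples(tracks):
    """tracks: list of (T, 2) arrays of image positions over time."""
    samples = []
    for tr in tracks:
        vel = np.diff(tr, axis=0)                 # finite-difference velocities
        samples.append(np.hstack([tr[1:], vel]))  # one (x, y, vx, vy) per frame
    return np.vstack(samples)

def segment_flows(tracks, eps=3.0, min_samples=10):
    X = motion_samples(tracks)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return X, labels                              # label -1 marks noise/outliers
```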
Fast multiple shape correspondence by pre-organizing shape instances
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/cvpr.2009.5206611
B. Munsell, Andrew Temlyakov, Song Wang
Abstract: Accurately identifying corresponded landmarks from a population of shape instances is the major challenge in constructing statistical shape models. In general, shape-correspondence methods can be grouped into one of two categories: global methods and pair-wise methods. In this paper, we develop a new method that attempts to address the limitations of both the global and pair-wise methods. In particular, we reorganize the input population into a tree structure that incorporates global information about the population of shape instances, where each node in the tree represents a shape instance and each edge connects two very similar shape instances. Using this organized tree, neighboring shape instances can be corresponded efficiently and accurately by a pair-wise method. In the experiments, we evaluate the proposed method, compare its performance to five available shape-correspondence methods, and show that the proposed method achieves the accuracy of a global method with the speed of a pair-wise method.
Citations: 21
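A minimal sketch of the pre-organizing step: compute pairwise distances between shape instances and connect them with a spanning tree whose edges join very similar instances, so a pair-wise correspondence method only ever has to match near-identical shapes along tree edges. The mean closest-point distance used here is an assumed placeholder for the paper's similarity measure.

```python
# Organize shape instances into a tree whose edges connect similar shapes.
# The shape distance is an illustrative placeholder.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def shape_distance(a, b):
    """Symmetric mean closest-point distance between two point sets."""
    d = cdist(a, b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def organize_shapes(shapes):
    """Return tree edges (i, j) along which pair-wise correspondence is run."""
    n = len(shapes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = shape_distance(shapes[i], shapes[j])
    mst = minimum_spanning_tree(D)        # spanning tree over the population
    return list(zip(*mst.nonzero()))
```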
Towards a practical face recognition system: Robust registration and illumination by sparse representation
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206654
Andrew Wagner, John Wright, Arvind Ganesh, Zihan Zhou, Yi Ma
Abstract: Most contemporary face recognition algorithms work well under laboratory conditions but degrade when tested in less-controlled environments. This is mostly due to the difficulty of simultaneously handling variations in illumination, alignment, pose, and occlusion. In this paper, we propose a simple and practical face recognition system that achieves a high degree of robustness and stability to all these variations. We demonstrate how to use tools from sparse representation to align a test face image with a set of frontal training images in the presence of significant registration error and occlusion. We thoroughly characterize the region of attraction for our alignment algorithm on public face datasets such as Multi-PIE. We further study how to obtain a sufficient set of training illuminations for linearly interpolating practical lighting conditions. We have implemented a complete face recognition system, including a projector-based training acquisition system, in order to evaluate how our algorithms work under practical testing conditions. We show that our system can efficiently and effectively recognize faces under a variety of realistic conditions, using only frontal images under the proposed illuminations as training.
Citations: 211
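The recognition stage rests on sparse representation: a registered, illumination-normalized test face is coded as a sparse combination of the training images and assigned to the class with the smallest reconstruction residual. The sketch below shows that classification step with an l1-regularized solver from scikit-learn as an assumed optimizer; the alignment loop and the explicit occlusion/error term of the full system are omitted.

```python
# Sparse-representation classification of an aligned test face.
# Lasso is an assumed l1 solver; the alignment/occlusion handling is omitted.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train_imgs, train_labels, test_img, alpha=0.01):
    """train_imgs: (n_train, n_pixels); test_img: (n_pixels,); returns a label."""
    A = np.asarray(train_imgs, dtype=float).T              # dictionary: pixels x images
    A /= np.linalg.norm(A, axis=0, keepdims=True)           # unit-norm columns
    y = np.asarray(test_img, dtype=float)
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y).coef_
    labels = np.asarray(train_labels)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)                  # keep class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)                # smallest residual wins
```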
A nonparametric Riemannian framework for processing high angular resolution diffusion images (HARDI)
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206843
A. Goh, C. Lenglet, P. Thompson, R. Vidal
Abstract: High angular resolution diffusion imaging has become an important magnetic resonance technique for in vivo imaging. Most current research in this field focuses on developing methods for computing the orientation distribution function (ODF), which is the probability distribution function of water molecule diffusion along any angle on the sphere. In this paper, we present a Riemannian framework to carry out computations on an ODF field. The proposed framework does not require that the ODFs be represented by any fixed parameterization, such as a mixture of von Mises-Fisher distributions or a spherical harmonic expansion. Instead, we use a non-parametric representation of the ODF, and exploit the fact that under the square-root re-parameterization, the space of ODFs forms a Riemannian manifold, namely the unit Hilbert sphere. Specifically, we use Riemannian operations to perform various geometric data processing algorithms, such as interpolation, convolution and linear and nonlinear filtering. We illustrate these concepts with numerical experiments on synthetic and real datasets.
Citations: 36
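Under the square-root re-parameterization, a discretely sampled ODF becomes a unit vector on the Hilbert sphere, so Riemannian operations such as interpolation reduce to great-circle computations. The sketch below illustrates this with a slerp-style geodesic interpolation between two ODFs; the discrete sampling and simple normalization are simplifying assumptions.

```python
# Square-root representation of ODFs and geodesic (great-circle) interpolation.
import numpy as np

def to_sqrt(odf):
    """Square-root map of a discretely sampled ODF (normalized to sum to 1)."""
    p = np.asarray(odf, dtype=float)
    p = p / p.sum()
    return np.sqrt(p)                     # unit vector on the Hilbert sphere

def geodesic_interp(odf_a, odf_b, t):
    """Interpolate two ODFs along the great circle joining their sqrt maps."""
    psi_a, psi_b = to_sqrt(odf_a), to_sqrt(odf_b)
    theta = np.arccos(np.clip(psi_a @ psi_b, -1.0, 1.0))   # geodesic distance
    if np.isclose(theta, 0.0):
        return np.asarray(odf_a, dtype=float) / np.sum(odf_a)
    psi_t = (np.sin((1 - t) * theta) * psi_a + np.sin(t * theta) * psi_b) / np.sin(theta)
    return psi_t ** 2                                       # map back to an ODF
```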
Efficient planar graph cuts with applications in Computer Vision
2009 IEEE Conference on Computer Vision and Pattern Recognition | Pub Date: 2009-06-20 | DOI: 10.1109/CVPR.2009.5206863
Frank R. Schmidt, Eno Töppe, D. Cremers
Abstract: We present a fast graph cut algorithm for planar graphs. It builds on graph-theoretical work and leads to an efficient method that we apply to shape matching and image segmentation. In contrast to currently used methods in computer vision, the presented approach provides an upper bound for its runtime behavior that is almost linear. In particular, we are able to match two different planar shapes of N points in O(N² log N) and segment a given image of N pixels in O(N log N). We present two experimental benchmark studies which demonstrate that the presented method is also in practice faster than previously proposed graph cut methods: on planar shape matching and image segmentation we observe a speed-up of an order of magnitude, depending on resolution.
Citations: 83
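For context, the sketch below sets up the kind of problem the planar solver addresses: binary image segmentation as an s-t min-cut on the 4-connected (planar) grid graph, solved here with a generic max-flow routine from networkx. The unary and pairwise capacities are illustrative assumptions; the paper's contribution is a planar-specific algorithm that computes such cuts in roughly O(N log N).

```python
# Binary segmentation as an s-t min-cut on the planar 4-connected grid graph.
# A generic max-flow solver is used here; capacities are illustrative.
import networkx as nx
import numpy as np

def segment(image, fg_thresh=0.7, bg_thresh=0.3, smooth=1.0):
    """image: 2D array with values in [0, 1]; returns a boolean foreground mask."""
    h, w = image.shape
    G = nx.DiGraph()
    for y in range(h):
        for x in range(w):
            v = float(image[y, x])
            # Unary terms: strong ties to source/sink for confident pixels.
            G.add_edge("s", (y, x), capacity=10.0 if v > fg_thresh else v)
            G.add_edge((y, x), "t", capacity=10.0 if v < bg_thresh else 1.0 - v)
            # Pairwise smoothness terms on the planar 4-connected grid.
            for dy, dx in ((0, 1), (1, 0)):
                y2, x2 = y + dy, x + dx
                if y2 < h and x2 < w:
                    G.add_edge((y, x), (y2, x2), capacity=smooth)
                    G.add_edge((y2, x2), (y, x), capacity=smooth)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    mask = np.zeros((h, w), dtype=bool)
    for node in source_side:
        if node != "s":
            mask[node] = True
    return mask
```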