Latest publications from the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

3D priors for scene learning from a single view
D. Rother, K. A. Patwardhan, I. Aganj, G. Sapiro
A framework for scene learning from a single still video camera is presented in this work. In particular, the camera transformation and the direction of the shadows are learned using information extracted from pedestrians walking in the scene. The proposed approach poses scene learning as a likelihood maximization problem, efficiently solved via factorization and dynamic programming, and amenable to an online implementation. We introduce a 3D prior to model the pedestrian's appearance from any viewpoint, and learn it using a standard off-the-shelf consumer video camera and the Radon transform. This 3D prior or "appearance model" is used to quantify the agreement between tentative parameters and the actual video observations, taking into account not only the pixels occupied by the pedestrian, but also those occupied by his shadows and/or reflections. The presentation of the framework is complemented with an example of a casual video scene showing the importance of the learned 3D pedestrian prior and the accuracy of the proposed approach.
DOI: 10.1109/CVPRW.2008.4563034
Citations: 8
A methodology for quality assessment in tensor images
E. Muñoz-Moreno, S. Aja‐Fernández, M. Martín-Fernández
As tensors have become increasingly popular in image processing, assessing the quality of tensor images is necessary for evaluating the advanced processing algorithms that deal with this kind of data. In this paper, we present the methodology that should be followed to extend well-known image quality measures to tensor data. Two of these measures, based on structural comparison, are adapted to tensor images and their performance is shown by a set of examples. These experiments highlight the advantages of structure-based measures, as well as the need to consider all the tensor components in the quality assessment.
DOI: 10.1109/CVPRW.2008.4562965
Citations: 1
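As a rough illustration of the structural-comparison idea in the abstract above (not the authors' actual measure), an SSIM-style index can be applied to each tensor component and averaged, so that every component contributes to the score. All names and constants here are illustrative:

```python
import numpy as np

def tensor_ssim(T1, T2, C1=1e-4, C2=9e-4):
    """Hypothetical sketch: a global SSIM-style structural index
    computed per tensor component and averaged over components.
    T1, T2: arrays of shape (H, W, C) with C tensor components per pixel."""
    scores = []
    for c in range(T1.shape[-1]):
        a, b = T1[..., c], T2[..., c]
        mu_a, mu_b = a.mean(), b.mean()
        va, vb = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        # luminance * (contrast + structure) terms, SSIM-style
        s = ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
            ((mu_a**2 + mu_b**2 + C1) * (va + vb + C2))
        scores.append(s)
    return float(np.mean(scores))
```

Identical images score exactly 1; distorting any single component lowers the average, which is the point the abstract makes about considering all tensor components.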
Active sampling via tracking
P. Roth, H. Bischof
To learn an object detector, labeled training data is required. Since unlabeled training data is often given as an image sequence, we propose a tracking-based approach to minimize the manual effort when learning an object detector. The main idea is to apply a tracker within an active on-line learning framework for selecting and labeling unlabeled samples. For that purpose, the current classifier is evaluated on a test image and the obtained detection result is verified by the tracker. In this way the most valuable samples can be estimated and used for updating the classifier. Thus, the number of needed samples can be reduced and an incrementally better detector is obtained. To enable efficient learning (i.e., real-time performance) and to assure robust tracking results, we apply on-line boosting for both learning and tracking. If the tracker can be initialized automatically, no user interaction is needed and we have an autonomous learning/labeling system. In the experiments the approach is evaluated in detail for learning a face detector. In addition, to show its generality, results for completely different objects are also presented.
DOI: 10.1109/CVPRW.2008.4563069
Citations: 10
Adaptive color classification for structured light systems
P. Fechteler, P. Eisert
We present a system to capture high-accuracy 3D models of faces by taking just one photo, without the need for specialized hardware: just a consumer-grade digital camera and projector. The proposed 3D face scanner utilizes structured light techniques: a colored pattern is projected onto the face of interest while a photo is taken. Then, the 3D geometry is calculated based on the distortions of the pattern detected in the face. This is performed by triangulating the pattern found in the captured image with the projected one.
DOI: 10.1109/CVPRW.2008.4563048
Citations: 93
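The triangulation step described in the abstract above is, in its simplest form, a ray-plane intersection: each projected stripe sweeps out a calibrated light plane, and the 3D point lies where the camera ray through the detected pixel meets that plane. A minimal sketch, assuming a camera at the origin and illustrative names:

```python
import numpy as np

def triangulate_ray_plane(pixel_ray, plane_normal, plane_point):
    """Intersect a camera ray through the origin with one stripe's
    light plane (given by a normal and a point on the plane).
    Illustrative sketch of classic structured-light triangulation."""
    n = np.asarray(plane_normal, float)
    d = np.asarray(pixel_ray, float)
    # solve n . (t * d - plane_point) = 0 for the ray parameter t
    t = n.dot(np.asarray(plane_point, float)) / n.dot(d)
    return t * d  # the reconstructed 3D surface point
```

In a real system `pixel_ray` comes from the camera intrinsics and the plane parameters from projector calibration; degenerate near-parallel configurations would also need guarding.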
Embedded contours extraction for high-speed scene dynamics based on a neuromorphic temporal contrast vision sensor
A. Belbachir, M. Hofstätter, Nenad Milosevic, P. Schön
The paper presents a compact vision system for efficient contour extraction in high-speed applications. By exploiting the ultra-high temporal resolution and the sparse representation of the sensor's data in reacting to scene dynamics, the system fosters efficient embedded computer vision for ultra-high-speed applications. The results reported in this paper show the sensor output quality for a wide range of object velocities (5-40 m/s), and demonstrate that the object data volume is independent of the velocity and that the object quality remains steady. The influence of object velocity on high-performance embedded computer vision is also discussed.
DOI: 10.1109/CVPRW.2008.4563153
Citations: 4
Gromov-Hausdorff distances in Euclidean spaces
Facundo Mémoli
The purpose of this paper is to study the relationship between measures of dissimilarity between shapes in Euclidean space. We first concentrate on the pair Gromov-Hausdorff distance (GH) versus Hausdorff distance under the action of Euclidean isometries (EH). Then, we (1) show they are comparable in a precise sense that is not the linear behaviour one would expect, and (2) explain the source of this phenomenon via explicit constructions. Finally, (3) by conveniently modifying the expression for the GH distance, we recover the EH distance. This allows us to uncover a connection that links the problem of computing GH and EH with the family of Euclidean Distance Matrix completion problems. The second pair of dissimilarity notions we study is the so-called Lp-Gromov-Hausdorff distance versus the Earth Mover's distance under the action of Euclidean isometries. We obtain results about comparability in this situation as well.
DOI: 10.1109/CVPRW.2008.4563074
Citations: 83
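For reference, the symmetric Hausdorff distance between finite point clouds, the "H" in the EH distance of the abstract above before minimizing over Euclidean isometries, can be computed directly. A brute-force sketch (not the paper's algorithm):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point clouds A (n x d)
    and B (m x d): the larger of the two directed distances
    max_a min_b |a - b| and max_b min_a |a - b|."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(),   # directed A -> B
               D.min(axis=0).max())   # directed B -> A
```

The EH distance would additionally minimize this quantity over all rigid motions applied to one of the clouds, which is the harder optimization the paper relates to GH.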
Entropy-based active learning for object recognition
Alex Holub, P. Perona, M. Burl
Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based "active learning" approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing the image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to a 10x reduction in the number of training examples needed) over baseline techniques.
DOI: 10.1109/CVPRW.2008.4563068
Citations: 252
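A minimal sketch of an entropy criterion for choosing which image to label next. Note this uses plain predictive entropy per sample, a common simplification of the expected-information-gain criterion the abstract describes (which scores the gain over the whole unlabeled pool):

```python
import numpy as np

def pick_most_informative(probs):
    """Given a classifier's predicted class probabilities for each
    unlabeled image (one row per image), return the index of the
    sample with maximum predictive entropy, i.e. the one the
    classifier is most uncertain about."""
    p = np.clip(probs, 1e-12, 1.0)          # guard log(0)
    H = -(p * np.log(p)).sum(axis=1)        # Shannon entropy per row
    return int(H.argmax())
```

In an active-learning loop this index is handed to the oracle for labeling, the classifier is retrained, and the selection repeats.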
A parallel color-based particle filter for object tracking
Henry Medeiros, Johnny Park, A. Kak
Porting well-known computer vision algorithms to low-power, high-performance computing devices such as SIMD linear processor arrays can be a challenging task. One especially useful such algorithm is the color-based particle filter, which has been applied successfully by many research groups to the problem of tracking non-rigid objects. In this paper, we propose an implementation of the color-based particle filter suitable for SIMD processors. The main focus of our work is on the parallel computation of the particle weights. This step is the major bottleneck of standard implementations of the color-based particle filter, since it requires the histograms of the regions surrounding each hypothesized target position. We expect this approach to perform faster on an SIMD processor than an implementation on a standard desktop computer, even at much lower clock speeds.
DOI: 10.1109/CVPRW.2008.4563148
Citations: 41
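The particle-weighting step the abstract identifies as the bottleneck is typically a Bhattacharyya comparison between the target's color histogram and each particle's candidate histogram. A sequential (non-SIMD) sketch with illustrative names; `sigma` is an assumed bandwidth parameter:

```python
import numpy as np

def particle_weights(target_hist, candidate_hists, sigma=0.1):
    """Weight each particle by the similarity between its color
    histogram (one row of candidate_hists) and the target model.
    Histograms are assumed L1-normalized."""
    bc = np.sqrt(candidate_hists * target_hist).sum(axis=1)  # Bhattacharyya coefficient
    d2 = 1.0 - bc                                            # squared Bhattacharyya distance
    w = np.exp(-d2 / (2 * sigma**2))                         # Gaussian-shaped likelihood
    return w / w.sum()                                       # normalized particle weights
```

Each row's weight is independent of the others (until the final normalization), which is what makes this step amenable to the per-particle parallelization the paper targets.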
Codomain scale space and regularization for high angular resolution diffusion imaging
L. Florack
Regularization is an important aspect of high angular resolution diffusion imaging (HARDI), since, unlike with classical diffusion tensor imaging (DTI), there is no a priori regularity of raw data in the codomain, i.e., considered as a multispectral signal for fixed spatial position. HARDI preprocessing is therefore a crucial step prior to any subsequent analysis, and some insight into regularization paradigms and their interrelations is indispensable. In this paper we posit a codomain scale space regularization paradigm that has hitherto not been applied in the context of HARDI. Unlike previous (first- and second-order) schemes it is based on infinite-order regularization, yet can be fully operationalized. We furthermore establish a closed-form relation with first-order Tikhonov regularization via the Laplace transform.
DOI: 10.1109/CVPRW.2008.4562967
Citations: 21
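As background for the Tikhonov connection mentioned in the abstract above (this is the standard textbook formulation, not the paper's own derivation), first-order Tikhonov regularization of a signal $f$ seeks

```latex
\hat{u} \;=\; \operatorname*{arg\,min}_{u} \int_{\Omega} \left[ \,(u - f)^{2} \;+\; \lambda\, \|\nabla u\|^{2} \right] dx ,
```

whose solution in the Fourier domain is the low-pass filter

```latex
\widehat{\hat{u}}(\omega) \;=\; \frac{\widehat{f}(\omega)}{1 + \lambda \,\|\omega\|^{2}} ,
```

i.e. a damping of high frequencies controlled by the single parameter $\lambda$. An infinite-order scheme, by contrast, penalizes all derivative orders at once; the paper's contribution is relating the two in closed form.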
Rotational flows for interpolation between sampled surfaces
J. Levy, M. Foskey, S. Pizer
We introduce a locally defined, shape-maintaining method for interpolating between corresponding oriented samples (vertices) from a pair of surfaces. We have applied this method to interpolate synthetic data sets in two and three dimensions and to interpolate medially represented shape models of anatomical objects in three dimensions. In the plane, each oriented vertex follows a circular arc as if it were rotating to its destination. In three dimensions, each oriented vertex moves along a helical path that combines in-plane rotation with translation along the axis of rotation. We show that our planar method provides shape-maintaining interpolations when the reference and target objects are similar. Moreover, the interpolations are size-maintaining when the reference and target objects are congruent. In three dimensions, similar objects are interpolated by an affine transformation. We use measurements of the fractional anisotropy of such global affine transformations to demonstrate that our method is generally more shape-preserving than the alternative of interpolating vertices along linear paths irrespective of changes in orientation. In both two and three dimensions we have experimental evidence that when non-shape-preserving deformations are applied to template shapes, the interpolation tends to be visually satisfying, with each intermediate object appearing to belong to the same class of objects as the end points.
DOI: 10.1109/CVPRW.2008.4563017
Citations: 5
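The planar case described in the abstract above (each oriented vertex following a circular arc) can be sketched as follows: recover the unique planar rotation taking the start pose to the end pose, then evaluate that rotation at fraction t. Names are illustrative, and the pure-translation case falls back to linear interpolation:

```python
import numpy as np

def rotational_interp(p0, th0, p1, th1, t):
    """Circular-arc interpolation of a 2D oriented vertex from pose
    (p0, th0) to pose (p1, th1), evaluated at fraction t in [0, 1]."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    dth = th1 - th0
    if abs(dth) < 1e-9:                       # no rotation: straight-line path
        return p0 + t * (p1 - p0), th0

    def rot(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s], [s, c]])

    R = rot(dth)
    # fixed point c of the rotation taking p0 to p1:  (I - R) c = p1 - R p0
    c = np.linalg.solve(np.eye(2) - R, p1 - R @ p0)
    p = c + rot(t * dth) @ (p0 - c)           # rotate p0 about c by t * dth
    return p, th0 + t * dth
```

At t = 0 and t = 1 the endpoint poses are reproduced exactly, and intermediate positions trace the circular arc about the rotation's fixed point, rather than the chord a linear interpolation would follow.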