Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

On plane-based camera calibration: A general algorithm, singularities, applications
P. Sturm, S. Maybank
DOI: 10.1109/CVPR.1999.786974 · Pages 432-437 Vol. 1 · Published 1999-06-23
Abstract: We present a general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes. The algorithm can simultaneously calibrate different views from a camera with variable intrinsic parameters, and it is easy to incorporate known values of intrinsic parameters. For some minimal cases, we describe all singularities, naming the parameters that cannot be estimated. Experimental results of our method are shown that exhibit the singularities while revealing good performance in non-singular conditions. Several applications of plane-based 3D geometry inference are discussed as well.
Citations: 665
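The core constraint behind plane-based calibration is that each world-plane-to-image homography restricts the image of the absolute conic. The following is a minimal numpy sketch of that idea, assuming constant intrinsics and homographies estimated beforehand (e.g. by DLT); the function names are illustrative, and the paper's general algorithm additionally handles variable intrinsics, known parameters, and the singular configurations it catalogues.

```python
# Minimal sketch of the central constraint in plane-based calibration:
# each plane-to-image homography H = [h1 h2 h3] satisfies
#   h1^T w h2 = 0   and   h1^T w h1 = h2^T w h2,   with  w = K^-T K^-1.
# Assumes constant intrinsics and homographies estimated beforehand (e.g. DLT).
import numpy as np

def _v(H, i, j):
    # Row of the linear system encoding h_i^T w h_j in terms of
    # w's six distinct entries (w11, w12, w22, w13, w23, w33).
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def intrinsics_from_homographies(Hs):
    """Estimate K from a list of >= 3 plane homographies (3x3 arrays)."""
    A = []
    for H in Hs:
        A.append(_v(H, 0, 1))                 # orthogonality constraint
        A.append(_v(H, 0, 0) - _v(H, 1, 1))   # equal-norm constraint
    _, _, Vt = np.linalg.svd(np.asarray(A))
    b = Vt[-1]                                # w up to scale
    w = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if w[0, 0] < 0:                           # fix the overall sign
        w = -w
    L = np.linalg.cholesky(w)                 # w = L L^T with L = K^-T
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```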
Edge detector evaluation using empirical ROC curves
K. Bowyer, C. Kranenburg, Sean Dougherty
DOI: 10.1109/CVPR.1999.786963 · Pages 354-359 Vol. 1 · Published 1999-06-23
Abstract: A method is demonstrated to evaluate edge detector performance using receiver operating characteristic (ROC) curves. It involves matching edges to manually specified ground truth to count true positive and false positive detections. Edge detector parameter settings are trained and tested on different images, and aggregate test ROC curves are presented for two sets of 10 images. The performance of eight different edge detectors is compared. The Canny and Heitger detectors provide the best performance.
Citations: 412
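To reproduce the flavor of this evaluation, the hedged Python sketch below sweeps an edge detector's threshold and scores each operating point against a hand-labelled ground-truth edge map. The per-pixel matching with a dilation tolerance is a simplification of the matching used in the paper; function and parameter names are illustrative.

```python
# Sketch of empirical ROC points for an edge detector: sweep the detector's
# threshold and compare each binary edge map to manually specified ground truth.
# A ground-truth pixel counts as recovered if a detection lies within `tol` pixels.
import numpy as np
from scipy import ndimage

def roc_points(edge_strength, gt_edges, thresholds, tol=1):
    """Return a list of (false_positive_rate, true_positive_rate) pairs."""
    gt_region = ndimage.binary_dilation(gt_edges, iterations=tol)
    points = []
    for t in thresholds:
        detected = edge_strength >= t
        det_region = ndimage.binary_dilation(detected, iterations=tol)
        tp = np.logical_and(gt_edges, det_region).sum()   # matched true edges
        fp = np.logical_and(detected, ~gt_region).sum()   # detections far from any edge
        tpr = tp / max(gt_edges.sum(), 1)
        fpr = fp / max((~gt_region).sum(), 1)
        points.append((fpr, tpr))
    return points
```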
A new structure-from-motion ambiguity
J. Oliensis
DOI: 10.1109/CVPR.1999.786937 · Pages 185-191 Vol. 1 · Published 1999-06-23
Abstract: This paper demonstrates the existence of a generic approximate ambiguity in Euclidean structure from motion (SFM) which applies to scenes with large depth variation. In projective SFM the ambiguity is absent, but the maximum-likelihood reconstruction is more likely to have occasional very large errors. The analysis gives a semi-quantitative characterization of the least-squares error surface over a domain complementary to that analyzed by Jepson/Heeger/Maybank.
Citations: 43
Deformable shape detection and description via model-based region grouping
S. Sclaroff, Lifeng Liu
DOI: 10.1109/CVPR.1999.784603 · Pages 21-27 Vol. 2 · Published 1999-06-23
Abstract: A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate, e.g., texture, color, or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.
Citations: 151
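The minimum description length principle mentioned in the abstract trades model complexity against how well a hypothesis explains the pixels it claims. The snippet below is only a generic illustration of that trade-off under a Gaussian noise assumption; the coding model, constants, and candidate structure are placeholders, not the paper's formulation.

```python
# Generic illustration of an MDL-style score for competing interpretations:
# bits to encode the shape/deformation parameters plus bits to encode the
# residual pixels they leave unexplained (Gaussian noise model assumed).
import numpy as np

def mdl_cost(residuals, n_model_params, bits_per_param=16, noise_sigma=5.0):
    """Approximate description length (in bits) of one hypothesis."""
    model_bits = n_model_params * bits_per_param
    data_bits = 0.5 * np.sum(residuals ** 2) / (noise_sigma ** 2 * np.log(2))
    return model_bits + data_bits

def pick_interpretation(candidates):
    """Choose the candidate grouping with the smallest description length.

    Each candidate is a dict with 'residuals' (array) and 'n_params' (int).
    """
    return min(candidates, key=lambda c: mdl_cost(c["residuals"], c["n_params"]))
```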
Unifying boundary and region-based information for geodesic active tracking
N. Paragios, R. Deriche
DOI: 10.1109/CVPR.1999.784648 · Pages 300-305 Vol. 2 · Published 1999-06-23
Abstract: This paper addresses the problem of tracking several non-rigid objects over a sequence of frames acquired from a static observer, using boundary and region-based information under a coupled geodesic active contour framework. Given the current frame, a statistical analysis is performed on the observed difference frame, which provides a measurement that distinguishes between the static and mobile regions in terms of conditional probabilities. An objective function is defined that integrates the boundary-based and region-based modules by seeking curves that are attracted to the object boundaries and maximize the a posteriori segmentation probability on the interior curve regions with respect to intensity and motion properties. This function is minimized using a gradient descent method. The associated Euler-Lagrange PDE is implemented using a level-set approach, where a very fast front propagation algorithm evolves the initial curve towards the final tracking result. Very promising experimental results are provided using real video sequences.
Citations: 110
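As a point of reference for the level-set machinery the abstract mentions, here is a small, uncoupled sketch of one explicit geodesic-active-contour update step in numpy. It omits the paper's region-based term, the difference-frame statistics, and the fast front propagation; scikit-image ships a related routine (skimage.segmentation.morphological_geodesic_active_contour) if a ready-made implementation is preferred.

```python
# One explicit update of a geodesic active contour embedded in a level set phi:
#   d(phi)/dt = g * |grad(phi)| * (curvature + nu)
# where g is an edge-stopping function (small on strong image edges).
# This sketch leaves out the region-based term the paper couples with it.
import numpy as np

def evolve_level_set(phi, g, dt=0.1, nu=1.0):
    gy, gx = np.gradient(phi)                     # axis 0 = y, axis 1 = x
    grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    ny_dy, _ = np.gradient(gy / grad_norm)        # d(n_y)/dy
    _, nx_dx = np.gradient(gx / grad_norm)        # d(n_x)/dx
    curvature = nx_dx + ny_dy                     # div(grad(phi) / |grad(phi)|)
    return phi + dt * g * grad_norm * (curvature + nu)
```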
Progressive probabilistic Hough transform for line detection
C. Galambos, J. Kittler, Jiri Matas
DOI: 10.1109/CVPR.1999.786993 · Pages 554-560 Vol. 1 · Published 1999-06-23
Abstract: We present a novel Hough transform algorithm referred to as the Progressive Probabilistic Hough Transform (PPHT). Unlike the probabilistic HT, where the standard HT is performed on a pre-selected fraction of input points, PPHT minimises the amount of computation needed to detect lines by exploiting the difference in the fraction of votes needed to reliably detect lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the probabilistic HT; it is a function of the inherent complexity of the input data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection are interleaved. The most salient features are likely to be detected first. Experiments show that in many circumstances PPHT has advantages over the standard HT.
Citations: 193
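This algorithm is easy to try today: OpenCV's HoughLinesP implements a probabilistic Hough transform in this family (the OpenCV documentation cites Matas, Galambos, and Kittler). A minimal usage sketch follows; the file name and parameter values are placeholders to tune per image.

```python
# Minimal usage sketch of OpenCV's probabilistic Hough line detector.
# The input file name and all thresholds below are placeholders.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                    # edge map fed to the voting stage
lines = cv2.HoughLinesP(edges,
                        rho=1,                     # distance resolution (pixels)
                        theta=np.pi / 180,         # angular resolution (radians)
                        threshold=50,              # minimum votes for a line
                        minLineLength=30,          # discard very short segments
                        maxLineGap=5)              # bridge small gaps on a line
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(img, (x1, y1), (x2, y2), 255, 1)  # draw detected segments
```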
Spatial filter selection for illumination-invariant color texture discrimination
Bea Thai, Glenn Healey
DOI: 10.1109/CVPR.1999.784623 · Pages 154-159 Vol. 2 · Published 1999-06-23
Abstract: Color texture contains a large amount of spectral and spatial structure that can be exploited for recognition. Recent work has demonstrated that spatial filters offer a convenient means of extracting illumination-invariant spatial information from a color image. In this paper, we address the problem of deriving optimal filters for illumination-invariant color texture discrimination. Color textures are represented by a set of illumination-invariant features that characterize the color distribution of a filtered image region. Given a pair of color textures, we derive a spatial filter that maximizes the distance between these textures in feature space. We provide a method for using the pair-wise result to obtain a filter that maximizes discriminability among multiple classes. A set of experiments on a database of deterministic and random color textures obtained under different illumination conditions demonstrates the improved discriminatory power achieved by using an optimized filter.
Citations: 0
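The selection step can be illustrated generically: score each candidate spatial filter by how well its response features separate the texture classes and keep the best one. The features and the separability score below are simple stand-ins (the paper instead derives the optimal filter analytically from its illumination-invariant features); all names are illustrative.

```python
# Generic sketch of choosing a spatial filter that best separates two sets of
# color texture patches in a feature space.  Mean/std response statistics are
# crude stand-ins for the paper's illumination-invariant features.
import numpy as np
from scipy.signal import convolve2d

def filter_features(patch, kernel):
    """Per-channel mean and std of the filter response, stacked into a vector."""
    feats = []
    for c in range(patch.shape[2]):
        r = convolve2d(patch[:, :, c], kernel, mode="valid")
        feats += [r.mean(), r.std()]
    return np.array(feats)

def select_filter(patches_a, patches_b, candidate_kernels):
    """Return the kernel with the largest between-class / within-class ratio."""
    best, best_score = None, -np.inf
    for k in candidate_kernels:
        fa = np.array([filter_features(p, k) for p in patches_a])
        fb = np.array([filter_features(p, k) for p in patches_b])
        spread = fa.std(axis=0).mean() + fb.std(axis=0).mean() + 1e-8
        score = np.linalg.norm(fa.mean(axis=0) - fb.mean(axis=0)) / spread
        if score > best_score:
            best, best_score = k, score
    return best
```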
Separating reflections and lighting using independent components analysis
H. Farid, E. Adelson
DOI: 10.1109/CVPR.1999.786949 · Pages 262-267 Vol. 1 · Published 1999-06-23
Abstract: The image of an object can vary dramatically depending on lighting, specularities/reflections and shadows. It is often advantageous to separate these incidental variations from the intrinsic aspects of an image. This paper describes how the statistical tool of independent components analysis can be used to separate some of these incidental components. We describe the details of this method and show its efficacy with examples of separating reflections off glass, and separating the relative contributions of individual light sources.
Citations: 153
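To see the idea in practice, independent components analysis is readily available in scikit-learn. The sketch below feeds two grayscale photographs, in which two layers (for example a reflection and the scene behind the glass) are mixed with different weights, into FastICA and recovers the layers up to scale and sign. File names are placeholders, and the paper's acquisition setup and per-source details are not reproduced here.

```python
# Hedged sketch: separate two mixed image layers with FastICA.
# Assumes two registered grayscale images of the same scene in which the
# layers appear with different mixing weights.
import numpy as np
from sklearn.decomposition import FastICA
from imageio.v3 import imread

mix1 = imread("mix1.png").astype(float)             # placeholder file names
mix2 = imread("mix2.png").astype(float)
if mix1.ndim == 3:                                  # collapse RGB to grayscale
    mix1, mix2 = mix1.mean(axis=2), mix2.mean(axis=2)

X = np.stack([mix1.ravel(), mix2.ravel()], axis=1)  # one observed mixture per column
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)                      # recovered layers (up to scale/sign)
layer1 = sources[:, 0].reshape(mix1.shape)
layer2 = sources[:, 1].reshape(mix1.shape)
```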
Shape from video
T. Brodský, C. Fermüller, Y. Aloimonos
DOI: 10.1109/CVPR.1999.784622 · Pages 146-151 Vol. 2 · Published 1999-06-23
Abstract: This paper presents a novel technique for recovering the shape of a static scene from a video sequence taken by a rigidly moving camera. The solution procedure consists of two stages. In the first stage, the rigid motion of the camera at each instant in time is recovered. This provides the transformation between successive viewing positions. The solution is achieved through new constraints which relate 3D motion and shape directly to the image derivatives. These constraints allow the processes of 3D motion estimation and segmentation to be combined by exploiting the geometry and statistics inherent in the data. In the second stage the scene surfaces are reconstructed through an optimization procedure which utilizes data from all the frames of the video sequence. A number of experimental results demonstrate the potential of the approach.
Citations: 16
A probabilistic framework for embedded face and facial expression recognition
A. Colmenarez, B. Frey, Thomas S. Huang
DOI: 10.1109/CVPR.1999.786999 · Pages 592-597 Vol. 1 · Published 1999-06-23
Abstract: We present a Bayesian recognition framework in which a model of the whole face is enhanced by models of facial feature positions and appearances. Face recognition and facial expression recognition are carried out using maximum-likelihood decisions. The algorithm finds the model and facial expression that maximize the likelihood of a test image. In this framework, facial appearance matching is improved by facial expression matching. Also, changes in facial features due to expressions are used together with facial deformation patterns to jointly perform expression recognition. In our current implementation, the face is divided into 9 facial features grouped in 4 regions, which are detected and tracked automatically in video segments. The feature images are modeled using Gaussian distributions on a principal component subspace. The training procedure is supervised; we use video segments of people in which the facial expressions have been segmented and labeled by hand. We report results on face and facial expression recognition using a video database of 18 people and 6 expressions.
Citations: 51
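The classification rule described in the abstract (Gaussian models over a principal-component subspace, decided by maximum likelihood) can be sketched compactly with scikit-learn and scipy. Everything below (class layout, feature vectors, parameter values) is illustrative and omits the paper's facial-feature detection, tracking, and joint face/expression structure.

```python
# Sketch of a maximum-likelihood classifier with one Gaussian-on-PCA-subspace
# model per class.  Feature extraction, tracking, and the joint face/expression
# factorization from the paper are omitted.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import multivariate_normal

class GaussianSubspaceModel:
    def __init__(self, n_components=10):
        self.pca = PCA(n_components=n_components)

    def fit(self, images):
        """images: (n_samples, n_pixels) array of vectorized training images."""
        coeffs = self.pca.fit_transform(images)
        self.mean = coeffs.mean(axis=0)
        self.cov = np.cov(coeffs, rowvar=False) + 1e-6 * np.eye(coeffs.shape[1])
        return self

    def log_likelihood(self, image):
        coeff = self.pca.transform(image[None, :])[0]
        return multivariate_normal.logpdf(coeff, self.mean, self.cov)

def classify(image, models):
    """Maximum-likelihood decision over a dict mapping class name -> model."""
    return max(models, key=lambda name: models[name].log_likelihood(image))
```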