Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

Progressive probabilistic Hough transform for line detection
C. Galambos, J. Kittler, Jiri Matas
{"title":"Progressive probabilistic Hough transform for line detection","authors":"C. Galambos, J. Kittler, Jiri Matas","doi":"10.1109/CVPR.1999.786993","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786993","url":null,"abstract":"We present a novel Hough Transform algorithm referred to as Progressive Probabilistic Hough Transform (PPHT). Unlike the Probabilistic HT where Standard HT is performed on a pre-selected fraction of input points, PPHT minimises the amount of computation needed to detect lines by exploiting the difference an the fraction of votes needed to detect reliably lines with different numbers of supporting points. The fraction of points used for voting need not be specified ad hoc or using a priori knowledge, as in the probabilistic HT; it is a function of the inherent complexity of the input data. The algorithm is ideally suited for real-time applications with a fixed amount of available processing time, since voting and line detection is interleaved. The most salient features are likely to be detected first. Experiments show that in many circumstances PPHT has advantages over the Standard HT.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89712361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 193
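The OpenCV function cv2.HoughLinesP implements a progressive probabilistic Hough transform along these lines; the sketch below shows typical usage. The input file name and all parameter values are placeholders, not taken from the paper.

```python
# Minimal sketch: line detection with OpenCV's HoughLinesP, which implements
# a progressive probabilistic Hough transform in the spirit of this paper.
# "input.png" and the parameter values are placeholders.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)          # edge map supplies the voting points

lines = cv2.HoughLinesP(
    edges,
    rho=1,                  # distance resolution of the accumulator (pixels)
    theta=np.pi / 180,      # angular resolution (radians)
    threshold=50,           # minimum number of votes to accept a line
    minLineLength=30,       # discard segments shorter than this
    maxLineGap=5,           # bridge small gaps between collinear segments
)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), 255, 1)   # draw detected segments
cv2.imwrite("lines.png", img)
```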
Edge detector evaluation using empirical ROC curves
K. Bowyer, C. Kranenburg, Sean Dougherty
{"title":"Edge detector evaluation using empirical ROC curves","authors":"K. Bowyer, C. Kranenburg, Sean Dougherty","doi":"10.1109/CVPR.1999.786963","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786963","url":null,"abstract":"A method is demonstrated to evaluate edge detector performance using receiver operating characteristic curves. It involves matching edges to manually specified ground truth to count true positive and false positive detections. Edge detector parameter settings are trained and tested on different images, and aggregate test ROC curves presented for two sets of 10 images. The performance of eight different edge detectors is compared. The Canny and Heitger detectors provide the best performance.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87273370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 412
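A minimal sketch of how such an empirical ROC curve can be tabulated: detections are matched to ground-truth edge pixels within a pixel tolerance while a threshold on the detector response is swept. The matching rule, tolerance, and toy data below are illustrative and simpler than the authors' protocol.

```python
# Sketch of ROC-point computation for an edge detector: a detection counts as
# a true positive if it lies within `tol` pixels of a ground-truth edge.
import numpy as np
from scipy.ndimage import distance_transform_edt

def roc_points(edge_strength, gt_edges, thresholds, tol=2.0):
    """One (FPR, TPR) point per threshold, with tolerance-based matching."""
    dist_to_gt = distance_transform_edt(~gt_edges)   # distance to nearest GT edge
    n_gt = gt_edges.sum()
    n_neg = gt_edges.size - n_gt
    points = []
    for t in thresholds:
        detected = edge_strength >= t
        if not detected.any():
            points.append((0.0, 0.0))
            continue
        dist_to_det = distance_transform_edt(~detected)
        # TPR: fraction of ground-truth edge pixels with a detection nearby.
        tpr = np.count_nonzero(gt_edges & (dist_to_det <= tol)) / n_gt
        # FPR: detections far from any ground-truth edge, over non-edge pixels.
        fpr = np.count_nonzero(detected & (dist_to_gt > tol)) / n_neg
        points.append((fpr, tpr))
    return points

# Toy usage with random data standing in for a real detector response.
rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[32, :] = True                                  # one horizontal ground-truth edge
strength = rng.random((64, 64)) + gt * 0.5        # detector responds more on the edge
print(roc_points(strength, gt, thresholds=np.linspace(0.2, 1.2, 6)))
```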
A new structure-from-motion ambiguity
J. Oliensis
{"title":"A new structure-from-motion ambiguity","authors":"J. Oliensis","doi":"10.1109/CVPR.1999.786937","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786937","url":null,"abstract":"This paper demonstrates the existence of a generic approximate ambiguity in Euclidean structure from motion (SFM) which applies to scenes with large depth variation. In projective SFM the ambiguity is absent, but the maximum-likelihood reconstruction is more likely to have occasional very large errors. The analysis gives a semi-quantitative characterization of the least-squares error surface over a domain complementary to that analyzed by Jepson/Heeger/Maybank.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87942399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
Unifying boundary and region-based information for geodesic active tracking
N. Paragios, R. Deriche
{"title":"Unifying boundary and region-based information for geodesic active tracking","authors":"N. Paragios, R. Deriche","doi":"10.1109/CVPR.1999.784648","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784648","url":null,"abstract":"This paper addresses the problem of tracking several non-rigid objects over a sequence of frames acquired from a static observer using boundary and region-based information under a coupled geodesic active contour framework. Given the current frame, a statistical analysis is performed on the observed difference frame which provides a measurement that distinguishes between the static and mobile regions in terms of conditional probabilities. An objective function is defined that integrates boundary-based and region-based module by seeking curves that attract the object boundaries and maximize the a posteriori segmentation probability on the interior curve regions with respect to intensity and motion properties. This function is minimized using a gradient descent method. The associated Euler-Lagrange PDE is implemented using a Level-Set approach, where a very fast front propagation algorithm evolves the initial curve towards the final tracking result. Very promising experimental results are provided using real video sequences.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86587250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 110
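A minimal sketch of the curve-evolution (boundary) component, using scikit-image's morphological geodesic active contour on a single frame. The paper's coupling with region and motion statistics computed from difference frames is not reproduced, and the test image, initialization, and parameters below are arbitrary.

```python
# Boundary component only: a geodesic active contour evolved on the
# edge-stopping function of one grayscale frame (scikit-image implementation).
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

frame = img_as_float(data.camera()[::2, ::2])   # stand-in for one video frame
gimage = inverse_gaussian_gradient(frame)       # edge-stopping function g(|grad I|)

# Initial level set: a rectangle well inside the image.
init = np.zeros(frame.shape, dtype=np.int8)
init[20:-20, 20:-20] = 1

# Evolve the curve; balloon < 0 shrinks it onto object boundaries.
mask = morphological_geodesic_active_contour(
    gimage, 100, init_level_set=init, smoothing=2, balloon=-1)
print("segmented pixels:", int(mask.sum()))
```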
A probabilistic framework for embedded face and facial expression recognition
A. Colmenarez, B. Frey, Thomas S. Huang
{"title":"A probabilistic framework for embedded face and facial expression recognition","authors":"A. Colmenarez, B. Frey, Thomas S. Huang","doi":"10.1109/CVPR.1999.786999","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786999","url":null,"abstract":"We present a Bayesian recognition framework in which a model of the whole face is enhanced by models of facial feature position and appearances. Face recognition and facial expression recognition are carried out using maximum likelihood decisions. The algorithm finds the model and facial expression that maximizes the likelihood of a test image. In this framework, facial appearance matching is improved by facial expression matching. Also, changes in facial features due to expressions are used together with facial deformation. Patterns to jointly perform expression recognition. In our current implementation, the face is divided into 9 facial features grouped in 4 regions which are detected and tracked automatically in video segments. The feature images are modeled using Gaussian distributions on a principal component sub-space. The training procedure is supervised; we use video segments of people in which the facial expressions have been segmented and labeled by hand. We report results on face and facial expression recognition using a video database of 18 people and 6 expressions.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85591623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 51
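A minimal sketch of the maximum-likelihood decision rule described above: each class is modeled as a Gaussian over PCA coefficients of a feature image, and a test sample is assigned to the class with the highest log-likelihood. All data, dimensions, and class counts below are synthetic placeholders.

```python
# Gaussian class-conditional models on a PCA subspace with an ML decision.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_classes, n_train, dim, n_pca = 4, 30, 256, 10

# Synthetic "feature images" per class (flattened), each with its own mean.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_train, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_train)

pca = PCA(n_components=n_pca).fit(X)         # principal-component subspace
Z = pca.transform(X)

# One full-covariance Gaussian per class on the PCA coefficients.
models = [multivariate_normal(mean=Z[y == c].mean(axis=0),
                              cov=np.cov(Z[y == c].T) + 1e-6 * np.eye(n_pca))
          for c in range(n_classes)]

def classify(x_image):
    z = pca.transform(x_image.reshape(1, -1))[0]
    scores = [m.logpdf(z) for m in models]   # log-likelihood under each class
    return int(np.argmax(scores))

test = rng.normal(loc=2, scale=1.0, size=dim)
print("predicted class:", classify(test))    # expected: 2
```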
Face recognition using shape and texture
Chengjun Liu, H. Wechsler
{"title":"Face recognition using shape and texture","authors":"Chengjun Liu, H. Wechsler","doi":"10.1109/CVPR.1999.787000","DOIUrl":"https://doi.org/10.1109/CVPR.1999.787000","url":null,"abstract":"We introduce in this paper a new face coding and recognition method which employs the Enhanced FLD (Fisher Linear Discrimimant) Model (EFM) on integrated shape (vector) and texture ('shape-free' image) information. Shape encodes the feature geometry of a face while texture provides a normalized shape-free image by warping the original face image to the mean shape, i.e., the average of aligned shapes. The dimensionalities of the shape and the texture spaces are first reduced using Principal Component Analysis (PCA). The corresponding but reduced shape find texture features are then integrated through a normalization procedure to form augmented features. The dimensionality reduction procedure, constrained by EFM for enhanced generalization, maintains a proper balance between the spectral energy needs of PCA for adequate representation, and the FLD discrimination requirements, that the eigenvalues of the within-class scatter matrix should not include small trailing values after the dimensionality reduction procedure as they appear in the denominator.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82261098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
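A minimal sketch of the shape-plus-texture pipeline, with scikit-learn's LDA standing in for the paper's Enhanced FLD Model: each modality is reduced with PCA, normalized, concatenated into an augmented feature, and then discriminated. The data and dimensionalities below are synthetic placeholders.

```python
# Shape + texture: per-modality PCA, normalization, concatenation, then LDA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_people, n_imgs = 5, 20
# Synthetic shape vectors (landmark coordinates) and texture vectors per person.
shape = rng.normal(size=(n_people * n_imgs, 68 * 2)) \
        + np.repeat(rng.normal(size=(n_people, 68 * 2)), n_imgs, axis=0)
texture = rng.normal(size=(n_people * n_imgs, 32 * 32)) \
          + np.repeat(rng.normal(size=(n_people, 32 * 32)), n_imgs, axis=0)
labels = np.repeat(np.arange(n_people), n_imgs)

shape_pca = PCA(n_components=15).fit(shape)
tex_pca = PCA(n_components=30).fit(texture)

# Normalize each reduced modality so neither dominates, then concatenate.
aug = np.hstack([StandardScaler().fit_transform(shape_pca.transform(shape)),
                 StandardScaler().fit_transform(tex_pca.transform(texture))])

lda = LinearDiscriminantAnalysis().fit(aug, labels)
print("training accuracy:", lda.score(aug, labels))
```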
Bias field estimation and adaptive segmentation of MRI data using a modified fuzzy C-means algorithm
M. N. Ahmed, S. Yamany, A. Farag, T. Moriarty
{"title":"Bias field estimation and adaptive segmentation of MRI data using a modified fuzzy C-means algorithm","authors":"M. N. Ahmed, S. Yamany, A. Farag, T. Moriarty","doi":"10.1109/CVPR.1999.786947","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786947","url":null,"abstract":"In this paper, we present a novel algorithm for adaptive fuzzy segmentation of MRI data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the RF coils or some problems associated with the acquisition sequences. The result is a slowly-varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution towards piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76507021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 73
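For reference, a minimal NumPy sketch of the standard FCM objective that the paper modifies, run on 1-D intensities; the bias-field term and the neighborhood regularizer of the proposed algorithm are not included here.

```python
# Standard fuzzy c-means on 1-D intensities (the baseline this paper modifies).
import numpy as np

def fcm(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = x.reshape(-1, 1).astype(float)                     # (N, 1) intensities
    u = rng.dirichlet(np.ones(n_clusters), size=len(x))    # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted centroids
        d = np.abs(x - centers.T) + 1e-12                  # distances, shape (N, c)
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)           # membership update
    return centers.ravel(), u

# Toy usage: intensities drawn from three tissue-like classes.
rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(mu, 5, 500) for mu in (30, 90, 150)])
centers, u = fcm(intensities)
print("estimated class centers:", np.sort(centers).round(1))
```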
Separating reflections and lighting using independent components analysis
H. Farid, E. Adelson
{"title":"Separating reflections and lighting using independent components analysis","authors":"H. Farid, E. Adelson","doi":"10.1109/CVPR.1999.786949","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786949","url":null,"abstract":"The image of an object can vary dramatically depending on lighting, specularities/reflections and shadows. It is often advantageous to separate these incidental variations from the intrinsic aspects of an image. This paper describes how the statistical tool of independent components analysis can be used to separate some of these incidental components. We describe the details of this method and show its efficacy with examples of separating reflections off glass, and separating the relative contributions of individual light sources.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78191242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 153
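A minimal sketch of the separation idea using scikit-learn's FastICA: two observed images are treated as linear mixtures of two underlying layers, and ICA recovers the layers up to scale and order. The "layers" and mixing weights below are synthetic stand-ins for the paper's photographs.

```python
# Two observations as linear mixtures of two image layers, unmixed with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
h, w = 64, 64
layer_a = np.tile(np.linspace(0, 1, w), (h, 1))             # horizontal ramp
layer_b = np.tile(np.linspace(0, 1, h)[:, None], (1, w))    # vertical ramp
sources = np.stack([layer_a.ravel(), layer_b.ravel()])       # (2, h*w)

# Two observations with different mixing weights, as with two polarizer angles.
A = np.array([[0.8, 0.3],
              [0.4, 0.7]])
observations = A @ sources + 0.01 * rng.normal(size=sources.shape)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observations.T).T              # (2, h*w) estimates
images = [r.reshape(h, w) for r in recovered]                # separated layers
print("recovered layer shapes:", [im.shape for im in images])
```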
Shape from video
T. Brodský, C. Fermüller, Y. Aloimonos
{"title":"Shape from video","authors":"T. Brodský, C. Fermüller, Y. Aloimonos","doi":"10.1109/CVPR.1999.784622","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784622","url":null,"abstract":"This paper presents a novel technique for recovering the shape of a static scene from a video sequence due to a rigidly moving camera. The solution procedure consists of two stages. In the first stage, the rigid motion of the camera at each instant in time is recovered. This provides the transformation between successive viewing positions. The solution is achieved through new constraints which relate 3D motion and shape directly to the image derivatives. These constraints allow to combine the processes of 3D motion estimation and segmentation by exploiting the geometry and statistics inherent in the data. In the second stage the scene surfaces are reconstructed through an optimization procedure which utilizes data from all the frames of the video sequence. A number of experimental results demonstrate the potential of the approach.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79061110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
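A sketch of the first stage only (rigid camera motion between successive frames), using a standard feature-based essential-matrix pipeline in OpenCV rather than the paper's direct, image-derivative-based constraints. The frame files and the intrinsic matrix K are placeholders.

```python
# Camera motion between two consecutive frames via matched features and the
# essential matrix (a standard substitute for the paper's direct constraints).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],     # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

f0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Match ORB keypoints between the two frames.
orb = cv2.ORB_create(2000)
k0, d0 = orb.detectAndCompute(f0, None)
k1, d1 = orb.detectAndCompute(f1, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
p0 = np.float32([k0[m.queryIdx].pt for m in matches])
p1 = np.float32([k1[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then the rotation/translation between views.
E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```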
Color edge detection with the compass operator
Mark A. Ruzon, Carlo Tomasi
{"title":"Color edge detection with the compass operator","authors":"Mark A. Ruzon, Carlo Tomasi","doi":"10.1109/CVPR.1999.784624","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784624","url":null,"abstract":"The compass operator detects step edges without assuming that the regions on either side have constant color. Using distributions of pixel colors rather than the mean, the operator finds the orientation of a diameter that maximizes the difference between two halves of a circular window. Junctions can also be detected by exploiting their lack of bilateral symmetry. This approach is superior to a multi-dimensional gradient method in situations that often result in false negatives, and it localizes edges better as scale increases.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74706909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 197
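A simplified single-pixel sketch of the compass idea: a circular window is split into two half-disks at each candidate orientation and the difference between the halves' color distributions is scored. Coarse RGB histograms and a chi-squared distance stand in for the paper's color signatures and Earth Mover's Distance; all parameters are illustrative.

```python
# Simplified compass response at one pixel: best half-disk color difference
# over a set of candidate orientations.
import numpy as np

def half_disk_masks(radius, angle):
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = x**2 + y**2 <= radius**2
    side = (np.cos(angle) * x + np.sin(angle) * y) >= 0   # half-plane split
    return inside & side, inside & ~side

def color_hist(pixels, bins=4):
    # Coarse joint RGB histogram, normalized to sum to one.
    h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    return h.ravel() / max(h.sum(), 1)

def compass_response(image, cy, cx, radius=8, n_orient=8):
    patch = image[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    best = 0.0
    for angle in np.linspace(0, np.pi, n_orient, endpoint=False):
        m1, m2 = half_disk_masks(radius, angle)
        h1, h2 = color_hist(patch[m1]), color_hist(patch[m2])
        chi2 = 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
        best = max(best, chi2)
    return best     # high response = likely edge through this pixel

# Toy usage: a vertical two-color boundary gives a strong response.
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[:, 16:] = (200, 30, 30)
print("response on the boundary:", round(compass_response(img, 16, 16), 3))
```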