2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06): Latest Publications

Active Graph Cuts
Olivier Juan, Yuri Boykov
DOI: 10.1109/CVPR.2006.47
Abstract: This paper adds a number of novel concepts into global s/t cut methods, improving their efficiency and making them relevant for a wider class of applications in vision where algorithms should ideally run in real-time. Our new Active Cuts (AC) method can effectively use a good approximate solution (initial cut) that is often available in dynamic, hierarchical, and multi-label optimization problems in vision. In many problems AC works faster than the state-of-the-art max-flow methods [2] even if the initial cut is far from the optimal one. Moreover, empirical speed improves several-fold when the initial cut is spatially close to the optimum. Before converging to a global minimum, Active Cuts outputs a multitude of intermediate solutions (intermediate cuts) that, for example, can be used to accelerate iterative learning-based methods or to improve visual perception of graph cuts' real-time performance when large volumetric data is segmented. Finally, it can also be combined with many previous methods for accelerating graph cuts.
Cited by: 132
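The global s/t min-cut machinery that Active Cuts accelerates can be illustrated with a minimal pure-Python Edmonds-Karp max-flow computation; this is a sketch of the classical baseline only, not of the Active Cuts algorithm itself, and the toy graph and capacities are made up for illustration:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow; capacity is {u: {v: cap}}.
    By the max-flow/min-cut theorem the result equals the s/t min-cut value."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():          # make sure reverse edges exist
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                    # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                   # no augmenting path: done
            break
        path, v = [], t                       # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:                     # push flow along the path
            residual[u][v] -= aug
            residual[v][u] += aug
        flow += aug
    return flow

# toy graph: two source edges, two sink edges, one cross edge
cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}, 't': {}}
print(max_flow(cap, 's', 't'))  # 5
```

The paper's contribution is, roughly, warm-starting this kind of computation from a good initial cut instead of from the zero flow.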
An Efficiency Criterion for 2D Shape Model Selection
Kathryn Leonard
DOI: 10.1109/CVPR.2006.53
Abstract: We propose efficiency of representation as a criterion for evaluating shape models, then apply this criterion to compare the boundary curve representation with the medial axis. We estimate the ε-entropy of two compact classes of curves. We then construct two adaptive encodings for non-compact classes of shapes, one using the boundary curve and the other using the medial axis, and determine precise conditions for when the medial axis is more efficient. Along the way we construct explicit near-optimal boundary-based approximations for compact classes of shapes and an explicit compression scheme for non-compact classes of shapes based on the medial axis. We end with an application of the criterion to shape data.
Cited by: 2
Contour-Based Structure from Reflection
Po-Hao Huang, S. Lai
DOI: 10.1109/CVPR.2006.88
Abstract: In this paper, we propose a novel contour-based algorithm for 3D object reconstruction from a single uncalibrated image acquired under the setting of two plane mirrors. With the epipolar geometry recovered from the image and the properties of mirror reflection, metric reconstruction of an arbitrary rigid object is accomplished without knowing the camera parameters and the mirror poses. For this mirror setup, the epipoles can be estimated from the correspondences between the object and its reflection, which can be established automatically from the tangent lines of their contours. By using the property of mirror reflection as well as the relationship between the mirror plane normal with the epipole and camera intrinsic, we can estimate the camera intrinsic, plane normals and the orientation of virtual cameras. The positions of the virtual cameras are determined by minimizing the distance between the object contours and the projected visual cone for a reference view. After the camera parameters are determined, the 3D object model is constructed via the image-based visual hulls (IBVH) technique. The 3D model can be refined by integrating the multiple models reconstructed from different views. The main advantage of the proposed contour-based Structure from Reflection (SfR) algorithm is that it can achieve metric reconstruction from an uncalibrated image without feature point correspondences. Experimental results on synthetic and real images are presented to show its performance.
Cited by: 22
Ultrasound-Specific Segmentation via Decorrelation and Statistical Region-Based Active Contours
G. Slabaugh, Gözde B. Ünal, T. Fang, M. Wels
DOI: 10.1109/CVPR.2006.318
Abstract: Segmentation of ultrasound images is often a very challenging task due to speckle noise that contaminates the image. It is well known that speckle noise exhibits an asymmetric distribution as well as significant spatial correlation. Since these attributes can be difficult to model, many previous ultrasound segmentation methods oversimplify the problem by assuming that the noise is white and/or Gaussian, resulting in generic approaches that are actually more suitable to MR and X-ray segmentation than ultrasound. Unlike these methods, in this paper we present an ultrasound-specific segmentation approach that first decorrelates the image, and then performs segmentation on the whitened result using statistical region-based active contours. In particular, we design a gradient ascent flow that evolves the active contours to maximize a log likelihood functional based on the Fisher-Tippett distribution. We present experimental results that demonstrate the effectiveness of our method.
Cited by: 54
Efficient Optimal Kernel Placement for Reliable Visual Tracking
Zhimin Fan, Ming Yang, Ying Wu, G. Hua, Ting Yu
DOI: 10.1109/CVPR.2006.109
Abstract: This paper describes a novel approach to optimal kernel placement in kernel-based tracking. If kernels are placed at arbitrary places, kernel-based methods are likely to be trapped in ill-conditioned locations, which prevents the reliable recovery of the motion parameters and jeopardizes the tracking performance. The theoretical analysis presented in this paper indicates that the optimal kernel placement can be evaluated based on a closed-form criterion, and achieved efficiently by a novel gradient-based algorithm. Based on that, new methods for temporal-stable multiple kernel placement and scale-invariant kernel placement are proposed. These new theoretical results and new algorithms greatly advance the study of kernel-based tracking in both theory and practice. Extensive real-time experimental results demonstrate the improved tracking reliability.
Cited by: 38
BoostMotion: Boosting a Discriminative Similarity Function for Motion Estimation
S. Zhou, B. Georgescu, D. Comaniciu, Jie Shao
DOI: 10.1109/CVPR.2006.73
Abstract: Motion estimation for applications where appearance undergoes complex changes is challenging due to lack of an appropriate similarity function. In this paper, we propose to learn a discriminative similarity function based on an annotated database that exemplifies the appearance variations. We invoke the LogitBoost algorithm to selectively combine weak learners into one strong similarity function. The weak learners based on local rectangle features are constructed as nonparametric 2D piecewise constant functions, using the feature responses from both images, to strengthen the modeling power and accommodate fast evaluation. Because the negatives possess a location parameter measuring their closeness to the positives, we present a location-sensitive cascade training procedure, which bootstraps negatives for later stages of the cascade from the regions closer to the positives. This allows viewing a large number of negatives and steering the training process to yield lower training and test errors. In experiments of estimating the motion for the endocardial wall of the left ventricle in echocardiography, we compare the learned similarity function with conventional ones and obtain improved performances. We also contrast the proposed method with a learning-based detection algorithm to demonstrate the importance of temporal information in motion estimation. Finally, we insert the learned similarity function into a simple contour tracking algorithm and find that it reduces drifting.
Cited by: 29
An Intensity-augmented Ordinal Measure for Visual Correspondence
Anurag Mittal, Visvanathan Ramesh
DOI: 10.1109/CVPR.2006.56
Abstract: Determining the correspondence of image patches is one of the most important problems in Computer Vision. When the intensity space is variant due to several factors such as the camera gain or gamma correction, one needs methods that are robust to such transformations. While the most common assumption is that of a linear transformation, a more general assumption is that the change is monotonic. Therefore, methods have been developed previously that work on the rankings between different pixels as opposed to the intensities themselves. In this paper, we develop a new matching method that improves upon existing methods by using a combination of intensity and rank information. The method considers the difference in the intensities of the changed pixels in order to achieve greater robustness to Gaussian noise. Furthermore, only uncorrelated order changes are considered, which makes the method robust to changes in a single or a few pixels. These properties make the algorithm quite robust to different types of noise and other artifacts such as camera shake or image compression. Experiments illustrate the potential of the approach in several different applications such as change detection and feature matching.
Cited by: 48
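The purely rank-based baseline that this paper improves on can be sketched in a few lines: compare two patches by the ordering of their pixel intensities, which is unchanged by any monotonic intensity transform. This is a plain ordinal comparison only, without the paper's intensity augmentation, and the patch values below are hypothetical:

```python
def ranks(values):
    # rank of each pixel by intensity (ties broken by position)
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def ordinal_distance(patch_a, patch_b):
    """Number of pixel positions whose intensity ranks differ between the
    patches; zero for any monotonic (gain/gamma-like) intensity change."""
    ra, rb = ranks(patch_a), ranks(patch_b)
    return sum(1 for x, y in zip(ra, rb) if x != y)

a = [10, 20, 30, 40]
b = [12, 25, 33, 80]           # monotonic change: same pixel ordering
c = [10, 20, 40, 30]           # last two pixels swapped
print(ordinal_distance(a, b))  # 0
print(ordinal_distance(a, c))  # 2
```

The paper's measure additionally weighs how large the intensity differences behind each order change are, which is what buys robustness to Gaussian noise and to isolated outlier pixels.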
Animals on the Web
Tamara L. Berg, D. Forsyth
DOI: 10.1109/CVPR.2006.57
Abstract: We demonstrate a method for identifying images containing categories of animals. The images we classify depict animals in a wide range of aspects, configurations and appearances. In addition, the images typically portray multiple species that differ in appearance (e.g. uakaris, vervet monkeys, spider monkeys, rhesus monkeys, etc.). Our method is accurate despite this variation and relies on four simple cues: text, color, shape and texture. Visual cues are evaluated by a voting method that compares local image phenomena with a number of visual exemplars for the category. The visual exemplars are obtained using a clustering method applied to text on web pages. The only supervision required involves identifying which clusters of exemplars refer to which sense of a term (for example, "monkey" can refer to an animal or a band member). Because our method is applied to web pages with free text, the word cue is extremely noisy. We show unequivocal evidence that visual information improves performance for our task. Our method allows us to produce large, accurate and challenging visual datasets mostly automatically.
Cited by: 204
Coupled Bayesian Framework for Dual Energy Image Registration
Hao Wu, Yunqiang Chen, T. Fang
DOI: 10.1109/CVPR.2006.93
Abstract: Image registration for X-ray dual energy imaging is challenging due to the overlaid transparent layers (i.e., the bone and soft tissue) and the different appearances between the dual images acquired with X-rays at different energy spectra. Moreover, subpixel accuracy is necessary for good reconstruction of the bone and soft-tissue layers. This paper addresses these problems with a novel coupled Bayesian framework, in which the registration and reconstruction can effectively reinforce each other. With the reconstruction results, we can design accurate matching criteria for aligning the dual images, instead of treating them as multi-modality registration. Furthermore, prior knowledge of the bone and soft tissue can be exploited to detect poor reconstruction due to inaccurate registration, and hence correct registration errors in the coupled framework. A multiscale freeform registration algorithm is implemented to achieve subpixel registration accuracy. Promising results are obtained in the experiments.
Cited by: 3
Landmark-Based Geodesic Computation for Heuristically Driven Path Planning
G. Peyré, L. Cohen
DOI: 10.1109/CVPR.2006.163
Abstract: This paper presents a new method to quickly extract geodesic paths on images and 3D meshes. We use a heuristic to drive the front propagation procedure of the classical Fast Marching. This results in a modification of the Fast Marching algorithm that is similar to the A* algorithm used in artificial intelligence. In order to find geodesic paths very quickly between any given pair of points, we advocate the initial computation of distance maps to a set of landmark points and make use of these distance maps through a relevant heuristic. We show that our method brings a large speed-up for large-scale applications that require the extraction of geodesics on images and 3D meshes. We introduce two distortion metrics in order to find an optimal seeding of landmark points for the targeted applications. We also propose a compression scheme to reduce the memory requirement without impacting the quality of the extracted paths.
Cited by: 13
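The landmark idea is the same one used in A*-style "ALT" search on graphs: precomputed distance maps to a few landmarks yield an admissible lower bound on the remaining distance via the triangle inequality. Below is a sketch on a toy unit-weight grid graph; the paper applies the analogous heuristic inside Fast Marching on images and meshes, and the grid size and landmark choice here are illustrative:

```python
import heapq

def dijkstra(adj, src):
    """Full distance map from src; adj maps node -> [(neighbor, weight)]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def alt_search(adj, src, dst, landmark_dists):
    """A* search using landmark distance maps as the heuristic:
    |d(L, dst) - d(L, v)| <= d(v, dst) by the triangle inequality."""
    def h(v):
        return max(abs(d[dst] - d[v]) for d in landmark_dists)
    dist = {src: 0.0}
    pq = [(h(src), src)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == dst:
            return dist[u]
        for v, w in adj[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float('inf')

# 4x4 unit-weight grid graph
n = 4
adj = {(i, j): [] for i in range(n) for j in range(n)}
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1)):
            a, b = (i, j), (i + di, j + dj)
            if b in adj:
                adj[a].append((b, 1.0))
                adj[b].append((a, 1.0))

landmarks = [(0, 0), (n - 1, n - 1)]
landmark_dists = [dijkstra(adj, L) for L in landmarks]
print(alt_search(adj, (0, 0), (n - 1, n - 1), landmark_dists))  # 6.0
```

The heuristic steers the front toward the target, so far fewer nodes are expanded than in plain Dijkstra/Fast Marching; the paper's distortion metrics address where to seed the landmarks so this bound stays tight.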