Latest publications from the 2009 IEEE Conference on Computer Vision and Pattern Recognition

A graph-based approach to skin mole matching incorporating template-normalized coordinates
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206725
H. Mirzaalian, G. Hamarneh, Tim K. Lee
Abstract: Mole density is a strong predictor of malignant melanoma, and some dermatologists advocate periodic full-body scans for high-risk patients. In current practice, physicians compare images taken at different times to recognize changes, so there is an important clinical need to follow changes in the number of moles and in their appearance (size, color, texture, shape) across images from two different times. In this paper, we propose a method for finding corresponding moles in images of a patient's back taken at different scanning times. First, a template of the human back is defined to calculate the moles' normalized spatial coordinates. Next, matching moles across images is modeled as a graph matching problem, and algebraic relations between nodes and edges in the graphs are induced in the matching cost function, which contains terms reflecting proximity regularization, angular agreement between mole pairs, and agreement between the moles' normalized coordinates calculated in the unwarped back template. We propose and discuss alternative approaches for evaluating the goodness of matching. We evaluate our method on a large set of synthetic data (hundreds of pairs) as well as 56 pairs of real dermatological images. Our proposed method compares favorably with the state-of-the-art.
Citations: 27
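As a rough illustration of the coordinate-agreement term, mole matching can be posed as an assignment problem over template-normalized coordinates. The sketch below is a simplified stand-in, not the paper's full graph-matching cost: it drops the proximity-regularization and angular-agreement terms, the coordinates are invented, and `scipy` is assumed available.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_moles(coords_a, coords_b):
    """Match moles between two scans by minimizing total squared distance
    between template-normalized coordinates (toy stand-in for the
    paper's full matching cost)."""
    # Pairwise squared distances in the normalized template frame.
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Hypothetical normalized (x, y) coordinates of moles in two scans.
scan1 = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])
scan2 = np.array([[0.71, 0.79], [0.21, 0.31], [0.49, 0.12]])
print(match_moles(scan1, scan2))  # [(0, 1), (1, 0), (2, 2)]
```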
Multiple instance feature for robust part-based object detection
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206858
Zhe L. Lin, G. Hua, L. Davis
Abstract: Feature misalignment in object detection refers to the phenomenon that features which fire in some positive detection windows do not fire in other positive detection windows. It is most often caused by pose variation and local part deformation. Previous work either ignores this issue entirely or naively performs a local exhaustive search to better position each feature. We propose a learning framework to mitigate this problem, in which a boosting algorithm seeds the position of an object part and a multiple instance boosting algorithm then pursues an aggregated feature for that part, the multiple instance feature. Unlike most previous boosting-based object detectors, where each feature value produces a single classification result, the value of the proposed multiple instance feature is the Noisy-OR integration of a bag of classification results. Our approach is applied to the task of human detection and is tested on two popular benchmarks. It brings significant improvements in performance: fewer features used in the cascade and better detection accuracy.
Citations: 75
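The Noisy-OR integration the abstract mentions has a simple closed form: the bag responds if any of its instances responds. A minimal sketch:

```python
def noisy_or(instance_probs):
    """Noisy-OR combination of per-instance probabilities:
    P(bag) = 1 - prod_i (1 - p_i), i.e. the bag fires unless
    every instance independently fails to fire."""
    p = 1.0
    for pi in instance_probs:
        p *= (1.0 - pi)
    return 1.0 - p

print(noisy_or([0.0, 0.0, 0.9]))  # ≈ 0.9: one confident instance dominates
print(noisy_or([0.5, 0.5]))       # ≈ 0.75
```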
Shape of Gaussians as feature descriptors
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206506
Liyu Gong, Tianjiang Wang, Fang Liu
Abstract: This paper introduces a feature descriptor called shape of Gaussian (SOG), which is based on a general descriptor design framework called shape of signal probability density function (SOSPDF). SOSPDF takes the shape of a signal's probability density function (pdf) as its feature. Under this view, both the histogram and the region covariance often used in computer vision are SOSPDF features: a histogram describes the SOSPDF by discrete approximation, while region covariance describes it as an incompletely parameterized multivariate Gaussian distribution. Our proposed SOG descriptor is a fully parameterized Gaussian, so it retains all the advantages of region covariance while being more effective. Furthermore, we show that SOGs form a Lie group and, based on Lie group theory, propose a distance metric for SOG. We test SOG features on the tracking problem. Experiments show better tracking results compared with region covariance, and indicate that SOG features capture more useful information and are less sensitive to noise.
Citations: 43
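The core idea, parameterizing a region by a full Gaussian (mean plus covariance) rather than by covariance alone, can be sketched as follows. The feature channels and data here are invented for illustration, and the paper's Lie-group distance metric is not reproduced:

```python
import numpy as np

def region_gaussian(features):
    """Fit a fully parameterized Gaussian (mean and covariance) to the
    per-pixel feature vectors of a region. Region covariance keeps only
    the covariance; keeping the mean as well is the 'shape of Gaussian'
    idea."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

# Hypothetical per-pixel features: (x, y, intensity, |Ix|, |Iy|).
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 5))
mu, cov = region_gaussian(feats)
print(mu.shape, cov.shape)  # (5,) (5, 5)
```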
Angular embedding: From jarring intensity differences to perceived luminance
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206673
Stella X. Yu
Abstract: Our goal is to turn an intensity image into its perceived luminance without parsing it into depths, surfaces, or scene illuminations. We start with jarring intensity differences at two scales, mixed according to edges identified by a pixel-centric edge detector. We propose angular embedding as a more robust, efficient, and versatile alternative to LS, LLE, and NCUTS for obtaining a global brightness ordering from local differences. Our model explains a variety of brightness illusions with a single algorithm. The brightness of a pixel can be understood locally as its intensity deviating in the gradient direction, and globally as its rank relative to other pixels, particularly the lightest and darkest ones.
Citations: 34
Visual tracking with online Multiple Instance Learning
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206737
Boris Babenko, Ming-Hsuan Yang, Serge J. Belongie
Abstract: In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of techniques called "tracking by detection" has been shown to give promising results at real-time speeds. These methods train a discriminative classifier online to separate the object from the background; the classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause further drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.
Citations: 1986
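The bag-labeling idea can be illustrated with a toy one-dimensional tracker: move to the best-scoring nearby location, then treat all patches near the new position as one positive bag rather than a single, possibly misaligned, positive example. Everything here (the scoring function, the radii, the 1-D setting) is invented for illustration and is not the paper's implementation:

```python
import numpy as np

def track_step(score, tracker_pos, search_radius=25, pos_radius=4):
    """One tracking-by-detection step with MIL-style labeling:
    relocate the tracker to the highest-scoring nearby position, then
    collect ALL positions within pos_radius as a single positive bag
    and clearly distant positions as negatives."""
    xs = np.arange(tracker_pos - search_radius, tracker_pos + search_radius + 1)
    new_pos = int(xs[np.argmax([score(x) for x in xs])])
    positive_bag = [int(x) for x in xs if abs(x - new_pos) <= pos_radius]
    negatives = [int(x) for x in xs if abs(x - new_pos) > 2 * pos_radius]
    return new_pos, positive_bag, negatives

# Toy 1-D "appearance" score peaked at x = 60.
score = lambda x: -abs(x - 60)
pos, bag, neg = track_step(score, tracker_pos=50)
print(pos, len(bag))  # 60 9
```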
Resolution-Invariant Image Representation and its applications
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206679
Jinjun Wang, Shenghuo Zhu, Yihong Gong
Abstract: We present a resolution-invariant image representation (RIIR) framework. The RIIR framework includes methods for building a set of multi-resolution bases from training images, estimating the optimal sparse resolution-invariant representation of any image, and reconstructing missing patches at any resolution level. Since the RIIR framework has many potential resolution-enhancement applications, we discuss three novel image magnification applications in this paper. In the first, we apply the RIIR framework to Multi-Scale Image Magnification, where we also introduce a training strategy for building a compact RIIR set. In the second, the RIIR framework is extended to Continuous Image Scaling, where a new base at any resolution level can be generated on the fly from the existing RIIR set. In the third, we further apply the RIIR framework to Content-Based Automatic Zooming. Experimental results show that in all these applications, our RIIR-based method outperforms existing methods in various respects.
Citations: 8
Joint and implicit registration for face recognition
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206607
Peng Li, S. Prince
Abstract: Contemporary face recognition algorithms rely on precise localization of keypoints (corner of the eye, nose, etc.). Unfortunately, finding keypoints reliably and accurately remains a hard problem. In this paper we pose two questions. First, is it possible to exploit the gallery image in order to find keypoints in the probe image? For instance, consider finding the left eye in the probe image: rather than using a generic eye model, we use a model that is informed by the appearance of the eye in the gallery image. To this end, we develop a probabilistic model that combines recognition and keypoint localization. Second, is it necessary to localize keypoints at all? Alternatively, we can treat keypoint position as a hidden variable and marginalize over it in a Bayesian manner. We demonstrate that both of these innovations improve performance relative to conventional methods in both frontal and cross-pose face recognition.
Citations: 13
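Marginalizing over keypoint position instead of committing to a single estimate amounts to a prior-weighted sum of per-position match likelihoods. A toy sketch with invented numbers, not the paper's full probabilistic model:

```python
import numpy as np

def marginalized_match_score(likelihoods, prior):
    """Treat the keypoint position k as a hidden variable:
    p(match) = sum_k p(match | k) * p(k),
    so no single localization decision is ever made."""
    return float(np.dot(likelihoods, prior))

# Hypothetical: three candidate eye positions in the probe image.
prior = np.array([0.2, 0.5, 0.3])   # p(k): prior over positions
lik = np.array([0.1, 0.9, 0.4])     # p(match | k) at each position
print(marginalized_match_score(lik, prior))  # ≈ 0.59
```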
Global optimization for alignment of generalized shapes
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206548
Hongsheng Li, Tian Shen, Xiaolei Huang
Abstract: In this paper, we introduce a novel algorithm for global shape registration. We use gray-scale "images" to represent source shapes, and propose a novel two-component Gaussian Mixture (GM) distance-map representation for target shapes. Based on this flexible, asymmetric, image-based representation, we define a new energy function that proves to be a more robust shape dissimilarity metric and can be computed efficiently. Such efficiency is essential for global optimization methods; we adopt one of them, Particle Swarm Optimization (PSO), to effectively estimate the global optimum of the new energy function. Experiments and comparisons on generalized shape data, including continuous shapes, unstructured sparse point sets, and gradient maps, demonstrate the robustness and effectiveness of the algorithm.
Citations: 14
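Particle Swarm Optimization itself is a standard global optimizer. A minimal sketch, here minimizing a toy quadratic as a stand-in for the paper's shape-dissimilarity energy; all hyperparameters are generic defaults, not the paper's settings:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each particle is pulled toward its own best-seen
    position (c1 term) and the swarm's best-seen position (c2 term),
    with inertia w on its velocity."""
    rnd = random.Random(seed)
    xs = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rnd.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rnd.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(xs[i]), fi
    return gbest, gbest_f

# Toy stand-in for the shape dissimilarity energy: minimum at (1, -2).
energy = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
best, val = pso(energy)
print([round(b, 2) for b in best], round(val, 4))
```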
A projector-based movable hand-held display system
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206658
M. Leung, K. Lee, K. Wong, M. Chang
Abstract: In this paper, we propose a movable hand-held display system that uses a projector to project display content onto an ordinary piece of cardboard, which can move freely within the projection area. Such a system gives users greater freedom in controlling the display, such as the viewing angle and distance, and the cardboard can be made whatever size fits the application. A calibrated projector-camera pair serves as the tracking and projection system. We present a vision-based algorithm to detect an ordinary cardboard and track its subsequent motion; display content is then pre-warped and projected onto the cardboard at the correct position. Experimental results show that our system can project onto the cardboard with reasonable precision.
Citations: 23
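Pre-warping display content onto a tracked quadrilateral reduces to estimating a homography from the projector frame to the detected cardboard corners. A minimal direct-linear-transform sketch with hypothetical corner coordinates (the paper does not specify its estimation method, so this is only one plausible approach):

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points via
    the direct linear transform (needs 4+ correspondences); the solution
    is the null vector of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp(H, pt):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical corners: projector frame -> detected cardboard corners.
proj = [(0, 0), (640, 0), (640, 480), (0, 480)]
card = [(30, 20), (600, 50), (580, 460), (50, 430)]
H = homography(proj, card)
print(np.round(warp(H, (0, 0))))  # [30. 20.]
```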
Motion capture using joint skeleton tracking and surface estimation
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206755
Juergen Gall, Carsten Stoll, Edilson de Aguiar, C. Theobalt, B. Rosenhahn, H. Seidel
Abstract: This paper proposes a method for capturing the performance of a human or an animal from a multi-view video sequence. Given an articulated template model and silhouettes from a multi-view image sequence, our approach recovers not only the movement of the skeleton but also the possibly non-rigid temporal deformation of the 3D surface. While large-scale deformations and fast movements are captured by the skeleton pose and approximate surface skinning, true small-scale deformations and non-rigid garment motion are captured by fitting the surface to the silhouette. We further propose a novel optimization scheme for skeleton-based pose estimation that exploits the skeleton's tree structure to split the optimization problem into a local one and a lower-dimensional global one. We show on various sequences that our approach can accurately capture the 3D motion of animals and humans, even in the case of rapid movements and wide apparel like skirts.
Citations: 455