Latest Publications: 2007 IEEE Conference on Computer Vision and Pattern Recognition

Secure Biometric Templates from Fingerprint-Face Features
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383385
Y. Sutcu, Qiming Li, N. Memon
{"title":"Secure Biometric Templates from Fingerprint-Face Features","authors":"Y. Sutcu, Qiming Li, N. Memon","doi":"10.1109/CVPR.2007.383385","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383385","url":null,"abstract":"Since biometric data cannot be easily replaced or revoked, it is important that biometric templates used in biometric applications should be constructed and stored in a secure way, such that attackers would not be able to forge biometric data easily even when the templates are compromised. This is a challenging goal since biometric data are \"noisy\" by nature, and the matching algorithms are often complex, which make it difficult to apply traditional cryptographic techniques, especially when multiple modalities are considered. In this paper, we consider a \"fusion \" of a minutiae-based fingerprint authentication scheme and an SVD-based face authentication scheme, and show that by employing a recently proposed cryptographic primitive called \"secure sketch \", and a known geometric transformation on minutiae, we can make it easier to combine different modalities, and at the same time make it computationally infeasible to forge an \"original\" combination of fingerprint and face image that passes the authentication. We evaluate the effectiveness of our scheme using real fingerprints and face images from publicly available sources.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126163241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 129
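The "secure sketch" primitive this abstract leans on can be illustrated with a quantization-based toy version for noisy continuous features. This is a hedged sketch of the general idea, not the paper's construction: the feature dimension, step size, and noise bounds below are invented for illustration.

```python
import numpy as np

def make_sketch(feature, step=8.0):
    """Quantize each feature component and keep only the offset to the
    nearest quantization point; this offset is the public 'sketch'."""
    codeword = step * np.round(feature / step)
    return feature - codeword          # the codeword itself is never stored

def recover(noisy_feature, sketch, step=8.0):
    """Shift by the stored offset, re-quantize, and shift back.
    Succeeds whenever the per-component noise stays below step/2."""
    codeword = step * np.round((noisy_feature - sketch) / step)
    return codeword + sketch

rng = np.random.default_rng(0)
enrolled = rng.uniform(0, 255, size=16)          # stand-in for fused minutiae/SVD features
sketch = make_sketch(enrolled)
query = enrolled + rng.uniform(-3, 3, size=16)   # noisy re-measurement at authentication
assert np.allclose(recover(query, sketch), recover(enrolled, sketch))
```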
Learning Color Names from Real-World Images
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383218
Joost van de Weijer, C. Schmid, J. Verbeek
{"title":"Learning Color Names from Real-World Images","authors":"Joost van de Weijer, C. Schmid, J. Verbeek","doi":"10.1109/CVPR.2007.383218","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383218","url":null,"abstract":"Within a computer vision context color naming is the action of assigning linguistic color labels to image pixels. In general, research on color naming applies the following paradigm: a collection of color chips is labelled with color names within a well-defined experimental setup by multiple test subjects. The collected data set is subsequently used to label RGB values in real-world images with a color name. Apart from the fact that this collection process is time consuming, it is unclear to what extent color naming within a controlled setup is representative for color naming in real-world images. Therefore we propose to learn color names from real-world images. Furthermore, we avoid test subjects by using Google Image to collect a data set. Due to limitations of Google Image this data set contains a substantial quantity of wrongly labelled data. The color names are learned using a PLSA model adapted to this task. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips on retrieval and classification.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125295554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 199
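The learning machinery here is PLSA. A generic (unadapted) PLSA EM loop over image-level color-bin histograms might look like the sketch below; the sizes, the synthetic counts, and the choice of 11 topics for the 11 basic color names are illustrative assumptions, not the paper's adapted model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_imgs, n_bins, n_topics = 20, 64, 11      # 11 latent topics for 11 basic color names
counts = rng.poisson(2.0, size=(n_imgs, n_bins)).astype(float)  # pixel-bin histograms

p_w_z = rng.dirichlet(np.ones(n_bins), size=n_topics)   # p(bin | color name)
p_z_d = rng.dirichlet(np.ones(n_topics), size=n_imgs)   # p(color name | image)

for _ in range(50):
    # E-step: posterior over topics for every (image, bin) pair
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # shape (d, z, w)
    post = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate both distributions from expected counts
    exp_counts = counts[:, None, :] * post
    p_w_z = exp_counts.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = exp_counts.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)

# which color name gives each RGB bin the highest likelihood
print(p_w_z.argmax(axis=0)[:10])
```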
High-Speed Measurement of BRDF using an Ellipsoidal Mirror and a Projector
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383467
Y. Mukaigawa, K. Sumino, Y. Yagi
{"title":"High-Speed Measurement of BRDF using an Ellipsoidal Mirror and a Projector","authors":"Y. Mukaigawa, K. Sumino, Y. Yagi","doi":"10.1109/CVPR.2007.383467","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383467","url":null,"abstract":"Measuring BRDF (bi-directional reflectance distribution function) requires huge amounts of time because a target object must be illuminated from all incident angles and the reflected lights must be measured from all reflected angles. In this paper, we present a high-speed method to measure BRDFs using an ellipsoidal mirror and a projector. Our method makes it possible to change incident angles without a mechanical drive. Moreover, the omni-directional reflected lights from the object can be measured by one static camera at once. Our prototype requires only fifty minutes to measure anisotropic BRDFs, even if the lighting interval is one degree.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126621132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
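As a back-of-envelope check on the fifty-minute figure: a one-degree grid over the incident hemisphere gives 90 x 360 projector patterns, and since each pattern needs only one camera frame, the total time is set by the capture rate. The rate below is an assumed number, not one reported in the abstract.

```python
incident_dirs = 90 * 360                 # 1-degree grid over the incident hemisphere
frames_per_second = 10                   # assumed capture rate (illustrative only)
minutes = incident_dirs / frames_per_second / 60
print(f"{incident_dirs} patterns -> ~{minutes:.0f} min at {frames_per_second} fps")
```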
A Nine-point Algorithm for Estimating Para-Catadioptric Fundamental Matrices
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383065
Christopher Geyer, Henrik Stewénius
{"title":"A Nine-point Algorithm for Estimating Para-Catadioptric Fundamental Matrices","authors":"Christopher Geyer, Henrik Stewénius","doi":"10.1109/CVPR.2007.383065","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383065","url":null,"abstract":"We present a minimal-point algorithm for finding fundamental matrices for catadioptric cameras of the parabolic type. Central catadioptric cameras-an optical combination of a mirror and a lens that yields an imaging device equivalent within hemispheres to perspective cameras-have found wide application in robotics, tele-immersion and providing enhanced situational awareness for remote operation. We use an uncalibrated structure-from-motion framework developed for these cameras to consider the problem of estimating the fundamental matrix for such cameras. We present a solution that can compute the para-catadioptirc fundamental matrix with nine point correspondences, the smallest number possible. We compare this algorithm to alternatives and show some results of using the algorithm in conjunction with random sample consensus (RANSAC).","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126627898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
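The abstract pairs the minimal solver with RANSAC. The skeleton below shows that outer loop, with the classical linear 8-point solver (Hartley normalization omitted for brevity) standing in for the paper's 9-point para-catadioptric solver; thresholds, iteration counts, and the synthetic data are arbitrary.

```python
import numpy as np

def eight_point(x1, x2):
    """Linear solver for a perspective fundamental matrix from the
    constraint x2^T F x1 = 0; the paper swaps in a 9-point solver here."""
    A = np.column_stack([
        x2[:, 0]*x1[:, 0], x2[:, 0]*x1[:, 1], x2[:, 0],
        x2[:, 1]*x1[:, 0], x2[:, 1]*x1[:, 1], x2[:, 1],
        x1[:, 0],          x1[:, 1],          np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def sampson_error(F, x1, x2):
    """First-order geometric error of each correspondence under F."""
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1, Ftx2 = h1 @ F.T, h2 @ F
    num = np.einsum('ij,ij->i', h2, Fx1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def ransac_F(x1, x2, iters=500, thresh=1e-3, sample=8,
             rng=np.random.default_rng(2)):
    """Sample minimal sets, fit F, keep the model with the most inliers."""
    best_F, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(x1), sample, replace=False)
        F = eight_point(x1[idx], x2[idx])
        inliers = (sampson_error(F, x1, x2) < thresh).sum()
        if inliers > best_inliers:
            best_F, best_inliers = F, inliers
    return best_F, best_inliers

# synthetic correspondences consistent with a random rank-2 F, plus outliers
rng = np.random.default_rng(2)
F_true = (np.linalg.svd(rng.normal(size=(3, 3)))[0]
          @ np.diag([1.0, 0.5, 0.0])
          @ np.linalg.svd(rng.normal(size=(3, 3)))[2])
x1 = rng.uniform(-1, 1, (100, 2))
x2 = []
for p in x1:
    a, b, c = F_true @ np.append(p, 1.0)     # epipolar line of p in the second view
    t = rng.uniform(-1, 1)
    x2.append([t, -(a * t + c) / b])         # a point exactly on that line
x2 = np.array(x2)
x2[:20] += rng.normal(0, 0.5, (20, 2))       # contaminate 20 matches
F, n_in = ransac_F(x1, x2)
print(n_in, "inliers of", len(x1))
```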
Partially Occluded Object-Specific Segmentation in View-Based Recognition
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383268
Minsu Cho, Kyoung Mu Lee
{"title":"Partially Occluded Object-Specific Segmentation in View-Based Recognition","authors":"Minsu Cho, Kyoung Mu Lee","doi":"10.1109/CVPR.2007.383268","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383268","url":null,"abstract":"We present a novel object-specific segmentation method which can be used in view-based object recognition systems. Previous object segmentation approaches generate inexact results especially in partially occluded and cluttered environment because their top-down strategies fail to explain the details of various specific objects. On the contrary, our segmentation method efficiently exploits the information of the matched model views in view-based recognition because the aligned model view to the input image can serve as the best top-down cue for object segmentation. In this paper, we cast the problem of partially occluded object segmentation as that of labelling displacement and foreground status simultaneously for each pixel between the aligned model view and an input image. The problem is formulated by a maximum a posteriori Markov random field (MAP-MRF) model which minimizes a particular energy function. Our method overcomes complex occlusion and clutter and provides accurate segmentation boundaries by combining a bottom-up segmentation cue together. We demonstrate the efficiency and robustness of it by experimental results on various objects under occluded and cluttered environments.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126759863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
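A toy version of the MAP-MRF idea: per-pixel unary costs plus a Potts smoothness prior, minimized here with iterated conditional modes rather than the paper's optimizer, and with only a binary foreground label instead of the joint displacement/foreground labelling. Grid size, beta, and the random unaries are placeholders.

```python
import numpy as np

def icm_segment(unary, beta=0.5, sweeps=5):
    """Minimize E(l) = sum_p U_p(l_p) + beta * sum_{p~q} [l_p != l_q]
    by iterated conditional modes: greedily relabel one pixel at a time."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)            # start from the unary-only labelling
    for _ in range(sweeps):
        for y in range(H):
            for x in range(W):
                cost = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts penalty for disagreeing with a neighbour
                        cost += beta * (np.arange(L) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels

rng = np.random.default_rng(3)
unary = rng.random((32, 32, 2))              # e.g. background/foreground costs
print(icm_segment(unary).mean())             # fraction labelled foreground
```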
A Robust Warping Method for Fingerprint Matching
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383391
Dongjin Kwon, I. Yun, Sang Uk Lee
{"title":"A Robust Warping Method for Fingerprint Matching","authors":"Dongjin Kwon, I. Yun, Sang Uk Lee","doi":"10.1109/CVPR.2007.383391","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383391","url":null,"abstract":"This paper presents a robust warping method for minutiae based fingerprint matching approaches. In this method, a deformable fingerprint surface is described using a triangular mesh model. For given two extracted minutiae sets and their correspondences, the proposed method constructs an energy function using a robust correspondence energy estimator and smoothness measuring of the mesh model. We obtain a convergent deformation pattern using an efficient gradient based energy optimization method. This energy optimization approach deals successfully with deformation errors caused by outliers, which are more difficult problems for the thin-plate spline (TPS) model. The proposed method is fast and the run-time performance is comparable with the method based on the TPS model. In the experiments, we provide a visual inspection of warping results on given correspondences and quantitative results using database.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115031432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
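Why a robust estimator helps against outlier correspondences can be seen in a few lines: a Huber data term plus a membrane smoothness term on mesh displacements, minimized by plain gradient descent. The energy, the three-vertex mesh, and all constants are invented for illustration and are not the paper's model.

```python
import numpy as np

def huber_grad(r, delta=2.0):
    """Gradient of the Huber loss: quadratic near zero, linear in the
    tails, so outlier correspondences stop dominating the energy."""
    n = np.linalg.norm(r, axis=-1, keepdims=True)
    return np.where(n <= delta, r, delta * r / np.maximum(n, 1e-9))

def warp_step(u, rest, targets, edges, lam=1.0, lr=0.1):
    """One gradient step on E = sum_i huber(rest_i + u_i - target_i)
    + (lam/2) * sum_(a,b) |u_a - u_b|^2 over mesh edges."""
    g = huber_grad(rest + u - targets)
    for a, b in edges:                       # membrane smoothness on displacements
        d = u[a] - u[b]
        g[a] += lam * d
        g[b] -= lam * d
    return u - lr * g

rest = np.array([[0., 0.], [1., 0.], [0., 1.]])
targets = rest + [[0.1, 0.0], [0.1, 0.1], [5.0, 5.0]]   # last match is an outlier
edges = [(0, 1), (0, 2), (1, 2)]
u = np.zeros_like(rest)
for _ in range(200):
    u = warp_step(u, rest, targets, edges)
print(u.round(2))   # the outlier vertex moves only moderately, not to (5, 5)
```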
Scaled Motion Dynamics for Markerless Motion Capture
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383128
B. Rosenhahn, T. Brox, H. Seidel
{"title":"Scaled Motion Dynamics for Markerless Motion Capture","authors":"B. Rosenhahn, T. Brox, H. Seidel","doi":"10.1109/CVPR.2007.383128","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383128","url":null,"abstract":"This work proposes a way to use a-priori knowledge on motion dynamics for markerless human motion capture (MoCap). Specifically, we match tracked motion patterns to training patterns in order to predict states in successive frames. Thereby, modeling the motion by means of twists allows for a proper scaling of the prior. Consequently, there is no need for training data of different frame rates or velocities. Moreover, the method allows to combine very different motion patterns. Experiments in indoor and outdoor scenarios demonstrate the continuous tracking of familiar motion patterns in case of artificial frame drops or in situations insufficiently constrained by the image data.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115053142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 63
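The scaling property the abstract relies on, that a motion expressed as a twist can be rescaled to other frame rates or velocities, is just the exponential and logarithm maps of the rigid-motion group. A minimal illustration with SciPy (the 10-degree test motion is made up):

```python
import numpy as np
from scipy.linalg import expm, logm

def scale_motion(T, s):
    """Scale a rigid motion T = exp(xi_hat) by factor s in the twist (Lie
    algebra) representation: T^s = exp(s * log(T)). This is how a motion
    prior trained at one frame rate can be replayed at another."""
    return expm(s * logm(T)).real

# a small rigid motion: rotate 10 degrees about z and translate along x
th = np.deg2rad(10.0)
T = np.eye(4)
T[:3, :3] = [[np.cos(th), -np.sin(th), 0],
             [np.sin(th),  np.cos(th), 0],
             [0,           0,          1]]
T[0, 3] = 0.2
half = scale_motion(T, 0.5)
assert np.allclose(half @ half, T)   # the half-step applied twice is the full step
```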
Bilattice-based Logical Reasoning for Human Detection
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383133
V. Shet, J. Neumann, Visvanathan Ramesh, L. Davis
{"title":"Bilattice-based Logical Reasoning for Human Detection","authors":"V. Shet, J. Neumann, Visvanathan Ramesh, L. Davis","doi":"10.1109/CVPR.2007.383133","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383133","url":null,"abstract":"The capacity to robustly detect humans in video is a critical component of automated visual surveillance systems. This paper describes a bilattice based logical reasoning approach that exploits contextual information and knowledge about interactions between humans, and augments it with the output of different low level detectors for human detection. Detections from low level parts-based detectors are treated as logical facts and used to reason explicitly about the presence or absence of humans in the scene. Positive and negative information from different sources, as well as uncertainties from detections and logical rules, are integrated within the bilattice framework. This approach also generates proofs or justifications for each hypothesis it proposes. These justifications (or lack thereof) are further employed by the system to explain and validate, or reject potential hypotheses. This allows the system to explicitly reason about complex interactions between humans and handle occlusions. These proofs are also available to the end user as an explanation of why the system thinks a particular hypothesis is actually a human. We employ a boosted cascade of gradient histograms based detector to detect individual body parts. We have applied this framework to analyze the presence of humans in static images from different datasets.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116393667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 124
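A bilattice truth value carries separate degrees of evidence for and against a proposition, which is what lets positive and negative detector outputs be integrated without collapsing them. The toy encoding below uses the standard square bilattice over [0,1]^2; the abstract does not spell out the exact bilattice or combination rules used.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class B:
    """A bilattice element <t, f>: evidence for and against a proposition,
    so contradiction (<1,1>) and ignorance (<0,0>) stay distinguishable."""
    t: float
    f: float
    def __and__(self, o): return B(min(self.t, o.t), max(self.f, o.f))  # truth meet
    def __or__(self, o):  return B(max(self.t, o.t), min(self.f, o.f))  # truth join
    def __mul__(self, o): return B(min(self.t, o.t), min(self.f, o.f))  # consensus
    def __add__(self, o): return B(max(self.t, o.t), max(self.f, o.f))  # accept all evidence

head = B(0.8, 0.1)     # a head detector fires with high confidence
legs = B(0.3, 0.6)     # a leg detector weakly contradicts the hypothesis
print(head + legs)     # accumulated evidence from both sources: B(t=0.8, f=0.6)
```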
Learning the Compositional Nature of Visual Objects
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383154
B. Ommer, J. Buhmann
{"title":"Learning the Compositional Nature of Visual Objects","authors":"B. Ommer, J. Buhmann","doi":"10.1109/CVPR.2007.383154","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383154","url":null,"abstract":"The compositional nature of visual objects significantly limits their representation complexity and renders learning of structured object models tractable. Adopting this modeling strategy we both (i) automatically decompose objects into a hierarchy of relevant compositions and we (ii) learn such a compositional representation for each category without supervision. The compositional structure supports feature sharing already on the lowest level of small image patches. Compositions are represented as probability distributions over their constituent parts and the relations between them. The global shape of objects is captured by a graphical model which combines all compositions. Inference based on the underlying statistical model is then employed to obtain a category level object recognition system. Experiments on large standard benchmark datasets underline the competitive recognition performance of this approach and they provide insights into the learned compositional structure of objects.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"27 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116408361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 60
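"Compositions represented as probability distributions over their constituent parts" can be caricatured as a soft assignment of part descriptors to a codebook of prototypes. Everything below (descriptor size, codebook, Gaussian soft assignment) is an illustrative guess, not the paper's estimator.

```python
import numpy as np

def composition(parts, codebook):
    """Represent a composition as a probability distribution over codebook
    prototypes by softly assigning each constituent part to nearby centers."""
    d = np.linalg.norm(parts[:, None, :] - codebook[None, :, :], axis=2)
    w = np.exp(-d**2)                        # Gaussian soft-assignment weights
    w /= w.sum(axis=1, keepdims=True)        # each part distributes unit mass
    hist = w.sum(axis=0)
    return hist / hist.sum()                 # normalized distribution over parts

rng = np.random.default_rng(4)
codebook = rng.random((32, 16))              # learned part prototypes
patches = rng.random((8, 16))                # descriptors grouped into one composition
print(composition(patches, codebook).round(3).max())
```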
The Hierarchical Isometric Self-Organizing Map for Manifold Representation
2007 IEEE Conference on Computer Vision and Pattern Recognition Pub Date: 2007-06-17 DOI: 10.1109/CVPR.2007.383402
Haiying Guan, M. Turk
{"title":"The Hierarchical Isometric Self-Organizing Map for Manifold Representation","authors":"Haiying Guan, M. Turk","doi":"10.1109/CVPR.2007.383402","DOIUrl":"https://doi.org/10.1109/CVPR.2007.383402","url":null,"abstract":"We present an algorithm, Hierarchical ISOmetric Self-Organizing Map (H-ISOSOM), for a concise, organized manifold representation of complex, non-linear, large scale, high-dimensional input data in a low dimensional space. The main contribution of our algorithm is threefold. First, we modify the previous ISOSOM algorithm by a local linear interpolation (LLl) technique, which maps the data samples from low dimensional space back to high dimensional space and makes the complete mapping pseudo-invertible. The modified-ISOSOM (M-ISOSOM) follows the global geometric structure of the data, and also preserves local geometric relations to reduce the nonlinear mapping distortion and make the learning more accurate. Second, we propose the H-ISOSOM algorithm for the computational complexity problem of Isomap, SOM and LLI and the nonlinear complexity problem of the highly twisted manifold. H-ISOSOM learns an organized structure of a non-convex, large scale manifold and represents it by a set of hierarchical organized maps. The hierarchical structure follows a coarse-to-fine strategy. According to the coarse global structure, it \"unfolds \" the manifold at the coarse level and decomposes the sample data into small patches, then iteratively learns the nonlinearity of each patch in finer levels. The algorithm simultaneously reorganizes and clusters the data samples in a low dimensional space to obtain the concise representation. Third, we give quantitative comparisons of the proposed method with similar methods on standard data sets. Finally, we apply H-ISOSOM to the problem of appearance-based hand pose estimation. Encouraging experimental results validate the effectiveness and efficiency of H-ISOSOM.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121881890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
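For the SOM building block underneath H-ISOSOM, a vanilla self-organizing map training loop is sketched below; the hierarchical decomposition, Isomap-based organization, and local linear interpolation are additional machinery not shown. Sizes and learning schedules are arbitrary.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=5):
    """Plain SOM: each step pulls the best-matching unit and its grid
    neighbours toward a random sample, with decaying rate and radius."""
    rng = np.random.default_rng(seed)
    H, W = grid
    weights = rng.random((H * W, data.shape[1]))
    # 2-D grid coordinates of every unit, for the neighbourhood function
    yx = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing='ij'),
                  axis=-1).reshape(-1, 2)
    for i in range(iters):
        frac = i / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        bmu = np.linalg.norm(weights - x, axis=1).argmin()
        g = np.exp(-np.linalg.norm(yx - yx[bmu], axis=1)**2 / (2 * sigma**2))
        weights += lr * g[:, None] * (x - weights)
    return weights.reshape(H, W, -1)

data = np.random.default_rng(6).random((500, 3))   # stand-in manifold samples
som = train_som(data)
print(som.shape)
```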