2007 IEEE Conference on Computer Vision and Pattern Recognition: Latest Publications

Connecting the Out-of-Sample and Pre-Image Problems in Kernel Methods
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383038
P. Arias, G. Randall, G. Sapiro
Abstract: Kernel methods have been widely studied in the field of pattern recognition. These methods implicitly map the data, via the "kernel trick," into a space that is more appropriate for analysis. Many manifold learning and dimensionality reduction techniques are simply kernel methods for which the mapping is explicitly computed. In such cases, two problems related with the mapping arise: the out-of-sample extension and the pre-image computation. In this paper we propose a new pre-image method based on the Nystrom formulation for the out-of-sample extension, showing the connections between both problems. We also address the importance of normalization in the feature space, which has been ignored by standard pre-image algorithms. As an example, we apply these ideas to the Gaussian kernel, and relate our approach to other popular pre-image methods. Finally, we show the application of these techniques in the study of dynamic shapes.
Citations: 75
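To make the two problems the abstract connects concrete, here is a minimal NumPy sketch, assuming a Gaussian kernel and ignoring feature-space centering: a Nystrom-style out-of-sample extension of a kernel eigenvector embedding, and the classical fixed-point pre-image iteration for a feature-space point given as a combination of mapped training samples. It illustrates the standard formulations the paper starts from, not the authors' normalized pre-image method.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_extension(X, x_new, sigma, n_components=2):
    """Embed training set X via kernel eigendecomposition, then extend to x_new.
    Convention: training embedding y_k(i) = v_k(i); the extension
    y_k(x) = (1/lambda_k) * sum_i v_k(i) k(x, x_i) reproduces v_k on X."""
    K = gaussian_kernel(X, X, sigma)
    lam, V = np.linalg.eigh(K)                      # eigenvalues in ascending order
    lam, V = lam[::-1][:n_components], V[:, ::-1][:, :n_components]
    k_new = gaussian_kernel(x_new[None, :], X, sigma).ravel()
    return (V.T @ k_new) / lam                      # out-of-sample coordinates

def gaussian_preimage(X, gamma, sigma, n_iter=50):
    """Fixed-point iteration for the pre-image of Psi = sum_i gamma_i phi(x_i)."""
    z = X[np.argmax(gamma)].copy()                  # a reasonable starting point
    for _ in range(n_iter):
        w = gamma * gaussian_kernel(z[None, :], X, sigma).ravel()
        z = (w @ X) / w.sum()
    return z

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
print(nystrom_extension(X, X[0] + 0.01, sigma=1.0))
print(gaussian_preimage(X, gamma=np.full(50, 1 / 50), sigma=1.0))
```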
Integration of Motion Cues in Optical and Sonar Videos for 3-D Positioning
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383354
S. Negahdaripour, H. Pirsiavash, H. Sekkati
Abstract: Target-based positioning and 3-D target reconstruction are critical capabilities in deploying submersible platforms for a range of underwater applications, e.g., search and inspection missions. While optical cameras provide high resolution and target details, they are constrained by limited visibility range. In highly turbid waters, targets at distances of up to tens of meters can be recorded by high-frequency (MHz) 2-D sonar imaging systems that have been introduced to the commercial market in recent years. Because of their lower resolution and SNR and inferior target detail compared to optical cameras under favorable visibility conditions, the integration of both sensing modalities can enable operation over a wider range of conditions, with generally better performance than deploying either system alone. In this paper, the estimation of the 3-D motion of the integrated system and the 3-D reconstruction of scene features are addressed. We do not require establishing matches between optical and sonar features, referred to as opti-acoustic correspondences, but rather matches in either the sonar or optical motion sequences. In addition to improving the motion estimation accuracy, advantages of the system include overcoming certain inherent ambiguities of monocular vision, e.g., the scale-factor ambiguity and the dual interpretation of planar scenes. We discuss how the proposed solution provides an effective strategy to address the rather complex opti-acoustic stereo matching problem. Experiments with real data demonstrate our technical contribution.
Citations: 9
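A toy sketch of why the two modalities complement each other, under illustrative assumptions (co-located, aligned sensors; a forward-scan sonar that measures range and azimuth but loses elevation; a pinhole camera that measures bearing but loses depth): the sonar range resolves the monocular scale-factor ambiguity the abstract mentions. The axis conventions and function names here are ours, not the authors'.

```python
import numpy as np

def pinhole_project(P, f=1.0):
    """Optical measurement: bearing only; depth Z is lost (monocular scale ambiguity)."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

def sonar_project(P):
    """2-D forward-scan sonar measurement: range and azimuth; elevation is lost."""
    X, Y, Z = P
    return np.array([np.linalg.norm(P), np.arctan2(X, Z)])

def recover_depth(uv, r, f=1.0):
    """Combine one optical bearing (u, v) with a sonar range r for the same point:
    P = Z * (u/f, v/f, 1), so ||P|| = r fixes Z and removes the scale ambiguity.
    Assumes the two sensors are co-located and share one coordinate frame."""
    u, v = uv
    d = np.array([u / f, v / f, 1.0])
    return (r / np.linalg.norm(d)) * d

P = np.array([0.4, -0.2, 3.0])
P_hat = recover_depth(pinhole_project(P), sonar_project(P)[0])
print(P, P_hat)        # identical up to rounding under the co-location assumption
```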
Opti-Acoustic Stereo Imaging, System Calibration and 3-D Reconstruction
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383361
S. Negahdaripour, H. Sekkati, H. Pirsiavash
Abstract: Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of the so-called range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.
Citations: 25
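Maximum likelihood reconstruction from noisy image measurements, as named in the abstract, reduces to nonlinear least squares when the noise is modeled as independent Gaussian. The sketch below stacks an optical pixel residual and a sonar range/azimuth residual for a single scene point and minimizes them with SciPy; the calibration parameters and frame conventions are placeholders, and this is not the paper's conic epipolar formulation or its closed-form range/azimuth initialization.

```python
import numpy as np
from scipy.optimize import least_squares

f = 800.0                                # focal length in pixels (assumed known)
R_so = np.eye(3)                         # optical-to-sonar rotation (assumed calibration)
t_so = np.array([0.0, 0.3, 0.0])         # optical-to-sonar translation in metres (assumed)

def optical_residual(P, uv):
    u, v = f * P[0] / P[2], f * P[1] / P[2]
    return np.array([u - uv[0], v - uv[1]])

def sonar_residual(P, r_theta):
    Ps = R_so @ P + t_so                 # point expressed in the sonar frame
    r, theta = np.linalg.norm(Ps), np.arctan2(Ps[0], Ps[2])
    return np.array([r - r_theta[0], theta - r_theta[1]])

def reconstruct(uv, r_theta, P0):
    """Under independent Gaussian measurement noise, the ML estimate of the 3-D
    point minimizes the stacked reprojection residuals (weights omitted here)."""
    fun = lambda P: np.hstack([optical_residual(P, uv), sonar_residual(P, r_theta)])
    return least_squares(fun, P0).x

# toy usage: slightly noisy measurements of a known point, crude initial guess
P_true = np.array([0.5, -0.2, 4.0])
uv = np.array([f * P_true[0] / P_true[2], f * P_true[1] / P_true[2]]) + 0.5
Ps = R_so @ P_true + t_so
r_theta = np.array([np.linalg.norm(Ps), np.arctan2(Ps[0], Ps[2])]) + 1e-3
print(reconstruct(uv, r_theta, P0=np.array([0.0, 0.0, 2.0])))
```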
Shape from Shading Based on Lax-Friedrichs Fast Sweeping and Regularization Techniques With Applications to Document Image Restoration
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383287
Li Zhang, A. Yip, C. Tan
Abstract: In this paper, we describe a two-pass iterative scheme to solve the general partial differential equation (PDE) related to the shape-from-shading (SFS) problem under both distant and close point light sources. In particular, we discuss its application to restoring warped document images that often appear in everyday snapshots. The proposed method consists of two steps. First, the image irradiance equation is formulated as a static Hamilton-Jacobi (HJ) equation and solved using a fast sweeping strategy with the Lax-Friedrichs Hamiltonian. However, abrupt errors may arise when applying this to real document images due to noise in the approximated shading image. To reduce the noise sensitivity, a minimization method then follows to smooth out the abrupt ridges in the initial result and produce a better reconstruction. Experiments on synthetic surfaces show promising results compared to the ground-truth data. Moreover, a general framework is developed, which demonstrates that the SFS method can help to remove both geometric and photometric distortions in warped document images for better visual appearance and a higher recognition rate.
Citations: 10
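The numerical engine named in the title, Lax-Friedrichs fast sweeping for a static Hamilton-Jacobi equation, follows a simple monotone update. The sketch below applies it to the plain eikonal equation |∇u| = f with a single point source and reflecting ghost values at the grid border; it illustrates the sweeping scheme only, not the authors' image-irradiance Hamiltonian or their regularization pass.

```python
import numpy as np

def lax_friedrichs_sweeping(f, h=1.0, n_sweeps=50):
    """Solve |grad u| = f with u = 0 at the grid centre by Lax-Friedrichs sweeping.
    Each Gauss-Seidel sweep visits the grid in one of four alternating orderings
    and applies the monotone LF update with artificial viscosity sigma."""
    n = f.shape[0]
    u = np.full_like(f, 1e6)
    src = (n // 2, n // 2)
    u[src] = 0.0
    sigma = 1.0                                  # >= max |dH/dp| for H(p,q) = sqrt(p^2 + q^2)
    orders = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    for s in range(n_sweeps):
        oi, oj = orders[s % 4]
        for i in list(range(n))[::oi]:
            for j in list(range(n))[::oj]:
                if (i, j) == src:
                    continue
                uxm = u[i - 1, j] if i > 0 else u[i + 1, j]      # reflecting ghost cells
                uxp = u[i + 1, j] if i < n - 1 else u[i - 1, j]
                uym = u[i, j - 1] if j > 0 else u[i, j + 1]
                uyp = u[i, j + 1] if j < n - 1 else u[i, j - 1]
                H = np.hypot((uxp - uxm) / (2 * h), (uyp - uym) / (2 * h))
                cand = (f[i, j] - H
                        + sigma * (uxp + uxm) / (2 * h)
                        + sigma * (uyp + uym) / (2 * h)) / (2 * sigma / h)
                u[i, j] = min(u[i, j], cand)                     # monotone update
    return u

# middle row approximates the distance from the centre as the sweeps converge
print(lax_friedrichs_sweeping(np.ones((21, 21)))[10, :])
```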
Kinematics from Lines in a Single Rolling Shutter Image
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383119
Omar Ait-Aider, A. Bartoli, N. Andreff
Abstract: Recent work shows that recovering pose and velocity from a single view of a moving rigid object is possible with a rolling shutter camera, based on feature point correspondences. We extend this method to line correspondences. Owing to the combined effect of the rolling shutter and object motion, straight lines are distorted into curves as they are imaged with a rolling shutter camera. Lines thus capture more information than points, which is not the case with standard projection models, for which both points and lines give two constraints. We extend the standard line reprojection error and propose a nonlinear method for retrieving a solution to the pose and velocity computation problem. A careful inspection of the design matrix in the normal equations reveals that it is highly sparse and patterned. We propose a blockwise solution procedure based on bundle-adjustment-like sparse inversion. This makes the nonlinear optimization fast and numerically stable. The method is validated using real data.
Citations: 60
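The geometric effect the abstract relies on, straight lines bending into curves, comes from each image row being exposed at its own time while the object moves. Below is a hedged sketch of such a rolling-shutter projection model, using a small-motion linearization of the pose and illustrative parameter names; it is not the authors' line-based estimator.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rs_project(P, R0, T0, omega, V, f, tau, n_iter=5):
    """Rolling-shutter projection of object point P: row y is exposed at time tau*y,
    and the object pose at that time is linearised as R(t) = (I + t [omega]x) R0,
    T(t) = T0 + t V. The row depends on the exposure time and vice versa, hence the
    small fixed-point loop. All names and conventions here are illustrative."""
    y = 0.0
    for _ in range(n_iter):
        t = tau * y
        Pc = (np.eye(3) + t * skew(omega)) @ R0 @ P + T0 + t * V
        x, y = f * Pc[0] / Pc[2], f * Pc[1] / Pc[2]
    return np.array([x, y])

# points on a straight object edge project to a curve when the object moves
R0, T0 = np.eye(3), np.array([0.0, 0.0, 5.0])
omega, V = np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.0, 0.0])
edge = [np.array([0.2, s, 0.0]) for s in np.linspace(-1, 1, 5)]
print([rs_project(P, R0, T0, omega, V, f=800.0, tau=1e-4) for P in edge])
```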
Layered Depth Panoramas
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383295
K. Zheng, S. B. Kang, Michael F. Cohen, R. Szeliski
Abstract: Representations for interactive photorealistic visualization of scenes range from compact 2D panoramas to data-intensive 4D light fields. In this paper, we propose a technique for creating a layered representation from a sparse set of images taken with a hand-held camera. This representation, which we call a layered depth panorama (LDP), allows the user to experience 3D by off-axis panning. It combines the compelling experience of panoramas with limited 3D navigation. Our choice of representation is motivated by ease of capture and compactness. We formulate the problem of constructing the LDP as the recovery of color and geometry in a multi-perspective cylindrical disparity space. We leverage a graph cut approach to sequentially determine the disparity and color of each layer using multi-view stereo. Geometry visible through the cracks at depth discontinuities in the frontmost layer is determined and assigned to the layers behind it. All layers are then used to render novel panoramic views with parallax. We demonstrate our approach on a variety of complex outdoor and indoor scenes.
Citations: 32
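The cylindrical coordinates underlying the LDP's disparity space are the same ones used for ordinary cylindrical panoramas. A minimal sketch of that pixel-to-cylinder mapping (and its inverse, used to resample source images) is below; it covers only the warp, not the multi-perspective disparity volume or the graph-cut layer assignment.

```python
import numpy as np

def to_cylindrical(u, v, f, cx, cy):
    """Map a pixel (u, v) of a camera with focal length f and principal point
    (cx, cy) to cylindrical panorama coordinates (theta, h): theta is the azimuth
    on the viewing cylinder and h the scaled height."""
    x, y = u - cx, v - cy
    theta = np.arctan2(x, f)
    h = y / np.hypot(x, f)
    return theta, h

def from_cylindrical(theta, h, f, cx, cy):
    """Inverse map, valid for |theta| < pi/2."""
    x = f * np.tan(theta)
    y = h * np.hypot(x, f)
    return x + cx, y + cy

# round trip for a toy pixel
print(from_cylindrical(*to_cylindrical(700.0, 300.0, f=500.0, cx=640.0, cy=360.0),
                       f=500.0, cx=640.0, cy=360.0))
```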
Improved Video Registration using Non-Distinctive Local Image Features
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.382989
Robin Hess, Alan Fern
Abstract: The task of registering video frames with a static model is a common problem in many computer vision domains. The standard approach to registration involves finding point correspondences between the video and the model and using those correspondences to numerically determine registration transforms. Current methods locate video-to-model point correspondences by assembling a set of reference images to represent the model and then detecting and matching invariant local image features between the video frames and the set of reference images. These methods work well when all video frames can be guaranteed to contain a sufficient number of distinctive visual features. However, as we demonstrate, these methods are prone to severe misregistration errors in domains where many video frames lack distinctive image features. To overcome these errors, we introduce a concept of local distinctiveness which allows us to find model matches for nearly all video features, regardless of their distinctiveness on a global scale. We present results from the American football domain, where many video frames lack distinctive image features, which show a drastic improvement in registration accuracy over current methods. In addition, we introduce a simple, empirical stability test that allows our method to be fully automated. Finally, we present a registration dataset from the American football domain that we hope can be used as a benchmarking tool for registration methods.
Citations: 81
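The "standard approach" the abstract describes, matching invariant local features between a video frame and model reference images and fitting a registration transform to the correspondences, looks roughly like the OpenCV sketch below (SIFT, ratio test, RANSAC homography). This is the distinctive-feature baseline the paper improves on, not the proposed non-distinctive-feature method.

```python
import cv2
import numpy as np

def register_frame(frame_gray, reference_gray, ratio=0.75):
    """Estimate the homography mapping a video frame onto a reference image
    using distinctive local features (the standard baseline)."""
    sift = cv2.SIFT_create()
    kf, df = sift.detectAndCompute(frame_gray, None)
    kr, dr = sift.detectAndCompute(reference_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(df, dr, k=2)
            if m.distance < ratio * n.distance]          # Lowe's ratio test
    if len(good) < 4:
        return None                                      # not enough correspondences
    src = np.float32([kf[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kr[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# usage: H = register_frame(cv2.imread("frame.png", 0), cv2.imread("model.png", 0))
```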
Bottom-up Recognition and Parsing of the Human Body
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1007/978-3-540-74198-5_13
Praveen Srinivasan, Jianbo Shi
Citations: 97
Quantifying Facial Expression Abnormality in Schizophrenia by Combining 2D and 3D Features
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383061
Peng Wang, Christiane Köhler, Fred Barrett, R. Gur, R. Gur, R. Verma
Abstract: Most current computer-based facial expression analysis methods focus on the recognition of perfectly posed expressions and hence are incapable of handling individuals with expression impairments. In particular, patients with schizophrenia usually have impaired expressions in the form of "flat" or "inappropriate" affect, which makes the quantification of their facial expressions a challenging problem. This paper presents methods to quantify the group differences between patients with schizophrenia and healthy controls by extracting specialized features and analyzing group differences on a feature manifold. The features include 2D and 3D geometric features, and moment invariants combining both 3D geometry and 2D textures. Facial expression recognition experiments on actors demonstrate that our combined features can better characterize facial expressions than either 2D geometric or texture features. The features are then embedded into an ISOMAP manifold to quantify the group differences between controls and patients. Experiments show that our results are strongly supported by human rating results and clinical findings, thus providing a framework that is able to quantify the abnormality in patients with schizophrenia.
Citations: 23
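A minimal sketch of the manifold step, assuming hypothetical per-subject feature vectors and using scikit-learn's Isomap in place of whatever implementation the authors used: embed controls and patients on a common manifold, then compare the groups in the embedded space.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# hypothetical per-subject feature vectors (2D/3D geometric + moment features)
controls = rng.normal(0.0, 1.0, size=(30, 40))
patients = rng.normal(0.4, 1.0, size=(30, 40))

X = np.vstack([controls, patients])
labels = np.array([0] * len(controls) + [1] * len(patients))

# embed all subjects on a common ISOMAP manifold, then compare the groups there
Y = Isomap(n_neighbors=8, n_components=2).fit_transform(X)
gap = np.linalg.norm(Y[labels == 0].mean(0) - Y[labels == 1].mean(0))
print("group-mean separation in the embedded space:", gap)
```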
Fusion of Face and Palmprint for Personal Identification Based on Ordinal Features
2007 IEEE Conference on Computer Vision and Pattern Recognition, Pub Date: 2007-06-17, DOI: 10.1109/CVPR.2007.383522
R. Chu, Shengcai Liao, Yufei Han, Zhenan Sun, S. Li, T. Tan
Abstract: In this paper, we present a face and palmprint multimodal biometric identification method and system to improve identification performance. Effective classifiers based on ordinal features are constructed for faces and palmprints, respectively. Then, the matching scores from the two classifiers are combined using several fusion strategies. Experimental results on a medium-scale data set demonstrate the effectiveness of the proposed system.
Citations: 23
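Score-level fusion of the two classifiers can be sketched as follows; the weighted sum rule over min-max normalized scores shown here is one common strategy, and the scores and weight are illustrative, not the paper's.

```python
import numpy as np

def minmax_norm(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(face_scores, palm_scores, w_face=0.5):
    """Weighted sum rule over normalized per-gallery-identity matching scores."""
    return w_face * minmax_norm(face_scores) + (1 - w_face) * minmax_norm(palm_scores)

# toy identification: each array holds one probe's similarity to 5 gallery identities
face = [0.61, 0.70, 0.55, 0.58, 0.52]
palm = [0.40, 0.83, 0.40, 0.40, 0.41]
fused = fuse_scores(face, palm)
print("identified as gallery identity", int(np.argmax(fused)))
```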