2009 IEEE Conference on Computer Vision and Pattern Recognition: Latest Publications

Noninvasive volumetric imaging of cardiac electrophysiology
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206717
Linwei Wang, Heye Zhang, Ken C. L. Wong, Huafeng Liu, P. Shi
{"title":"Noninvasive volumetric imaging of cardiac electrophysiology","authors":"Linwei Wang, Heye Zhang, Ken C. L. Wong, Huafeng Liu, P. Shi","doi":"10.1109/CVPR.2009.5206717","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206717","url":null,"abstract":"Volumetric details of cardiac electrophysiology, such as transmembrane potential dynamics and tissue excitability of the myocardium, are of fundamental importance for understanding normal and pathological cardiac mechanisms, and for aiding the diagnosis and treatment of cardiac arrhythmia. Noninvasive observations, however, are made on body surface as an integration-projection of the volumetric phenomena inside patient's heart. We present a physiological-model-constrained statistical framework where prior knowledge of general myocardial electrical activity is used to guide the reconstruction of patient-specific volumetric cardiac electrophysiological details from body surface potential data. Sequential data assimilation with proper computational reduction is developed to estimate transmembrane potential and myocardial excitability inside the heart, which are then utilized to depict arrhythmogenic substrates. Effectiveness and validity of the framework is demonstrated through its application to evaluate the location and extent of myocardial infract using real patient data.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115772469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
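As a rough illustration of the sequential data assimilation step described in the abstract, the sketch below performs a single Kalman-style update, assuming a linearized observation model in which a hypothetical transfer matrix H maps transmembrane potentials on the heart mesh to body-surface potentials; the paper's actual framework uses a nonlinear physiological model with computational reduction, so this is a sketch only.

```python
import numpy as np

def assimilation_step(x_prior, P_prior, y_obs, H, R):
    """One Kalman update: fuse model-predicted transmembrane potentials
    x_prior (n,) and covariance P_prior (n, n) with body-surface potentials
    y_obs (m,), observed through the transfer matrix H (m, n) under
    measurement-noise covariance R (m, m)."""
    S = H @ P_prior @ H.T + R                      # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_post = x_prior + K @ (y_obs - H @ x_prior)   # corrected potentials
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```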
A streaming framework for seamless building reconstruction from large-scale aerial LiDAR data
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206760
Qian-Yi Zhou, U. Neumann
{"title":"A streaming framework for seamless building reconstruction from large-scale aerial LiDAR data","authors":"Qian-Yi Zhou, U. Neumann","doi":"10.1109/CVPR.2009.5206760","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206760","url":null,"abstract":"We present a streaming framework for seamless building reconstruction from huge aerial LiDAR point sets. By storing data as stream files on hard disk and using main memory as only a temporary storage for ongoing computation, we achieve efficient out-of-core data management. This gives us the ability to handle data sets with hundreds of millions of points in a uniform manner. By adapting a building modeling pipeline into our streaming framework, we create the whole urban model of Atlanta from 17.7 GB LiDAR data with 683 M points in under 25 hours using less than 1 GB memory. To integrate this complex modeling pipeline with our streaming framework, we develop a state propagation mechanism, and extend current reconstruction algorithms to handle the large scale of data.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115881360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 52
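The out-of-core idea is easy to sketch: read the point cloud in fixed-size chunks and keep only running state in memory. The snippet below assumes points stored as flat binary float64 (x, y, z) triples in a hypothetical file `atlanta.bin`; the paper's framework additionally propagates per-tile processing state through the stream.

```python
import numpy as np

def stream_points(path, chunk_points=1_000_000):
    """Yield (n, 3) point chunks without loading the whole file into RAM."""
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_points * 3 * 8)   # 3 float64 values per point
            if not buf:
                break
            yield np.frombuffer(buf, dtype=np.float64).reshape(-1, 3)

# Usage: keep only running statistics in memory, never the full cloud, e.g.
# zmax = max(chunk[:, 2].max() for chunk in stream_points("atlanta.bin"))
```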
Image deblurring for less intrusive iris capture
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206700
Xinyu Huang, Liu Ren, Ruigang Yang
{"title":"Image deblurring for less intrusive iris capture","authors":"Xinyu Huang, Liu Ren, Ruigang Yang","doi":"10.1109/CVPR.2009.5206700","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206700","url":null,"abstract":"For most iris capturing scenarios, captured iris images could easily blur when the user is out of the depth of field (DOF) of the camera, or when he or she is moving. The common solution is to let the user try the capturing process again as the quality of these blurred iris images is not good enough for recognition. In this paper, we propose a novel iris deblurring algorithm that can be used to improve the robustness and nonintrusiveness for iris capture. Unlike other iris deblurring algorithms, the key feature of our algorithm is that we use the domain knowledge inherent in iris images and iris capture settings to improve the performance, which could be in the form of iris image statistics, characteristics of pupils or highlights, or even depth information from the iris capturing system itself. Our experiments on both synthetic and real data demonstrate that our deblurring algorithm can significantly restore blurred iris patterns and therefore improve the robustness of iris capture.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115551023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
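Once a blur kernel has been estimated (in the paper, with the help of domain cues such as iris statistics, pupil/highlight characteristics, or depth), the non-blind restoration step can be sketched with a standard Wiener filter, shown below as a simplified stand-in for the paper's deblurring procedure.

```python
import numpy as np

def wiener_deblur(blurred, kernel, snr=100.0):
    """Deconvolve a grayscale image with a known blur kernel (Wiener filter).
    Assumes the kernel is arranged with its center at the origin
    (np.fft.ifftshift can be used to do this)."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)  # Wiener inverse filter
    return np.real(np.fft.ifft2(W * B))
```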
Marked point processes for crowd counting
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206621
Weina Ge, R. Collins
{"title":"Marked point processes for crowd counting","authors":"Weina Ge, R. Collins","doi":"10.1109/CVPR.2009.5206621","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206621","url":null,"abstract":"A Bayesian marked point process (MPP) model is developed to detect and count people in crowded scenes. The model couples a spatial stochastic process governing number and placement of individuals with a conditional mark process for selecting body shape. We automatically learn the mark (shape) process from training video by estimating a mixture of Bernoulli shape prototypes along with an extrinsic shape distribution describing the orientation and scaling of these shapes for any given image location. The reversible jump Markov Chain Monte Carlo framework is used to efficiently search for the maximum a posteriori configuration of shapes, leading to an estimate of the count, location and pose of each person in the scene. Quantitative results of crowd counting are presented for two publicly available datasets with known ground truth.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114337045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 263
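A reversible-jump MCMC search over person configurations can be sketched with just birth and death moves, assuming a user-supplied `score(config)` proportional to the posterior and a `propose_person()` hypothesis generator; the paper additionally learns Bernoulli shape prototypes and uses a richer move set, and the acceptance test below omits the proposal-ratio terms for brevity.

```python
import random

def rjmcmc_count(score, propose_person, n_iters=10000):
    """Search for the MAP configuration of person hypotheses.
    score(config) must handle the empty configuration."""
    config, best = [], []
    for _ in range(n_iters):
        if not config or random.random() < 0.5:          # birth move
            cand = config + [propose_person()]
        else:                                             # death move
            cand = list(config)
            cand.pop(random.randrange(len(cand)))
        # Simplified Metropolis-Hastings acceptance on the posterior ratio
        if random.random() < min(1.0, score(cand) / max(score(config), 1e-300)):
            config = cand
        if score(config) > score(best):
            best = list(config)
    return best   # estimated count is len(best)
```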
Active volume models for 3D medical image segmentation
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206563
Tian Shen, Hongsheng Li, Z. Qian, Xiaolei Huang
{"title":"Active volume models for 3D medical image segmentation","authors":"Tian Shen, Hongsheng Li, Z. Qian, Xiaolei Huang","doi":"10.1109/CVPR.2009.5206563","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206563","url":null,"abstract":"In this paper, we propose a novel predictive model for object boundary, which can integrate information from any sources. The model is a dynamic “object” model whose manifestation includes a deformable surface representing shape, a volumetric interior carrying appearance statistics, and an embedded classifier that separates object from background based on current feature information. Unlike Snakes, Level Set, Graph Cut, MRF and CRF approaches, the model is “self-contained” in that it does not model the background, but rather focuses on an accurate representation of the foreground object's attributes. As we will show, however, the model is capable of reasoning about the background statistics thus can detect when is change sufficient to invoke a boundary decision. The shape of the 3D model is considered as an elastic solid, with a simplex-mesh (i.e. finite element triangulation) surface made of thousands of vertices. Deformations of the model are derived from a linear system that encodes external forces from the boundary of a Region of Interest (ROI), which is a binary mask representing the object region predicted by the current model. Efficient optimization and fast convergence of the model are achieved using the Finite Element Method (FEM). Other advantages of the model include the ease of dealing with topology changes and its ability to incorporate human interactions. Segmentation and validation results are presented for experiments on noisy 3D medical images.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114521024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
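The FEM-driven deformation can be illustrated by one semi-implicit update of the mesh vertices, assuming a precomputed stiffness matrix A (from the triangulated surface) and external forces F derived from the predicted ROI boundary; both are placeholders for the paper's actual linear system.

```python
import numpy as np

def deform_step(X, A, F, tau=0.1):
    """One semi-implicit deformation step for a simplex-mesh surface.
    X: (n, 3) vertex positions; A: (n, n) internal stiffness matrix;
    F: (n, 3) external forces. Solves (I + tau*A) X_new = X + tau*F."""
    n = X.shape[0]
    return np.linalg.solve(np.eye(n) + tau * A, X + tau * F)
```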
Discriminatively trained particle filters for complex multi-object tracking
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206801
Robin Hess, Alan Fern
{"title":"Discriminatively trained particle filters for complex multi-object tracking","authors":"Robin Hess, Alan Fern","doi":"10.1109/CVPR.2009.5206801","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206801","url":null,"abstract":"This work presents a discriminative training method for particle filters in the context of multi-object tracking. We are motivated by the difficulty of hand-tuning the many model parameters for such applications and also by results in many application domains indicating that discriminative training is often superior to generative training methods. Our learning approach is tightly integrated into the actual inference process of the filter and attempts to directly optimize the filter parameters in response to observed errors. We present experimental results in the challenging domain of American football where our filter is trained to track all 22 players throughout football plays. The training method is shown to significantly improve performance of the tracker and to significantly outperform two recent particle-based multi-object tracking methods.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116079486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 135
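The paper integrates learning directly into the filter's inference; the sketch below shows only the simpler outer-loop idea of adjusting filter parameters against observed tracking errors, with a hypothetical `track_error` callable and finite-difference gradients standing in for their tightly integrated procedure.

```python
import numpy as np

def train_filter(track_error, theta0, lr=0.01, eps=1e-3, n_steps=100):
    """Descend the tracking error measured on training video.
    track_error(theta) is assumed to run the particle filter with
    parameters theta and return a scalar loss."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):          # finite-difference gradient
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (track_error(theta + d) - track_error(theta - d)) / (2 * eps)
        theta -= lr * grad
    return theta
```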
An instance selection approach to Multiple instance Learning
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206655
Zhouyu Fu, A. Robles-Kelly
{"title":"An instance selection approach to Multiple instance Learning","authors":"Zhouyu Fu, A. Robles-Kelly","doi":"10.1109/CVPR.2009.5206655","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206655","url":null,"abstract":"Multiple-instance learning (MIL) is a new paradigm of supervised learning that deals with the classification of bags. Each bag is presented as a collection of instances from which features are extracted. In MIL, we have usually confronted with a large instance space for even moderately sized data sets since each bag may contain many instances. Hence it is important to design efficient instance pruning and selection techniques to speed up the learning process without compromising on the performance. In this paper, we address the issue of instance selection in multiple instance learning and propose the IS-MIL, an instance selection framework for MIL, to tackle large-scale MIL problems. IS-MIL is based on an alternative optimisation framework by iteratively repeating the steps of instance selection/updating and classifier learning, which is guaranteed to converge. Experimental results demonstrate the utility and efficiency of the proposed approach compared to the alternatives.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123316711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
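The alternating structure of IS-MIL can be sketched as below, assuming bags are NumPy arrays of instance features and using an sklearn linear SVM; selecting each positive bag's highest-scoring instance is a common MIL heuristic used here for illustration, not necessarily the paper's exact selection rule.

```python
import numpy as np
from sklearn.svm import LinearSVC

def is_mil(pos_bags, neg_bags, n_iters=5):
    """Alternate between instance selection/updating and classifier learning."""
    reps = [bag.mean(axis=0) for bag in pos_bags]   # initial representatives
    neg = np.vstack(neg_bags)
    for _ in range(n_iters):
        X = np.vstack([np.vstack(reps), neg])
        y = np.r_[np.ones(len(reps)), -np.ones(len(neg))]
        clf = LinearSVC().fit(X, y)                 # classifier-learning step
        reps = [bag[np.argmax(clf.decision_function(bag))]  # selection step
                for bag in pos_bags]
    return clf
```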
A family of contextual measures of similarity between distributions with application to image retrieval
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206505
F. Perronnin, Yan Liu, J. Renders
{"title":"A family of contextual measures of similarity between distributions with application to image retrieval","authors":"F. Perronnin, Yan Liu, J. Renders","doi":"10.1109/CVPR.2009.5206505","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206505","url":null,"abstract":"We introduce a novel family of contextual measures of similarity between distributions: the similarity between two distributions q and p is measured in the context of a third distribution u. In our framework any traditional measure of similarity / dissimilarity has its contextual counterpart. We show that for two important families of divergences (Bregman and Csisz'ar), the contextual similarity computation consists in solving a convex optimization problem. We focus on the case of multinomials and explain how to compute in practice the similarity for several well-known measures. These contextual measures are then applied to the image retrieval problem. In such a case, the context u is estimated from the neighbors of a query q. One of the main benefits of our approach lies in the fact that using different contexts, and especially contexts at multiple scales (i.e. broad and narrow contexts), provides different views on the same problem. Combining the different views can improve retrieval accuracy. We will show on two very different datasets (one of photographs, the other of document images) that the proposed measures have a relatively small positive impact on macro Average Precision (which measures purely ranking) and a large positive impact on micro Average Precision (which measures both ranking and consistency of the scores across multiple queries).","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123443148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
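For intuition only, the sketch below instantiates one plausible contextual measure for multinomials: the best KL divergence to p over mixtures of the query q with the context u, a one-dimensional convex problem. The exact formulation the paper derives for Bregman and Csiszár divergences differs; treat this as an assumed illustrative form.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl(a, b, eps=1e-12):
    """KL divergence between two multinomial parameter vectors."""
    return float(np.sum(a * np.log((a + eps) / (b + eps))))

def contextual_sim(q, p, u):
    """Minimize KL(t*u + (1-t)*q || p) over the mixing weight t in [0, 1]."""
    res = minimize_scalar(lambda t: kl(t * u + (1 - t) * q, p),
                          bounds=(0.0, 1.0), method="bounded")
    return res.fun
```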
Discriminative subvolume search for efficient action detection
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206671
Junsong Yuan, Zicheng Liu, Ying Wu
{"title":"Discriminative subvolume search for efficient action detection","authors":"Junsong Yuan, Zicheng Liu, Ying Wu","doi":"10.1109/CVPR.2009.5206671","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206671","url":null,"abstract":"Actions are spatio-temporal patterns which can be characterized by collections of spatio-temporal invariant features. Detection of actions is to find the re-occurrences (e.g. through pattern matching) of such spatio-temporal patterns. This paper addresses two critical issues in pattern matching-based action detection: (1) efficiency of pattern search in 3D videos and (2) tolerance of intra-pattern variations of actions. Our contributions are two-fold. First, we propose a discriminative pattern matching called naive-Bayes based mutual information maximization (NBMIM) for multi-class action categorization. It improves the state-of-the-art results on standard KTH dataset. Second, a novel search algorithm is proposed to locate the optimal subvolume in the 3D video space for efficient action detection. Our method is purely data-driven and does not rely on object detection, tracking or background subtraction. It can well handle the intra-pattern variations of actions such as scale and speed variations, and is insensitive to dynamic and clutter backgrounds and even partial occlusions. The experiments on versatile datasets including KTH and CMU action datasets demonstrate the effectiveness and efficiency of our method.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121698973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 317
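Given a voxel score volume built from per-feature NBMIM votes, subvolume search means finding the axis-aligned box of maximum total score. The sketch below does this exhaustively on a coarse corner grid using a summed-volume table; the paper's branch-and-bound search reaches the same optimum far more efficiently.

```python
import numpy as np
from itertools import combinations

def max_subvolume(S, step=8):
    """Max-sum box search over a 3-D score volume S indexed (x, y, t)."""
    I = np.pad(S, ((1, 0), (1, 0), (1, 0))).cumsum(0).cumsum(1).cumsum(2)
    def box(x0, x1, y0, y1, t0, t1):       # sum of S[x0:x1, y0:y1, t0:t1]
        return (I[x1, y1, t1] - I[x0, y1, t1] - I[x1, y0, t1] - I[x1, y1, t0]
                + I[x0, y0, t1] + I[x0, y1, t0] + I[x1, y0, t0] - I[x0, y0, t0])
    corners = [range(0, n + 1, step) for n in S.shape]
    best, argbox = -np.inf, None
    for x0, x1 in combinations(corners[0], 2):
        for y0, y1 in combinations(corners[1], 2):
            for t0, t1 in combinations(corners[2], 2):
                v = box(x0, x1, y0, y1, t0, t1)
                if v > best:
                    best, argbox = v, (x0, x1, y0, y1, t0, t1)
    return best, argbox
```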
Symmetry integrated region-based image segmentation
2009 IEEE Conference on Computer Vision and Pattern Recognition. Pub Date: 2009-06-20. DOI: 10.1109/CVPR.2009.5206570
Yu Sun, B. Bhanu
{"title":"Symmetry integrated region-based image segmentation","authors":"Yu Sun, B. Bhanu","doi":"10.1109/CVPR.2009.5206570","DOIUrl":"https://doi.org/10.1109/CVPR.2009.5206570","url":null,"abstract":"Symmetry is an important cue for machine perception that involves high-level knowledge of image components. Unlike most of the previous research that only computes symmetry in an image, this paper integrates symmetry with image segmentation to improve the segmentation performance. The symmetry integration is used to optimize both the segmentation and the symmetry of regions simultaneously. Interesting points are initially extracted from an image and they are further refined for detecting symmetry axis. A symmetry affinity matrix is used explicitly as a constraint in a region growing algorithm in order to refine the symmetry of segmented regions. Experimental results and comparisons from a wide domain of images indicate a promising improvement by symmetry integrated image segmentation compared to other image segmentation methods that do not exploit symmetry.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123959175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
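Symmetry-constrained region growing can be sketched as a flood fill whose joining criterion combines intensity similarity with a precomputed symmetry-affinity map (which the paper derives from detected symmetry axes); the map, thresholds and 4-connectivity below are illustrative assumptions.

```python
from collections import deque
import numpy as np

def grow_region(img, sym, seed, tol=10.0, sym_min=0.5):
    """Grow a region from seed (y, x) in a grayscale image img, admitting a
    pixel only if it is close in intensity to the seed AND its symmetry
    affinity sym[y, x] (in [0, 1]) is high enough."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(img[ny, nx]) - float(img[seed])) < tol
                    and sym[ny, nx] >= sym_min):
                region[ny, nx] = True
                q.append((ny, nx))
    return region
```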