2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops: Latest Publications

Physiological modelling for improved reliability in silhouette-driven gradient-based hand tracking
Paris Kaimakis, Joan Lasenby
DOI: 10.1109/CVPRW.2009.5204252
Abstract: We present a gradient-based motion capture system that robustly tracks a human hand, based on abstracted visual information - silhouettes. Despite the ambiguity in the visual data and despite the vulnerability of gradient-based methods in the face of such ambiguity, we minimise problems related to misfit by using a model of the hand's physiology, which is entirely non-visual, subject-invariant, and assumed to be known a priori. By modelling seven distinct aspects of the hand's physiology we derive prior densities which are incorporated into the tracking system within a Bayesian framework. We demonstrate how the posterior is formed, and how our formulation leads to the extraction of the maximum a posteriori estimate using a gradient-based search. Our results demonstrate an enormous improvement in tracking precision and reliability, while also achieving near real-time performance.
Citations: 8
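As a rough illustration of the Bayesian formulation sketched in the abstract above, the following is a minimal MAP-by-gradient-ascent example. It assumes a generic Gaussian prior over joint angles as a stand-in for the paper's physiological priors, a user-supplied silhouette log-likelihood (`log_likelihood`, hypothetical), and numerical gradients; it is not the authors' implementation.

```python
import numpy as np

def log_prior(theta, mu, sigma):
    """Gaussian log-prior over joint angles (stand-in for physiological priors)."""
    return -0.5 * np.sum(((theta - mu) / sigma) ** 2)

def map_estimate(theta0, log_likelihood, mu, sigma, lr=1e-2, n_iter=200, eps=1e-5):
    """Gradient ascent on the log-posterior, using central-difference gradients."""
    theta = np.asarray(theta0, dtype=float).copy()

    def log_post(t):
        return log_likelihood(t) + log_prior(t, mu, sigma)

    for _ in range(n_iter):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (log_post(theta + step) - log_post(theta - step)) / (2 * eps)
        theta += lr * grad  # ascend the posterior
    return theta
```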
Event detection using local binary pattern based dynamic textures
Yunqian Ma, P. Císar̆
DOI: 10.1109/CVPRW.2009.5204204
Abstract: Detecting suspicious events from video surveillance cameras has recently become an important task. Many trajectory-based descriptors have been developed, for example to detect people running or moving in the opposite direction. However, these trajectory-based descriptors do not work well in crowded environments such as airports and rail stations, because they assume perfect motion/object segmentation. In this paper, we present an event detection method using a dynamic texture descriptor, an extension of local binary patterns. The image sequences are divided into regions, and a flow is formed based on the similarity of the dynamic texture descriptors of those regions. We use a real dataset for experiments, and the results are promising.
Citations: 34
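For readers unfamiliar with local binary patterns, the descriptor family extended in this paper, here is a minimal sketch of a spatial 8-neighbour LBP code image and a per-region histogram. The temporal extension used for dynamic textures (patterns along the time axis) is omitted, and the function names are illustrative.

```python
import numpy as np

def lbp_image(gray):
    """3x3 LBP: threshold the 8 neighbours of each pixel against the centre."""
    g = np.asarray(gray, dtype=np.float32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

def region_histogram(code, n_bins=256):
    """Normalised histogram of LBP codes inside one image region."""
    hist, _ = np.histogram(code, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)
```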
Robust facial action recognition from real-time 3D streams
F. Tsalakanidou, S. Malassiotis
DOI: 10.1109/CVPRW.2009.5204281
Abstract: This paper presents a completely automated facial action and facial expression recognition system using 2D + 3D images recorded in real-time by a structured light sensor. It is based on local feature tracking and rule-based classification of geometric, appearance and surface curvature measurements. Good performance is achieved under relatively non-controlled conditions.
Citations: 27
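A toy example of the rule-based classification idea mentioned in the abstract: map a handful of geometric measurements to coarse facial-action labels with hand-set thresholds. The feature names and thresholds below are hypothetical, not the paper's actual rule set.

```python
def classify_action(m):
    """Map normalised geometric measurements to a coarse facial action label."""
    if m.get("mouth_opening", 1.0) > 1.3:          # jaw drop
        return "jaw_drop"
    if m.get("mouth_corner_distance", 1.0) > 1.2:  # widened mouth
        return "smile"
    if m.get("brow_height", 1.0) > 1.15:           # raised eyebrows
        return "brow_raise"
    return "neutral"

# Measurements are expressed relative to a neutral-face baseline of 1.0.
print(classify_action({"mouth_corner_distance": 1.25, "brow_height": 1.0}))  # smile
```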
Incremental Bayesian learning of feature points from natural images
M. Toivanen, J. Lampinen
DOI: 10.1109/CVPRW.2009.5204292
Abstract: Automatically selecting the feature points of an object appearing in images is a difficult but vital task for learning a feature-point-based representation of the object model. In this work we present an incremental Bayesian model that learns the feature points of an object from natural, un-annotated images by matching the corresponding points. The training set is recursively expanded and the model parameters updated after matching each image. The set of nodes in the first image is matched in the second image by sampling the un-normalized posterior distribution with particle filters. For each matched node the model assigns a probability of it being associated with the object, and after a few images have been matched, the nodes with low association probabilities are replaced with new ones to increase the number of object nodes. A feature-point-based representation of the object model is formed from the matched corresponding points. In the tested images, the model matches corresponding points better than the well-known elastic bunch graph matching batch method and gives promising results in recognizing learned object models in novel images.
Citations: 4
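The matching step in the abstract samples an un-normalized posterior with particle filters. A minimal, generic sequential importance resampling step over 2D node positions might look as follows; `unnorm_posterior` and the Gaussian motion model are illustrative assumptions, not the authors' code.

```python
import numpy as np

def particle_filter_step(particles, weights, unnorm_posterior,
                         motion_std=2.0, rng=None):
    """One predict / weight / resample cycle over 2D particle positions."""
    rng = np.random.default_rng() if rng is None else rng
    # Predict: diffuse particles with a simple Gaussian motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: evaluate the un-normalized posterior at each particle.
    weights = weights * np.array([unnorm_posterior(p) for p in particles])
    weights = weights / max(weights.sum(), 1e-12)
    # Resample: multinomial resampling to counter weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```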
Computer vision on tap
Kevin Chiu, R. Raskar
DOI: 10.1109/CVPRW.2009.5204229
Abstract: We demonstrate a concept of computer vision as a secure, live service on the Internet. We show a platform to distribute a real-time vision algorithm using simple, widely available Web technologies such as Adobe Flash. We allow a user to access this service without downloading an executable or sharing the image stream with anyone. We enable developers to publish without distribution complexity. Finally, the platform supports user-permitted aggregation of data for computer vision research or analysis. We describe results for a simple distributed motion detection algorithm. We discuss future scenarios for organically extending the horizon of computer vision research.
Citations: 6
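The demonstrated service distributes a simple motion detection algorithm. A minimal frame-difference detector of that general kind (the threshold and decision rule are illustrative, not the paper's) could look like this:

```python
import numpy as np

def motion_mask(prev_gray, curr_gray, threshold=25):
    """Per-pixel flag: absolute intensity change above a fixed threshold."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > threshold

def motion_detected(prev_gray, curr_gray, min_fraction=0.01, threshold=25):
    """Report motion if a large enough fraction of the frame changed."""
    return motion_mask(prev_gray, curr_gray, threshold).mean() >= min_fraction
```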
Beyond one-to-one feature correspondence: The need for many-to-many matching and image abstraction
Sven J. Dickinson
DOI: 10.1109/CVPRW.2009.5204333
Abstract: Summary form only given. In this paper I briefly review three formulations of the many-to-many matching problem as applied to model acquisition, model indexing, and object recognition. In the first scenario, I will describe the problem of learning a prototypical shape model from a set of exemplars in which the exemplars may not share a single local feature in common. We formulate the problem as a search through the intractable space of feature combinations, or abstractions, to find the "lowest common abstraction" that is derivable from each input exemplar. This abstraction, in turn, defines a many-to-many feature correspondence among the extracted input features.
Citations: 1
Automatic detection of body parts in x-ray images
V. Jeanne, D. Ünay, Vincent Jacquet
DOI: 10.1109/CVPRW.2009.5204353
Abstract: The number of digital images that need to be acquired, analyzed, classified, stored and retrieved in medical centers is growing exponentially with advances in medical imaging technology. Accordingly, medical image classification and retrieval has become a popular topic in recent years. Despite many projects focusing on this problem, proposed solutions are still far from being sufficiently accurate for real-life implementations. Interpreting medical image classification and retrieval as a multi-class classification task, in this work we investigate the performance of five different feature types in an SVM-based learning framework for classifying human body X-ray images into classes corresponding to body parts. Our comprehensive experiments show that four conventional feature types provide performance comparable to the literature with low per-class accuracies, whereas local binary patterns produce not only very good global accuracy but also good class-specific accuracies relative to the features used in the literature.
Citations: 18
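A minimal sketch of the best-performing configuration described in the abstract, LBP histograms fed to a multi-class SVM, assuming scikit-image and scikit-learn are available; the LBP parameters and SVM hyper-parameters are illustrative, not those tuned in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, n_points=8, radius=1):
    """Uniform-LBP histogram used as a global texture descriptor for one image."""
    codes = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one bin for the rest
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def train_body_part_classifier(images, labels):
    """Fit a multi-class SVM on LBP histograms of the training images."""
    features = np.stack([lbp_histogram(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(features, labels)
```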
Robust estimation of stem cell lineages using local graph matching
Min Liu, A. Roy-Chowdhury, G. Reddy
DOI: 10.1109/CVPRW.2009.5204045
Abstract: In this paper, we present a local graph matching based method for tracking cells and cell divisions. This will allow us to estimate the lineages of the cells in a 4D spatio-temporal image stack obtained using fluorescence imaging techniques. We work with plant cells, where the cells are tightly clustered in space and computing correspondences in space and time can be very challenging. The local graph matching method is able to compute the lineages even when significant portions of the images are corrupted due to sensor noise in the imaging process or segmentation errors. The geometric structure and topology of the cells' relative positions are efficiently exploited to solve the tracking problem using the local graph matching technique. The process not only computes the correspondences of cells across spatial and temporal image slices, but is also able to find out where and when cells divide, identify new cells and detect missing ones. Using this method we show experimental results to track the properly segmented cells, and compute cell lineages from images captured over 72 hours, even when some of those images are highly noisy (e.g., missing cells).
Citations: 13
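As a simplified stand-in for the local graph matching described above, frame-to-frame cell correspondence can be posed as an assignment problem over centroid distances; the paper additionally exploits the relative positions and topology of neighbouring cells, which this sketch omits.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(centroids_t, centroids_t1, max_dist=15.0):
    """Return (i, j) pairs matching cells at time t to cells at time t+1."""
    centroids_t = np.asarray(centroids_t, dtype=float)
    centroids_t1 = np.asarray(centroids_t1, dtype=float)
    # Pairwise Euclidean distances between the two sets of centroids.
    cost = np.linalg.norm(centroids_t[:, None, :] - centroids_t1[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Pairs further apart than max_dist are treated as disappeared or new cells.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```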
Automatic symmetry-integrated brain injury detection in MRI sequences
Yu Sun, B. Bhanu, Shiv Bhanu
DOI: 10.1109/CVPRW.2009.5204052
Abstract: This paper presents a fully automated symmetry-integrated brain injury detection method for magnetic resonance imaging (MRI) sequences. Current injury detection methods often require a large amount of training data or a prior model that is applicable only to a limited domain of brain slices, and offer low computational efficiency and robustness. Our proposed approach can detect injuries in a wide variety of brain images because it uses symmetry as a dominant feature and does not rely on any prior models or training phases. The approach consists of the following steps: (a) symmetry-integrated segmentation of brain slices based on a symmetry affinity matrix; (b) computation of the kurtosis and skewness of the symmetry affinity matrix to find potential asymmetric regions; (c) clustering of the pixels in the symmetry affinity matrix using a 3D relaxation algorithm; (d) fusion of the results of (b) and (c) to obtain refined asymmetric regions; (e) a Gaussian mixture model for unsupervised classification of potential asymmetric regions as the set of regions corresponding to brain injuries. Experiments demonstrate the efficacy of the approach.
Citations: 31
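Two of the pipeline steps listed in the abstract, (b) screening regions by the skewness/kurtosis of their symmetry-affinity values and (e) unsupervised Gaussian-mixture classification, can be sketched as follows; the thresholds and region feature vectors are illustrative, and the symmetry affinity computation itself is not reproduced.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.mixture import GaussianMixture

def is_potentially_asymmetric(affinity_values, skew_thresh=1.0, kurt_thresh=3.0):
    """Flag a region whose affinity distribution is strongly skewed or peaked."""
    v = np.asarray(affinity_values, dtype=float).ravel()
    # scipy's kurtosis is excess kurtosis (0 for a Gaussian) by default.
    return abs(skew(v)) > skew_thresh or kurtosis(v) > kurt_thresh

def classify_regions(region_features, n_components=2, seed=0):
    """Unsupervised split of candidate regions; one component is read as 'injury'."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    return gmm.fit_predict(np.asarray(region_features, dtype=float))
```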
Deformable tree models for 2D and 3D branching structures extraction
J. Mille, L. Cohen
DOI: 10.1109/CVPRW.2009.5204049
Abstract: The proposed model is devoted to the segmentation and reconstruction of branching structures, like vascular trees. We rely on an explicit representation of a deformable tree, where topological relationships between segments are modeled. This allows easy posterior interactions and quantitative analysis, such as measuring diameters or lengths of vessels. Starting from a unique user-provided root point, an initial tree is built with a technique relying on minimal paths. Within the constructed tree, the central curve of each segment and an associated variable radius function evolve in order to satisfy a region homogeneity criterion.
Citations: 30
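The initial tree in this method is built with minimal paths from a user-provided root. A minimal Dijkstra-style accumulated-cost computation over a 2D cost map, of the kind such minimal-path techniques rely on, is sketched below; the deformable evolution of centrelines and radii is not shown.

```python
import heapq
import numpy as np

def minimal_path_cost(cost_map, root):
    """Accumulated minimal cost from `root` to every pixel (4-connectivity)."""
    h, w = cost_map.shape
    dist = np.full((h, w), np.inf)
    dist[root] = float(cost_map[root])
    heap = [(dist[root], root)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and d + cost_map[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost_map[ny, nx]
                heapq.heappush(heap, (dist[ny, nx], (ny, nx)))
    return dist
```

A minimal path to any pixel can then be recovered by descending `dist` from that pixel back to the root.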