2010 Canadian Conference on Computer and Robot Vision: Latest Publications

Automated Place Classification Using Object Detection
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.49
P. Viswanathan, T. Southey, J. Little, Alan K. Mackworth
Abstract: Places in an environment can be described by the objects they contain. This paper discusses the completely automated integration of object detection and place classification in a single system. We first perform automated learning of object-place relations from an online annotated database. We then train object detectors on some of the most frequently occurring objects. Finally, we use detection scores as well as learned object-place relations to perform place classification of images. We also discuss areas for improvement and the application of this work to informed visual search. As a whole, the system demonstrates the automated acquisition of training data containing labeled instances (i.e. bounding boxes) and the performance of a state-of-the-art object detection technique trained on this data to perform place classification of realistic indoor scenes.
Citations: 17
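The classifier described above combines per-object detection scores with learned object-place relations. The following is a minimal sketch of that kind of evidence combination, assuming hypothetical place and object names, made-up co-occurrence counts, and a naive Bayes-style scoring rule rather than the authors' exact model.

```python
import numpy as np

# Hypothetical object-place co-occurrence counts (rows: places, columns: objects),
# standing in for relations learned from an online annotated database.
places = ["kitchen", "office", "bathroom"]
objects = ["mug", "monitor", "sink"]
counts = np.array([[40, 2, 30],
                   [10, 50, 1],
                   [2, 1, 45]], dtype=float)

# Convert counts to P(object | place) with Laplace smoothing.
p_obj_given_place = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

def classify_place(detection_scores, prior=None):
    """Score each place by combining detector confidences with object-place relations.

    detection_scores: dict mapping object name -> detector confidence in [0, 1].
    Returns the place with the highest (log-domain) score.
    """
    prior = prior if prior is not None else np.full(len(places), 1.0 / len(places))
    log_scores = np.log(prior)
    for j, obj in enumerate(objects):
        s = detection_scores.get(obj, 0.0)
        # Weight each object's evidence by its detector confidence.
        log_scores += s * np.log(p_obj_given_place[:, j])
    return places[int(np.argmax(log_scores))]

print(classify_place({"mug": 0.9, "sink": 0.7}))   # likely "kitchen"
print(classify_place({"monitor": 0.8}))            # likely "office"
```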
Probabilistic Framework for Feature-Point Matching
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.8
R. Tal, M. Spetsakis
Abstract: In this report we introduce a novel approach for determining correspondence in a sequence of images. We formulate a probabilistic framework that relates a feature's appearance and its position under relaxed statistical assumptions. We employ a Monte-Carlo approximation for the joint probability density of the feature position and its appearance that uses a flexible noise and motion model to generate random samples. The joint probability density is modeled by a Gaussian Mixture. The feature's position given its appearance is then determined by maximizing its posterior. We evaluate our method using real and synthetic sequences and compare its performance with leading or popular algorithms from the literature. The noise robustness of our algorithm is superior under a wide variety of conditions. The method can be applied in the context of optical flow, tracking and any application that needs feature point matching.
Citations: 2
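As a rough illustration of the sample-and-score idea (not the paper's Gaussian-mixture model of the joint density), the sketch below draws candidate positions from a Gaussian motion prior, scores each by a Gaussian appearance likelihood on grey-level patches, and returns the posterior-maximizing sample. The parameter values and the patch-SSD likelihood are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_feature(prev_img, next_img, x0, y0, patch=5,
                  n_samples=500, motion_std=4.0, noise_std=10.0):
    """Monte-Carlo sketch of posterior-maximizing feature matching.

    Samples candidate positions from a Gaussian motion prior around (x0, y0),
    scores each by a Gaussian appearance likelihood (patch SSD), and returns
    the candidate with the highest posterior.
    """
    h = patch // 2
    ref = prev_img[y0 - h:y0 + h + 1, x0 - h:x0 + h + 1].astype(float)
    dx = rng.normal(0.0, motion_std, n_samples)   # candidate displacements
    dy = rng.normal(0.0, motion_std, n_samples)
    best, best_logp = (x0, y0), -np.inf
    for ddx, ddy in zip(dx, dy):
        x, y = int(round(x0 + ddx)), int(round(y0 + ddy))
        if (y - h < 0 or x - h < 0 or
                y + h + 1 > next_img.shape[0] or x + h + 1 > next_img.shape[1]):
            continue
        cand = next_img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
        # log posterior = log appearance likelihood + log motion prior (up to a constant)
        logp = (-np.sum((cand - ref) ** 2) / (2 * noise_std ** 2)
                - (ddx ** 2 + ddy ** 2) / (2 * motion_std ** 2))
        if logp > best_logp:
            best, best_logp = (x, y), logp
    return best
```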
Human Action Recognition Using Salient Opponent-Based Motion Features
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.54
A. Shabani, J. Zelek, David A. Clausi
Abstract: Human action recognition can be performed using multiscale salient features which encode the local events in the video. Existing feature extraction methods use non-causal spatio-temporal filtering, and hence, they are not biologically plausible. To address this inconsistency, new features extracted from a biologically plausible perception model are introduced. In this model, the opponent-based motion energy is computed using oriented motion filters constructed from a bio-inspired time-causal filtering. The salient features are then extracted from the regions of interest in the motion energy map. The extracted opponent-based motion features are then utilized for action classification with a bag-of-words approach. Experiments using a publicly available (Weizmann) data set show 93.5% classification accuracy, which is an improvement over comparable methods.
Citations: 15
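The final stage described above is a standard bag-of-words pipeline. Below is a minimal sketch of that stage only (k-means vocabulary, per-clip histograms, linear SVM), assuming the opponent-based motion descriptors come from a separate extractor; the random arrays and labels are placeholders, not real data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histograms(per_video_descriptors, k=100, kmeans=None):
    """Build bag-of-words histograms from local motion-feature descriptors.

    per_video_descriptors: list of (n_i, d) arrays, one per video clip.
    If no vocabulary is given, one is learned by k-means over all descriptors.
    """
    if kmeans is None:
        kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
        kmeans.fit(np.vstack(per_video_descriptors))
    hists = []
    for desc in per_video_descriptors:
        words = kmeans.predict(desc)
        h = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
        hists.append(h / max(h.sum(), 1.0))          # normalise per clip
    return np.array(hists), kmeans

# Hypothetical training data: random stand-ins for extracted descriptors and labels.
train_desc = [np.random.rand(200, 32) for _ in range(10)]
train_labels = np.arange(10) % 2
X_train, vocab = bow_histograms(train_desc, k=50)
clf = SVC(kernel="linear").fit(X_train, train_labels)

test_desc = [np.random.rand(150, 32)]
X_test, _ = bow_histograms(test_desc, kmeans=vocab)
print(clf.predict(X_test))
```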
Fast FEM-Based Non-Rigid Registration
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.56
K. Popuri, Dana Cobzas, Martin Jägersand
Abstract: In this paper, we present a fast and accurate implementation of the diffusion-based non-rigid registration algorithm. Traditionally, finite differences are used to implement registration algorithms due to their ease of implementation. However, finite differences are sensitive to noise, and they have a narrow numerical stability range. Further, finite differences employ a uniform grid. This is often not desirable in the case of registration, as finer resolution is needed to capture the displacement field in regions that have a high number of image features, as opposed to homogeneous regions with fewer features. On the other hand, the less explored Finite Element Methods are ideal for the non-rigid registration task, as they use a non-uniform discretization of the image domain, placing points based on the local image-feature information. We present such an FEM-based implementation of a popular diffusion-based registration algorithm [8]. Originally, this algorithm was implemented using finite differences. Experimentally, we show that our implementation is much faster than the corresponding finite difference implementation, and that it achieves this computational speed without compromising the accuracy of the non-rigid registration results.
[8] R. Stefanescu, X. Pennec, and N. Ayache, "Grid powered nonlinear image registration with locally adaptive regularization", Medical Image Analysis, 8(3):325–342, 2004.
Citations: 7
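For orientation, the sketch below shows the uniform-grid, finite-difference style of diffusion-regularised registration that the paper's FEM formulation replaces: gradient descent on an SSD term with Gaussian smoothing of the displacement field acting as the diffusion regulariser. The step size, smoothing width, and iteration count are arbitrary illustration values, and the FEM discretisation itself is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def diffusion_register(fixed, moving, n_iter=100, step=0.5, sigma=2.0):
    """Uniform-grid, finite-difference sketch of diffusion-regularised registration
    (the baseline the FEM method improves on): descend the SSD between the fixed
    image and the warped moving image, then smooth the displacement field with a
    Gaussian as the diffusion regulariser.
    """
    uy = np.zeros_like(fixed, dtype=float)
    ux = np.zeros_like(fixed, dtype=float)
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(n_iter):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        gy, gx = np.gradient(warped)
        diff = warped - fixed
        # SSD descent step followed by diffusion (Gaussian) smoothing of the field.
        uy = gaussian_filter(uy - step * diff * gy, sigma)
        ux = gaussian_filter(ux - step * diff * gx, sigma)
    return uy, ux
```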
Construction of a 3D Model of a Real-world Object Using Range Intensity Images
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.48
Masato Kusanagi, Kenji Terabayashi, K. Umeda, G. Godin, M. Rioux
Abstract: Texture mapping is useful for constructing a three-dimensional (3D) model because a realistic 3D model can be obtained efficiently and quickly. This paper proposes a system to construct a 3D model using range intensity images. A range intensity image, also called a reflectance image, is the intensity image acquired simultaneously with the range image captured by an active range sensor. Such an image has an important property: illumination conditions, such as the geometrical arrangement and power of illumination, can be controlled at capture time, which allows the estimation of the reflectance properties of the object. Several methods using range intensity images are improved and combined to construct an effective system: the registration of range images and color images is realized, an omnidirectional geometric model is constructed by registering and integrating multiple range images with range intensity images, and the influence of the illumination environment on the color images is removed. In addition, a method to estimate the illumination color is introduced to compensate for the color of the illumination light. Experiments show the effectiveness of the constructed system for obtaining a realistic 3D model.
Citations: 2
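The colour-compensation step relies on estimating the illumination colour. As a simple stand-in (not the paper's estimator, which exploits the controlled illumination of the range intensity images), here is a grey-world correction sketch that estimates the illuminant as the per-channel mean and neutralises the resulting colour cast.

```python
import numpy as np

def grey_world_correct(rgb):
    """Grey-world sketch of illumination-colour compensation: assume the average
    scene colour is grey, estimate the illuminant as the per-channel mean, and
    rescale the channels to neutralise the colour cast.
    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    illuminant = rgb.reshape(-1, 3).mean(axis=0)          # estimated illumination colour
    gain = illuminant.mean() / np.maximum(illuminant, 1e-6)
    return np.clip(rgb * gain, 0.0, 1.0)
```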
Human Upper Body Pose Recognition Using Adaboost Template for Natural Human Robot Interaction
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.55
Liyuan Li, Kah Eng Hoe, Xinguo Yu, Li Dong, Xinqi Chu
Abstract: In this paper, we propose a novel Adaboost template to recognize human upper body poses from disparity images for natural human robot interaction (HRI). First, the upper body poses of standing persons are classified into seven categories of views. For each category, a mean template, variance template, and percentage template are generated. Then, the template region is divided into positive and negative regions, corresponding to the region of bodies and the surrounding open space. A weak classifier is designed for each pixel in the template. A new EM-like Adaboost learning algorithm is designed to learn the Adaboost template. Different from existing Adaboost classifiers, we show that the Adaboost template can be used not only for recognition but also for adaptive top-down segmentation. By using the Adaboost template, only a few positive samples per category are required for learning. Comparison with conventional template matching techniques has been made, and experimental results show that significant improvements can be achieved in both cases. The method has been deployed in a social robot to estimate human attention to the robot in real-time human robot interaction.
Citations: 8
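To make the "one weak classifier per pixel" idea concrete, the sketch below trains a generic AdaBoost over single-pixel threshold stumps on flattened disparity patches and scores new patches against the resulting template. It is plain AdaBoost, not the paper's EM-like variant with mean, variance, and percentage templates, and the positive/negative patch arrays are assumed inputs.

```python
import numpy as np

def train_pixel_adaboost(pos, neg, n_rounds=50):
    """AdaBoost template sketch: each weak learner is a threshold test on one pixel
    of the flattened disparity patch.
    pos, neg: arrays of shape (n_pos, n_pix) and (n_neg, n_pix).
    Returns a list of (weight, pixel index, threshold, polarity) weak learners.
    """
    X = np.vstack([pos, neg])
    y = np.hstack([np.ones(len(pos)), -np.ones(len(neg))])
    w = np.full(len(y), 1.0 / len(y))
    learners = []
    for _ in range(n_rounds):
        best = None
        for j in range(X.shape[1]):                      # candidate pixel
            thr = X[:, j].mean()
            for s in (1.0, -1.0):                        # polarity
                pred = s * np.sign(X[:, j] - thr)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, s)
        err, j, thr, s = best
        err = np.clip(err, 1e-6, 1 - 1e-6)
        alpha = 0.5 * np.log((1 - err) / err)            # weak-learner weight
        pred = s * np.sign(X[:, j] - thr)
        w *= np.exp(-alpha * y * pred)                   # re-weight training samples
        w /= w.sum()
        learners.append((alpha, j, thr, s))
    return learners

def score_template(x, learners):
    """Signed confidence that a flattened disparity patch matches the pose template."""
    return sum(a * s * np.sign(x[j] - thr) for a, j, thr, s in learners)
```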
Robot Localization in Rough Terrains: Performance Evaluation
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.39
E. F. Ersi, John K. Tsotsos
Abstract: The goal of this paper is to present an overview of two common processes involved in most visual robot localization techniques: data association and robust motion estimation. For each of them, we review some of the available solutions and compare their performance in the context of outdoor robot localization, where the robot is subject to 6-DOF motion. Our experiments with different combinations of data association and motion estimation techniques show the superiority of the Hessian-Affine feature detector and the SIFT feature descriptor for data association, and the Hough Transform for robust motion estimation.
Citations: 1
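The two stages compared in the paper are data association and robust motion estimation. The sketch below pairs SIFT matching with Lowe's ratio test (OpenCV's SIFT detector rather than the Hessian-Affine detector used in the paper) with a Hough-style vote, reduced here to 2D image translation instead of the full 6-DOF motion, purely to illustrate the vote-and-peak idea.

```python
import numpy as np
import cv2

def match_and_vote_translation(img1, img2, bin_size=4, ratio=0.8):
    """Data association via SIFT + ratio test, then a Hough-style vote over
    2D translation bins; the fullest bin wins, which rejects outlier matches.
    A full 6-DOF estimate would accumulate votes in a pose parameter space.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    votes = {}
    for m in good:
        dx = kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0]
        dy = kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1]
        key = (int(round(dx / bin_size)), int(round(dy / bin_size)))
        votes[key] = votes.get(key, 0) + 1
    (bx, by), _ = max(votes.items(), key=lambda kv: kv[1])
    return bx * bin_size, by * bin_size
```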
Water Flow Detection in a Handwashing Task
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.30
B. Taati, Jasper Snoek, David Giesbrecht, Alex Mihailidis
Abstract: Older adults suffering from Alzheimer's disease often require assistance with performing simple activities of daily living, such as washing their hands in the bathroom. This severely limits their independence and places a heavy caregiving burden on their family and the healthcare system. The motivation for developing a water detection algorithm is for it to be used within a system that provides reminding prompts for Alzheimer's sufferers and to study product usability for older adults with cognitive impairments. Water detection in a video sequence poses a challenging computer vision problem since it is difficult to model the flow of water in a structured manner. A real-time detection system is presented here that estimates the presence of flowing water in a bathroom sink during a handwashing task by classifying video and audio features, with an overall accuracy of 88.76%. Visual features are extracted using temporal image derivatives, and hand tracking is used to enhance the robustness of the visual features.
Citations: 13
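A minimal sketch of the visual-feature side: a per-frame motion-energy feature computed from temporal image derivatives inside a sink region, fed to a logistic-regression classifier. The region of interest, the random stand-in features, and the classifier choice are assumptions; the audio features, hand tracking, and reported 88.76% accuracy belong to the full system, not this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def motion_energy_features(frames, roi):
    """Per-frame visual feature: mean absolute temporal derivative inside the sink
    region of interest. frames: (T, H, W) grey-level array; roi: (y0, y1, x0, x1).
    """
    y0, y1, x0, x1 = roi
    crop = frames[:, y0:y1, x0:x1].astype(float)
    dt = np.abs(np.diff(crop, axis=0))                 # temporal image derivative
    return dt.reshape(dt.shape[0], -1).mean(axis=1)    # one scalar per frame pair

# Hypothetical training data: motion-energy values with water-on/off labels.
feats = np.concatenate([np.random.rand(100) * 2 + 3,   # stand-in "water on" values
                        np.random.rand(100)])          # stand-in "water off" values
labels = np.concatenate([np.ones(100), np.zeros(100)])
clf = LogisticRegression().fit(feats.reshape(-1, 1), labels)
print(clf.predict([[4.0], [0.2]]))                     # -> [1., 0.]
```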
Fall in! Sorting a Group of Robots with a Continuous Controller
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.42
Y. Litus, R. Vaughan
Abstract: This paper describes the first robotic system that solves a combinatorial computational problem by means of its own continuous dynamics. The goal of the system is to rearrange a set of robots on a line in a certain predefined order, thereby sorting them. Conventional pairwise between-robot rank comparisons suggested by traditional discrete-state sorting algorithms are avoided by coupling the robots in a Brockett double bracket flow system. A conventional multi-robot simulation with non-holonomic driving, noisy sensor data, collision avoidance and sensor occlusions suggests that this flow system can withstand perturbations introduced into the ideal dynamics by the physical limitations of real robots.
Citations: 7
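The continuous dynamics behind the controller is a Brockett double-bracket flow. The sketch below integrates the matrix flow H' = [H, [H, N]] (with [A, B] = AB - BA) and recovers the input values in ascending order on the diagonal; the mapping from this matrix dynamics onto coupled physical robot positions, as done in the paper, is not reproduced.

```python
import numpy as np

def double_bracket_sort(values, dt=0.005, n_steps=10000, seed=0):
    """Integrate Brockett's double-bracket flow H' = [H, [H, N]], which drives a
    symmetric matrix toward a diagonal matrix whose eigenvalues are ordered to
    match N. Encoding the values as eigenvalues of H and running the flow thus
    'sorts' them by continuous dynamics alone.
    """
    n = len(values)
    rng = np.random.default_rng(seed)
    # Random orthogonal basis to embed the unsorted values as eigenvalues of H.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    H = q @ np.diag(values) @ q.T
    N = np.diag(np.arange(1, n + 1, dtype=float))
    bracket = lambda a, b: a @ b - b @ a
    for _ in range(n_steps):                 # forward-Euler integration of the flow
        H = H + dt * bracket(H, bracket(H, N))
    return np.diag(H)                        # approximately the sorted values

print(double_bracket_sort([3.0, 1.0, 2.0]))  # -> approximately [1., 2., 3.]
```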
Extracting Outlined Planar Clusters of Street Facades from 3D Point Clouds
2010 Canadian Conference on Computer and Robot Vision. Pub Date: 2010-05-31. DOI: 10.1109/CRV.2010.23
K. Hammoudi, F. Dornaika, B. Soheilian, N. Paparoditis
Abstract: This paper presents an approach for extracting 3D outlined planar clusters of street facades. Terrestrial laser data are acquired using a Mobile Mapping System (MMS). Mapping of street facades is of great interest in various digital mapping and robotic research topics. After a filtering step on the 3D point cloud, the dominant hypothetical facade planes are detected using an adapted Progressive Probabilistic Hough Transform (PPHT). The corresponding planar clusters are extracted using a priori geometric knowledge of the street. The clusters are horizontally and vertically delimited using heuristic approaches. The adapted PPHT allows the automatic extraction of georeferenced planar clusters of facades with fine detection of dominant facade lines and low computation time. The adopted approach has been tested on a set of point clouds acquired in the city of Paris under real conditions. Examples and experimental results show the efficiency and the potential of the proposed approach.
Citations: 14
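As an illustration of the dominant-line detection step, the sketch below rasterises the point cloud's ground-plane projection into an occupancy image and runs OpenCV's probabilistic Hough transform (HoughLinesP). The cell size and Hough thresholds are arbitrary, and the paper's adapted PPHT, horizontal/vertical delimitation, and cluster extraction are not reproduced.

```python
import numpy as np
import cv2

def dominant_facade_lines(points_xyz, cell=0.1, hough_thresh=50, min_len=30, max_gap=5):
    """Project a street-level point cloud onto the ground (x, y) plane, rasterise it
    into an occupancy image, and detect dominant facade lines with the probabilistic
    Hough transform. Returned line endpoints are in grid-cell coordinates.
    points_xyz: (N, 3) array of points in metres.
    """
    xy = points_xyz[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    img = np.zeros((ij[:, 1].max() + 1, ij[:, 0].max() + 1), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 255                    # occupancy image (rows = y, cols = x)
    lines = cv2.HoughLinesP(img, 1, np.pi / 180, hough_thresh,
                            minLineLength=min_len, maxLineGap=max_gap)
    # Each detected line, seen from above, is a candidate vertical facade plane.
    return [] if lines is None else [tuple(l[0]) for l in lines]
```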