Latest publications from The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)

Automatic Classification of Outdoor Images by Region Matching
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.15
O. V. Kaick, Greg Mori
{"title":"Automatic Classification of Outdoor Images by Region Matching","authors":"O. V. Kaick, Greg Mori","doi":"10.1109/CRV.2006.15","DOIUrl":"https://doi.org/10.1109/CRV.2006.15","url":null,"abstract":"This paper presents a novel method for image classification. It differs from previous approaches by computing image similarity based on region matching. Firstly, the images to be classified are segmented into regions or partitioned into regular blocks. Next, low-level features are extracted from each segment or block, and the similarity between two images is computed as the cost of a pairwise matching of regions according to their related features. Experiments are performed to verify that the proposed approach improves the quality of image classification. In addition, unsupervised clustering results are presented to verify the efficacy of this image similarity measure.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116046159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
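The abstract leaves the region features and the matching algorithm unspecified; the following minimal Python sketch only illustrates the idea of scoring image similarity as the cost of a pairwise matching of regions. The mean-colour block features, the regular block partition, and the Hungarian (optimal assignment) solver are assumptions, not the authors' method.

# Sketch of region-matching image similarity (assumption: Hungarian
# assignment over per-region feature distances; the paper's exact
# features and matching scheme may differ).
import numpy as np
from scipy.optimize import linear_sum_assignment

def region_features(image, n_blocks=4):
    """Partition an RGB image into regular blocks and return one
    mean-colour feature vector per block."""
    h, w, _ = image.shape
    feats = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            block = image[i * h // n_blocks:(i + 1) * h // n_blocks,
                          j * w // n_blocks:(j + 1) * w // n_blocks]
            feats.append(block.reshape(-1, 3).mean(axis=0))
    return np.array(feats)

def image_similarity(img_a, img_b):
    """Similarity = negative cost of the optimal pairwise matching of
    regions according to their features."""
    fa, fb = region_features(img_a), region_features(img_b)
    cost = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return -cost[rows, cols].sum()

# Example on two random "images".
rng = np.random.default_rng(0)
a, b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
print(image_similarity(a, b))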
Extracting and tracking Colon’s "Pattern" from Colonoscopic Images
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.35
Hanene Chettaoui, G. Thomann, C. Amar, T. Redarce
{"title":"Extracting and tracking Colon’s \"Pattern\" from Colonoscopic Images","authors":"Hanene Chettaoui, G. Thomann, C. Amar, T. Redarce","doi":"10.1109/CRV.2006.35","DOIUrl":"https://doi.org/10.1109/CRV.2006.35","url":null,"abstract":"In this paper, we propose a new method for \"pattern\" extraction and tracking from endoscopic images. During colonoscopic intervention, the endoscope advance slowly. Therefore the displacement of the endoscope tool between two successive images is small. In this condition, it is possible to predict the set of possible positions of the target. We use this idea to develop two methods. The first method presented is based on the region growth. The information of continuity was used to extract and track colon \"pattern\" with resolving the traditional problem of this technique: the identification of the seed point. In the second method, we introduce a notion of distance between two successive images that the \"pattern\" cannot exceed. We also propose criteria of shape to identify diverticula. A set of endoscopic images is tested to demonstrate the effectiveness of the proposed approaches. An interpretation of the results and the possible amelioration is presented.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121101338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12
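The first method in the abstract relies on region growing from a seed point predicted from the previous frame. The minimal sketch below shows the basic growing loop; the intensity-similarity acceptance rule is an assumption and stands in for the paper's continuity criterion.

# Minimal region-growing sketch: grow from a seed predicted from the
# previous frame, accepting 4-connected pixels whose intensity is close
# to the running region mean (assumed criterion).
import numpy as np
from collections import deque

def grow_region(gray, seed, tol=15.0):
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_n = float(gray[seed]), 1
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(gray[ny, nx]) - region_sum / region_n) <= tol:
                    mask[ny, nx] = True
                    region_sum += float(gray[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return mask

# Example: a bright blob on a dark background; the seed would come from
# the predicted "pattern" position in the previous frame.
img = np.zeros((100, 100)); img[40:60, 40:60] = 200
print(grow_region(img, (50, 50)).sum())   # ~400 pixels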
Integrating Animated Pedagogical Agent as Motivational Supporter into Interactive System
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.43
P. D. Silva, A. Madurapperuma, A. Marasinghe, M. Osano
{"title":"Integrating Animated Pedagogical Agent as Motivational Supporter into Interactive System","authors":"P. D. Silva, A. Madurapperuma, A. Marasinghe, M. Osano","doi":"10.1109/CRV.2006.43","DOIUrl":"https://doi.org/10.1109/CRV.2006.43","url":null,"abstract":"In modern world, children are interested in interacting with computers in many ways, for e.g. game playing, ELearning, chatting etc. This interest could be effectively exploited to develop their personality by creating interactive systems that adapt to different emotional states and intensities of children interacting with them. Many of the existing games are designed to beat the children rather than encourage them to win. Further, many of these systems do not take neither the emotional state nor the intensity of emotions into consideration. In this paper we present an interactive multi-agent based system that recognizes child’s emotion. A social agent uses cognitive and non-cognitive factors to estimate a child’s intensity of emotions in real time and an autonomous/intelligent agent uses cognitive and non-cognitive factors to estimate a child’s intensity of emotions in real time and an autonomous/intelligent agent uses an adaptation model based on the intensity of child’s emotion to change the game status. An animated pedagogical agent gives motivational help to encourage the adaptation of the system in an interactive manner. Results show that affective gesture recognition model recognizes a child’s emotion with a considerably higher rate of over 82.5% and the social agent (estimate intensity of emotion) has strong relationship with observers’ feedback except in low intensity levels.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126632295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Tracking 3D free form object in video sequence
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.79
D. Merad, Jean-Yves Didier, Mihaela Scuturici
{"title":"Tracking 3D free form object in video sequence","authors":"D. Merad, Jean-Yves Didier, Mihaela Scuturici","doi":"10.1109/CRV.2006.79","DOIUrl":"https://doi.org/10.1109/CRV.2006.79","url":null,"abstract":"In this paper we describe an original method for the 3D free form object tracking in monocular vision. The main contribution of this article is the use of the skeleton of an object in order to recognize, locate and track this object in real time. Indeed, the use of this kind of representation made it possible to avoid difficulties related to the absence of prominent elements in free form objects (which makes the matching process easier). The skeleton is a lower dimension representation of the object, it is homotopic and it has a graph structure. This allowed us to use powerful tools of the graph theory in order to perform matching between scene objects and models (recognition step). Thereafter, we used skeleton extremities as interest points for the tracking. Keywords: Tracking, 3D free form object, Skeletonization, Graph matching.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128459864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
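The tracker uses skeleton extremities as interest points. The small sketch below illustrates that step on a 2D binary silhouette using skimage's skeletonize; the 2D setting is an assumption, and the paper's 3D skeletonization and graph matching are not reproduced.

# Sketch: skeletonize a binary silhouette and locate its extremities
# (endpoints), which serve as interest points for tracking.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_extremities(mask):
    skel = skeletonize(mask.astype(bool))
    # An endpoint is a skeleton pixel with exactly one skeleton neighbour.
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    return np.argwhere(skel & (neighbours == 1))

# Example: a simple L-shaped object.
obj = np.zeros((60, 60), dtype=bool)
obj[10:50, 25:32] = True
obj[43:50, 25:55] = True
print(skeleton_extremities(obj))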
Colour-Gradient Redundancy for Real-time Spatial Pose Tracking in Autonomous Robot Navigation
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.22
H. D. Ruiter, B. Benhabib
{"title":"Colour-Gradient Redundancy for Real-time Spatial Pose Tracking in Autonomous Robot Navigation","authors":"H. D. Ruiter, B. Benhabib","doi":"10.1109/CRV.2006.22","DOIUrl":"https://doi.org/10.1109/CRV.2006.22","url":null,"abstract":"Mobile-robot interception or rendezvous with a maneuvering target requires the target’s pose to be tracked. This paper presents a novel 6 degree-of-freedom pose tracking algorithm. This algorithm incorporates an initial-pose estimation scheme to initiate tracking, operates in real-time, and, is robust to large motions. Initial-pose estimation is performed using the on-screen position and size of the target to extract 3D position, and, Principal Component Analysis (PCA) to extract orientation. Real-time operation is achieved by using GPU-based filters and a novel data-reduction algorithm. This data reduction algorithm exploits an important property of colour images, namely, that the gradients of all colour channels are generally aligned. A processing rate of approximately 60 to 85 fps was obtained. Multi-scale optical-flow has been adapted for use in the tracker, to increase robustness to larger motions.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128254226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
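The data-reduction idea rests on the observation that the gradients of all colour channels are generally aligned. Below is a CPU-side sketch of one possible reduction rule, keeping only pixels whose per-channel gradients are strong and mutually aligned; the thresholds and the exact rule are assumptions, and the paper's GPU filters are not reproduced.

# Keep only pixels where the R, G and B gradients are strong and point
# in nearly the same direction; discard the rest as redundant.
import numpy as np

def aligned_gradient_mask(image, mag_thresh=0.05, align_thresh=0.9):
    # np.gradient returns (gy, gx), each of shape (H, W, 3).
    gy, gx = np.gradient(image.astype(float), axis=(0, 1))
    grads = np.stack([gy, gx], axis=-1)            # (H, W, 3, 2)
    mags = np.linalg.norm(grads, axis=-1)          # (H, W, 3)
    units = grads / (mags[..., None] + 1e-9)
    # Cosine of each channel's gradient against the mean direction.
    mean_dir = units.mean(axis=2, keepdims=True)
    mean_dir /= np.linalg.norm(mean_dir, axis=-1, keepdims=True) + 1e-9
    cos = (units * mean_dir).sum(axis=-1)          # (H, W, 3)
    strong = (mags > mag_thresh).all(axis=-1)
    aligned = (cos > align_thresh).all(axis=-1)
    return strong & aligned

rng = np.random.default_rng(1)
img = rng.random((120, 160, 3))
img[:, 80:] += 1.0                                 # vertical colour edge
mask = aligned_gradient_mask(img)
print(mask.sum(), "of", mask.size, "pixels retained")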
Autonomous fish tracking by ROV using Monocular Camera
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.16
Jun Zhou, C. Clark
{"title":"Autonomous fish tracking by ROV using Monocular Camera","authors":"Jun Zhou, C. Clark","doi":"10.1109/CRV.2006.16","DOIUrl":"https://doi.org/10.1109/CRV.2006.16","url":null,"abstract":"This paper concerns the autonomous tracking of fish using a Remotely Operated Vehicle (ROV) equipped with a single camera. An efficient image processing algorithm is presented that enables pose estimation of a particular species of fish - a Large Mouth Bass. The algorithm uses a series of filters including the Gabor filter for texture, projection segmentation, and geometrical shape feature extraction to find the fishes distinctive dark lines that mark the body and tail. Feature based scaling then produces the position and orientation of the fish relative to the ROV. By implementing this algorithm on each frame of a series of video frames, successive relative state estimates can be obtained which are fused across time via a Kalman Filter. Video taken from a VideoRay MicroROV operating within Paradise Lake, Ontario, Canada was used to demonstrate off-line fish state estimation. In the future, this approach will be integrated within a closed-loop controller that allows the robot to autonomously follow the fish and monitor its behavior.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"15 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131070865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 32
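The per-frame estimates are fused across time via a Kalman filter. A minimal constant-velocity sketch over the fish's relative (x, y) position follows; the state model, frame rate and noise covariances are assumptions, not the paper's values.

# Constant-velocity Kalman filter fusing noisy per-frame position
# estimates of the fish relative to the ROV (assumed model).
import numpy as np

dt = 1.0 / 30.0                                    # video frame period
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])      # constant-velocity model
H = np.hstack([np.eye(2), np.zeros((2, 2))])       # position is observed
Q = 1e-3 * np.eye(4)                               # process noise (assumed)
R = 1e-2 * np.eye(2)                               # measurement noise (assumed)

x = np.zeros(4)                                    # state [x, y, vx, vy]
P = np.eye(4)

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with this frame's image-based position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
for t in range(90):                                # 3 s of simulated frames
    true_pos = np.array([0.5 + 0.2 * t * dt, 1.0]) # fish drifting right
    z = true_pos + 0.1 * rng.standard_normal(2)
    x, P = kf_step(x, P, z)
print("filtered position:", x[:2])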
An Enhanced Positioning Algorithm for a Self-Referencing Hand-Held 3D Sensor
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.10
R. Khoury
{"title":"An Enhanced Positioning Algorithm for a Self-Referencing Hand-Held 3D Sensor","authors":"R. Khoury","doi":"10.1109/CRV.2006.10","DOIUrl":"https://doi.org/10.1109/CRV.2006.10","url":null,"abstract":"This study deals with the design of an enhanced selfreferencing algorithm for a typical hand-held 3D sensor. The enhancement we propose takes the form of a new algorithm which forms and matches triangles out of the scatter of observed reference points and the sensor’s list of reference points. Three different techniques to select which triangles to consider in each scatter of points are considered in this paper, and theoretical arguments and experimental results are used to determine the best of the three.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115655542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
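The algorithm forms triangles from the observed reference points and matches them against the sensor's reference points. The brute-force sketch below matches triangles by their sorted side lengths, which are invariant under rigid motion; the matching tolerance is an assumption, and the paper's three triangle-selection strategies are not reproduced.

# Match observed triangles to model triangles by sorted side lengths.
import numpy as np
from itertools import combinations

def side_lengths(p, q, r):
    return np.sort([np.linalg.norm(p - q),
                    np.linalg.norm(q - r),
                    np.linalg.norm(r - p)])

def match_triangles(observed, model, tol=1e-2):
    """Return pairs of point-index triples whose sorted side lengths
    agree within tol."""
    matches = []
    for oi in combinations(range(len(observed)), 3):
        so = side_lengths(*observed[list(oi)])
        for mi in combinations(range(len(model)), 3):
            sm = side_lengths(*model[list(mi)])
            if np.all(np.abs(so - sm) < tol):
                matches.append((oi, mi))
    return matches

rng = np.random.default_rng(3)
model = rng.random((6, 3))                         # sensor's reference points
# Observed scatter: the same points under a rigid motion plus one outlier.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
observed = np.vstack([model @ Rz.T + 0.5, rng.random((1, 3))])
print(len(match_triangles(observed, model)), "triangle matches")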
Underwater 3D Mapping: Experiences and Lessons learned
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.80
A. Hogue, A. German, J. Zacher, M. Jenkin
{"title":"Underwater 3D Mapping: Experiences and Lessons learned","authors":"A. Hogue, A. German, J. Zacher, M. Jenkin","doi":"10.1109/CRV.2006.80","DOIUrl":"https://doi.org/10.1109/CRV.2006.80","url":null,"abstract":"This paper provides details on the development of a tool to aid in 3D coral reef mapping designed to be operated by a single diver and later integrated into an autonomous robot. We discuss issues that influence the deployment and development of underwater sensor technology for 6DOF hand-held and robotic mapping. We describe our current underwater vision-based mapping system, some of our experiences, lessons learned, and discuss how this knowledge is being incorporated into our underwater sensor.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131093814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 23
An Iterative Super-Resolution Reconstruction of Image Sequences using a Bayesian Approach with BTV prior and Affine Block-Based Registration
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.12
V. Patanavijit, S. Jitapunkul
{"title":"An Iterative Super-Resolution Reconstruction of Image Sequences using a Bayesian Approach with BTV prior and Affine Block-Based Registration","authors":"V. Patanavijit, S. Jitapunkul","doi":"10.1109/CRV.2006.12","DOIUrl":"https://doi.org/10.1109/CRV.2006.12","url":null,"abstract":"The traditional SR image registrations are based on translation motion model therefore super-resolution applications can apply only on the sequences that have simple translation motion. In this paper, we present a novel image registration, the fast affine block-based registration, for performing super-resolution using multiple images. We propose super-resolution reconstruction that uses a high accuracy registration algorithm, the fast affine block-based registration [15], and is based on a maximum a posteriori estimation technique by minimizing a cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used as prior knowledge for removing outliers, resulting in sharp edges and forcing interpolation along the edges and not across them. The experimental results show that the proposed reconstruction can apply on real sequence such as Suzie.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123702121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
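The reconstruction minimizes a MAP cost with an L1 data term and a bilateral-TV prior. The gradient-descent sketch below assumes the low-resolution frames are already registered to the high-resolution grid; the 2x2 block-average decimation model, the circular shifts in the BTV term, the step size and the regularization weight are all assumptions, and the affine block-based registration itself is not reproduced.

# Iterative super-resolution sketch: L1 data term + bilateral-TV prior.
import numpy as np

def decimate(x):
    # 2x2 block average, standing in for blur + downsampling (assumed model).
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def decimate_adjoint(e):
    # Adjoint of the block average: spread each LR residual over its block.
    up = np.zeros((e.shape[0] * 2, e.shape[1] * 2))
    for a in (0, 1):
        for b in (0, 1):
            up[a::2, b::2] = 0.25 * e
    return up

def btv_gradient(x, p=2, alpha=0.7):
    # Subgradient of the bilateral-TV prior, using circular shifts as a
    # simplification of the shift operators.
    g = np.zeros_like(x)
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            s = np.sign(x - np.roll(np.roll(x, l, axis=0), m, axis=1))
            g += alpha ** (abs(l) + abs(m)) * (s - np.roll(np.roll(s, -l, axis=0), -m, axis=1))
    return g

def super_resolve(lr_frames, n_iters=50, step=0.01, lam=0.05):
    # Initialise with the pixel-replicated mean of the registered LR frames.
    x = np.kron(np.mean(lr_frames, axis=0), np.ones((2, 2)))
    for _ in range(n_iters):
        grad = np.zeros_like(x)
        for y in lr_frames:
            grad += decimate_adjoint(np.sign(decimate(x) - y))  # L1 data term
        grad += lam * btv_gradient(x)                           # BTV prior
        x -= step * grad
    return x

rng = np.random.default_rng(4)
truth = np.kron(rng.random((8, 8)), np.ones((4, 4)))            # 32x32 "scene"
frames = [decimate(truth) + 0.01 * rng.standard_normal((16, 16)) for _ in range(4)]
print("mean abs error:", np.abs(super_resolve(frames) - truth).mean())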
Object Boundary Detection in Ultrasound Images
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06) Pub Date: 2006-06-07 DOI: 10.1109/CRV.2006.51
Moi Hoon Yap, E. Edirisinghe, H. Bez
{"title":"Object Boundary Detection in Ultrasound Images","authors":"Moi Hoon Yap, E. Edirisinghe, H. Bez","doi":"10.1109/CRV.2006.51","DOIUrl":"https://doi.org/10.1109/CRV.2006.51","url":null,"abstract":"This paper presents a novel approach to boundary detection of regions-of-interest (ROI) in ultrasound images, more specifically applied to ultrasound breast images. In the proposed method, histogram equalization is used to preprocess the ultrasound images followed by a hybrid filtering stage that consists of a combination of a nonlinear diffusion filter and a linear filter. Subsequently the multifractal dimension is used to analyse the visually distinct areas of the ultrasound image. Finally, using different threshold values, region growing segmentation is used to the partition the image. The partition with the highest Radial Gradient Index (RGI) is selected as the lesion. A total of 200 images have been used in the analysis of the presented results. We compare the performance of our algorithm with two well known methods proposed by Kupinski et al. and Joo et al. We show that the proposed method performs better in solving the boundary detection problem in ultrasound images.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132836051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 25
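Candidate partitions are ranked by the Radial Gradient Index. The sketch below computes one common RGI formulation, the mean projection of boundary gradients onto the outward radial direction from the region centroid, normalized by gradient magnitude; the boundary extraction and normalization details are assumptions and may differ from the paper's implementation.

# Radial Gradient Index for a candidate lesion mask.
import numpy as np
from scipy import ndimage

def radial_gradient_index(gray, mask):
    gy, gx = np.gradient(gray.astype(float))
    boundary = mask & ~ndimage.binary_erosion(mask)
    cy, cx = ndimage.center_of_mass(mask)
    ys, xs = np.nonzero(boundary)
    radial = np.stack([ys - cy, xs - cx], axis=1)
    radial /= np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9
    grad = np.stack([gy[ys, xs], gx[ys, xs]], axis=1)
    num = np.sum((grad * radial).sum(axis=1))
    den = np.sum(np.linalg.norm(grad, axis=1)) + 1e-9
    return num / den

# Example: a dark circular "lesion" on a brighter background; the true
# mask should have |RGI| close to 1.
yy, xx = np.mgrid[0:128, 0:128]
lesion = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
img = np.where(lesion, 50.0, 150.0)
print(radial_gradient_index(img, lesion))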