Fourth Canadian Conference on Computer and Robot Vision (CRV '07): Latest Publications

Corridor Navigation and Obstacle Avoidance using Visual Potential for Mobile Robot
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.21
N. Ohnishi, A. Imiya
{"title":"Corridor Navigation and Obstacle Avoidance using Visual Potential for Mobile Robot","authors":"N. Ohnishi, A. Imiya","doi":"10.1109/CRV.2007.21","DOIUrl":"https://doi.org/10.1109/CRV.2007.21","url":null,"abstract":"In this paper, we develop an algorithm for corridor navigation and obstacle avoidance using visual potential for visual navigation by an autonomous mobile robot. The robot is equipped with a camera system which dynamically captures the environment. The visual potential is computed from an image sequence and optical flow computed from successive images captured by the camera mounted on the robot. Our robot selects a local pathway using the visual potential observed through its vision system. Our algorithm enables mobile robots to avoid obstacles without any knowledge of a robot workspace. We demonstrate experimental results using image sequences observed with a moving camera in a simulated environment and a real environment. Our algorithm is robust against the fluctuation of displacement caused by mechanical error of the mobile robot, and the fluctuation of planar-region detection caused by a numerical error in the computation of optical flow.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117093315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
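The navigation scheme above couples optical flow with a potential field over the image to choose a local heading. The sketch below is a loose, hypothetical illustration of that idea (not the authors' formulation): dense Farneback flow is computed with OpenCV, pixels whose flow magnitude departs strongly from the median are treated as obstacle evidence, and the robot steers toward the image columns with the lowest repulsive potential.

```python
# Hypothetical visual-potential heading selector; not the paper's exact algorithm.
import cv2
import numpy as np

def select_heading(prev_gray, curr_gray, n_bins=31):
    # Dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # Pixels whose flow magnitude deviates from the median are treated as
    # obstacle evidence (a crude stand-in for planar-region detection).
    obstacle = np.abs(mag - np.median(mag)) > 2.0 * np.std(mag)

    # Repulsive potential over image columns: columns with many obstacle
    # pixels are penalised, so the robot steers toward free space.
    h, w = obstacle.shape
    cols = np.array_split(np.arange(w), n_bins)
    potential = np.array([obstacle[:, c].mean() for c in cols])

    # Heading: centre of the lowest-potential bin, mapped to [-1, 1]
    # (negative = steer left, positive = steer right).
    best = int(np.argmin(potential))
    return 2.0 * cols[best].mean() / w - 1.0

# Example with synthetic frames:
prev = np.random.randint(0, 255, (240, 320), np.uint8)
curr = np.roll(prev, 2, axis=1)
print("steering command:", select_heading(prev, curr))
```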
Dense Stereo Range Sensing with Marching Pseudo-Random Patterns
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.22
D. Desjardins, P. Payeur
{"title":"Dense Stereo Range Sensing with Marching Pseudo-Random Patterns","authors":"D. Desjardins, P. Payeur","doi":"10.1109/CRV.2007.22","DOIUrl":"https://doi.org/10.1109/CRV.2007.22","url":null,"abstract":"As an extension to classical structured lighting techniques, the use of bi-dimensional pseudo-random color codes is explored to perform range sensing with variable density from a stereo calibrated rig and a projector. Pseudo-random codes are used to create artificial textures on a scene which are extracted and grouped in a confidence map to ensure reliable feature matching between pairs of images taken from two cameras. Depth estimation is performed on corresponding points with progressive refinement as the pseudo-random pattern projection is marched over the scene to increase the density of matched features, and achieve dense 3D reconstruction. The potential of bi-dimensional pseudo-random color patterns for structured lighting is demonstrated in terms of patterns computation, ease of extraction, matching confidence level, as well as density of depth estimation for 3D reconstruction.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115956677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
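The decodability of such a projected code rests on every local window of the pattern being unique, so that a detected window identifies its own position. The following toy sketch (grid size, window size, and colour count are arbitrary assumptions, not the authors' pattern) draws random colour indices and retries until all 3x3 windows are distinct:

```python
# Illustrative pseudo-random 2D code with unique 3x3 windows (assumed sizes/colours).
import numpy as np

def make_pattern(rows=20, cols=20, n_colours=4, win=3, seed=0):
    rng = np.random.default_rng(seed)
    while True:
        grid = rng.integers(0, n_colours, size=(rows, cols))
        seen = set()
        ok = True
        for r in range(rows - win + 1):
            for c in range(cols - win + 1):
                key = grid[r:r + win, c:c + win].tobytes()
                if key in seen:          # window collision: pattern not decodable
                    ok = False
                    break
                seen.add(key)
            if not ok:
                break
        if ok:
            return grid                  # every window now identifies its location

pattern = make_pattern()
print(pattern.shape, "pattern with unique 3x3 windows")
```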
Computer Assisted Detection of Polycystic Ovary Morphology in Ultrasound Images
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.18
Maryruth J. Lawrence, M. Eramian, R. Pierson, E. Neufeld
{"title":"Computer Assisted Detection of Polycystic Ovary Morphology in Ultrasound Images","authors":"Maryruth J. Lawrence, M. Eramian, R. Pierson, E. Neufeld","doi":"10.1109/CRV.2007.18","DOIUrl":"https://doi.org/10.1109/CRV.2007.18","url":null,"abstract":"Polycystic ovary syndrome (PCOS) is an endocrine abnormality with multiple diagnostic criteria due to its heterogenic manifestations. One of the diagnostic criteria includes analysis of ultrasound images of ovaries for the detection of number, size, and distribution of follicles within the ovary. This involves manual tracing and counting of follicles on the ultrasound images to determine the presence of a polycystic ovary (PCO). We describe a novel method that automates PCO detection. Our algorithm involves segmentation of follicles from ultrasound images, quantifying the attributes of the automatically segmented follicles using stereology, storing follicle attributes as feature vectors, and finally classification of the feature vector into two categories. The classification categories are: PCO present and PCO absent. An automatic PCO diagnostic tool would save considerable time spent on manual tracing of follicles and measuring the length and width of every follicle. Our procedure was able to achieve classification accuracy of 92.86% using a linear discriminant classifier. Our classifier will improve the rapidity and accuracy of PCOS diagnosis, reducing the risk of the severe complications that can arise from delayed diagnosis.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126774305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
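The final classification step described in the abstract, a follicle feature vector assigned to "PCO present" or "PCO absent" by a linear discriminant classifier, maps directly onto scikit-learn. A minimal sketch with hypothetical features and synthetic data (the real system uses stereology-derived attributes):

```python
# Minimal LDA classification sketch; feature names and data are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Each row: [follicle count, mean diameter (mm), diameter std, peripheral fraction]
X = np.array([
    [14, 4.2, 1.1, 0.85], [16, 3.8, 0.9, 0.90], [13, 4.5, 1.3, 0.80],  # PCO present
    [ 5, 7.1, 2.5, 0.40], [ 6, 6.3, 2.2, 0.35], [ 4, 8.0, 3.0, 0.30],  # PCO absent
    [15, 4.0, 1.0, 0.88], [ 5, 6.8, 2.4, 0.42],
])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = PCO present, 0 = PCO absent

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=4)   # stratified cross-validation on toy data
print("cross-validated accuracy:", scores.mean())
```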
Figure-ground segmentation using a hierarchical conditional random field
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.32
Jordan Reynolds, Kevin P. Murphy
{"title":"Figure-ground segmentation using a hierarchical conditional random field","authors":"Jordan Reynolds, Kevin P. Murphy","doi":"10.1109/CRV.2007.32","DOIUrl":"https://doi.org/10.1109/CRV.2007.32","url":null,"abstract":"We propose an approach to the problem of detecting and segmenting generic object classes that combines three \"off the shelf\" components in a novel way. The components are a generic image segmenter that returns a set of \"super pixels\" at different scales; a generic classifier that can determine if an image region (such as one or more super pixels) contains (part of) the foreground object or not; and a generic belief propagation (BP) procedure for tree-structured graphical models. Our system combines the regions together into a hierarchical, tree-structured conditional random field, applies the classifier to each node (region), and fuses all the information together using belief propagation. Since our classifiers only rely on color and texture, they can handle deformable (non-rigid) objects such as animals, even under severe occlusion and rotation. We demonstrate good results for detecting and segmenting cows, cats and cars on the very challenging Pascal VOC dataset.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127985477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
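Belief propagation on a tree-structured model, used above to fuse per-region classifier scores across segmentation scales, is exact and needs only an upward and a downward pass. A self-contained toy sketch with binary figure/ground labels and made-up potentials on a two-level region tree:

```python
# Sum-product belief propagation on a small region tree (labels: 0 = ground, 1 = figure).
# Potentials are made up; a real system would plug in per-region classifier outputs.
import numpy as np

# Tree: node 0 is a coarse super pixel; nodes 1-3 are its finer sub-regions.
children = {0: [1, 2, 3], 1: [], 2: [], 3: []}
# Unary potentials: exp(classifier score) for [ground, figure] at each node.
unary = {0: np.array([0.6, 0.4]),
         1: np.array([0.2, 0.8]),
         2: np.array([0.7, 0.3]),
         3: np.array([0.3, 0.7])}
# Pairwise potential favouring parent/child label agreement.
pairwise = np.array([[0.9, 0.1],
                     [0.1, 0.9]])

def upward(node):
    """Collect messages from the subtree below `node` into a belief vector."""
    belief = unary[node].copy()
    for c in children[node]:
        belief *= pairwise @ upward(c)   # marginalise the child label out
    return belief

def root_marginal(root=0):
    # The upward pass alone gives the root marginal; a downward pass would
    # yield the other node marginals and is omitted here for brevity.
    b = upward(root)
    return b / b.sum()

print("P(root region is figure) =", root_marginal()[1])
```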
Can Lucas-Kanade be used to estimate motion parallax in 3D cluttered scenes?
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.15
V. Chapdelaine-Couture, M. Langer
{"title":"Can Lucas-Kanade be used to estimate motion parallax in 3D cluttered scenes?","authors":"V. Chapdelaine-Couture, M. Langer","doi":"10.1109/CRV.2007.15","DOIUrl":"https://doi.org/10.1109/CRV.2007.15","url":null,"abstract":"When an observer moves in a 3D static scene, the motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. By computing the difference between nearby motion field vectors, the observer can estimate the direction of local motion parallax and in turn the direction of heading. It has recently been argued that, in 3D cluttered scenes such as a forest, computing local image motion using classical optical flow methods is problematic since these classical methods have problems at depth discontinuities. Hence, estimating local motion parallax from optical flow should be problematic as well. In this paper we evaluate this claim. We use the classical Lucas-Kanade method to estimate optical flow and the Rieger-Lawton method to estimate the direction of motion parallax from the estimated flow. We compare the motion parallax estimates to those of the frequency based method of Mann-Langer. We find that if the Lucas-Kanade estimates are sufficiently pruned, using both an eigenvalue condition and a mean absolute error condition, then the Lucas- Kanade/Rieger-Lawton method can perform as well as or better than the frequency-based method.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114936017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
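The pruning the abstract refers to can be reproduced with standard OpenCV calls: track features with pyramidal Lucas-Kanade, then discard estimates whose structure-tensor minimum eigenvalue is small (ill-conditioned window) or whose per-feature matching error is large. A minimal sketch; the thresholds are arbitrary placeholders, not values from the paper:

```python
# Lucas-Kanade flow with eigenvalue and error pruning; thresholds are example values.
import cv2
import numpy as np

def pruned_lk(prev_gray, curr_gray, eig_thresh=1e-3, err_thresh=10.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)

    # Minimum eigenvalue of the structure tensor around each feature:
    # small values mean the LK linear system is ill-conditioned there.
    min_eig = cv2.cornerMinEigenVal(prev_gray, blockSize=7)
    keep = []
    for i, (p, ok, e) in enumerate(zip(pts.reshape(-1, 2), status.ravel(), err.ravel())):
        x, y = int(p[0]), int(p[1])
        if ok and min_eig[y, x] > eig_thresh and e < err_thresh:
            keep.append(i)
    return pts[keep], nxt[keep]

# Example on synthetic textured frames shifted by one pixel:
prev = cv2.GaussianBlur(np.random.randint(0, 255, (240, 320), np.uint8), (5, 5), 0)
curr = np.roll(prev, 1, axis=1)
p0, p1 = pruned_lk(prev, curr)
print(len(p0), "flow vectors survive pruning")
```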
Terrain Modelling for Planetary Exploration
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.63
Ioannis M. Rekleitis, Jean-Luc Bedwani, S. Gemme, T. Lamarche, E. Dupuis
{"title":"Terrain Modelling for Planetary Exploration","authors":"Ioannis M. Rekleitis, Jean-Luc Bedwani, S. Gemme, T. Lamarche, E. Dupuis","doi":"10.1109/CRV.2007.63","DOIUrl":"https://doi.org/10.1109/CRV.2007.63","url":null,"abstract":"The success of NASA's Mars Exploration Rovers has demonstrated the important benefits that mobility adds to planetary exploration. Very soon, mission requirements will impose that planetary exploration rovers drive autonomously in unknown terrain. This will require an evolution of the methods and technologies currently used. This paper presents our approach to 3D terrain reconstruction from large sparse range data sets, and the data reduction achieved through decimation. The outdoor experimental results demonstrate the effectiveness of the reconstructed terrain model for different types of terrain. We also present a first attempt to classify the terrain based on the scans properties.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129010762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
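Data reduction of a large range scan can be illustrated with a simple voxel-grid decimation that keeps one averaged point per occupied cell. This is a generic stand-in for the decimation discussed above, not the authors' irregular-mesh method:

```python
# Generic voxel-grid decimation of a terrain point cloud (not the paper's mesh decimation).
import numpy as np

def voxel_decimate(points, cell=0.25):
    """Average the points falling in each cell of a 3D grid with `cell`-metre spacing."""
    keys = np.floor(points / cell).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# 100k simulated range returns over a 20 m x 20 m patch with gentle relief.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 20, (100_000, 2))
z = 0.5 * np.sin(xy[:, 0]) + 0.05 * rng.standard_normal(100_000)
cloud = np.column_stack([xy, z])
reduced = voxel_decimate(cloud)
print(f"{len(cloud)} points reduced to {len(reduced)}")
```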
Local Graph Matching for Object Category Recognition
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.44
E. F. Ersi, J. Zelek
{"title":"Local Graph Matching for Object Category Recognition","authors":"E. F. Ersi, J. Zelek","doi":"10.1109/CRV.2007.44","DOIUrl":"https://doi.org/10.1109/CRV.2007.44","url":null,"abstract":"A novel model for object category recognition in real-world scenes is proposed. Images in our model are represented by a set of triangular labelled graphs, each containing information on the appearance and geometry of a 3-tuple of distinctive image regions. In the learning stage, our model automatically learns a set of codebooks of model graphs for each object category, where each codebook contains information about which local structures may appear on which parts of the object instances of the target category. A two-stage method for optimal matching is developed, where in the first stage a Bayesian classifier based on ICA factorization is used efficiently to select the matched codebook, and in the second stage a nearest neighbourhood classifier is used to assign the test graph to one of the learned model graphs of the selected codebook. Each matched test graph casts votes for possible identity and poses of an object instance, and then a Hough transformation technique is used in the pose space to identify and localize the object instances. An extensive evaluation on several large datasets validates the robustness of our proposed model in object category recognition and localization in the presence of scale and rotation changes.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117308905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
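A rough way to picture the triangular labelled graphs is to form Delaunay triangles over detected regions and describe each triangle by its side lengths (geometry) plus the three region descriptors (appearance), then match triangles by nearest neighbour. The sketch below is purely illustrative, with synthetic points and features; it omits the codebook learning, ICA-based classifier, and Hough voting stages:

```python
# Illustrative matching of triangular local structures; points and features are synthetic.
import numpy as np
from scipy.spatial import Delaunay

def triangle_descriptors(points, feats):
    """One descriptor per Delaunay triangle: sorted side lengths (geometry)
    concatenated with the three region feature vectors (appearance)."""
    tri = Delaunay(points)
    descs = []
    for a, b, c in tri.simplices:
        sides = sorted([np.linalg.norm(points[a] - points[b]),
                        np.linalg.norm(points[b] - points[c]),
                        np.linalg.norm(points[c] - points[a])])
        descs.append(np.concatenate([sides, feats[a], feats[b], feats[c]]))
    return np.array(descs)

rng = np.random.default_rng(0)
model_pts, model_feats = rng.uniform(0, 100, (30, 2)), rng.normal(size=(30, 8))
test_pts, test_feats = rng.uniform(0, 100, (30, 2)), rng.normal(size=(30, 8))

model = triangle_descriptors(model_pts, model_feats)
test = triangle_descriptors(test_pts, test_feats)
# Nearest-neighbour assignment of each test triangle to a model triangle.
dists = np.linalg.norm(test[:, None, :] - model[None, :, :], axis=2)
print("matched model triangle for the first test triangles:", dists.argmin(axis=1)[:5])
```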
3D Tree-Structured Object Tracking for Autonomous Ground Vehicles
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.1
Changsoo Jeong, A. C. Parker
{"title":"3D Tree-Structured Object Tracking for Autonomous Ground Vehicles","authors":"Changsoo Jeong, A. C. Parker","doi":"10.1109/CRV.2007.1","DOIUrl":"https://doi.org/10.1109/CRV.2007.1","url":null,"abstract":"Safe and effective vision analysis is a key capability for autonomous ground vehicle (AGV) guidance systems. The complexity of natural settings requires the use of a robust image understanding technique. The proposed novel 3D tree-structured object tracking approach is implemented by tracking 2D objects in successive video frames using a wavelet-domain tree structure. It is robust and reliable due to its powerful data structure and also adaptable for moving and stationary object tracking as well as the tracking problem when the vehicle itself is in motion. This approach consists of wavelet decomposition, spatial object detection and temporal object tracking. The results show this approach can produce precise detection and tracking results.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117179625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
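The first stage of the pipeline, wavelet decomposition, is readily sketched with PyWavelets; here candidate object blocks are simply the coarse-scale detail coefficients with unusually high energy, a crude stand-in for the paper's tree-structured detection and tracking:

```python
# Wavelet-decomposition sketch using PyWavelets; detection is reduced to thresholding
# coarse-scale detail energy, not the paper's full tree-structured tracker.
import numpy as np
import pywt

frame = np.random.rand(128, 128)                 # stand-in for a video frame
coeffs = pywt.wavedec2(frame, 'haar', level=3)   # approximation + 3 levels of details

# Flag coarse-level detail coefficients with large magnitude as candidate
# object regions; finer levels would refine those candidates (tree structure).
cH, cV, cD = coeffs[1]                           # coarsest detail sub-bands
energy = cH**2 + cV**2 + cD**2
candidates = np.argwhere(energy > energy.mean() + 2 * energy.std())
print(f"{len(candidates)} candidate object blocks at the coarsest scale")
```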
Efficient camera motion and 3D recovery using an inertial sensor
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.23
M. Labrie, P. Hébert
{"title":"Efficient camera motion and 3D recovery using an inertial sensor","authors":"M. Labrie, P. Hébert","doi":"10.1109/CRV.2007.23","DOIUrl":"https://doi.org/10.1109/CRV.2007.23","url":null,"abstract":"This paper presents a system for 3D reconstruction using a camera combined with an inertial sensor. The system mainly exploits the orientation obtained from the inertial sensor in order to accelerate and improve the matching process between wide baseline images. The orientation further contributes to incremental 3D reconstruction of a set of feature points from linear equation systems. The processing can be performed online while using consecutive groups of three images overlapping each other. Classic or incremental bundle adjustment is applied to improve the quality of the model. Test validation has been performed on object and camera centric sequences.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121621473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
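Once the inertial sensor supplies the camera orientations, recovering a feature's 3D position reduces to a linear least-squares problem. A minimal two-view linear (DLT) triangulation sketch with known rotations; the intrinsics and geometry are made-up example values:

```python
# Two-view linear triangulation with known (IMU-supplied) rotations; numbers are made up.
import numpy as np

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])                       # assumed camera intrinsics

def projection(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """Solve A X = 0 for the homogeneous 3D point seen at pixels x1 and x2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Camera 1 at the origin; camera 2 translated 0.5 m to the right, same orientation.
P1 = projection(np.eye(3), np.zeros(3))
P2 = projection(np.eye(3), np.array([-0.5, 0., 0.]))
point = np.array([0.2, 0.1, 4.0])                  # ground-truth 3D point
x1 = P1 @ np.append(point, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(point, 1); x2 = x2[:2] / x2[2]
print("recovered point:", triangulate(P1, P2, x1, x2))
```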
Energy Efficient Robot Rendezvous
Fourth Canadian Conference on Computer and Robot Vision (CRV '07) Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.27
Pawel Zebrowski, Y. Litus, R. Vaughan
{"title":"Energy Efficient Robot Rendezvous","authors":"Pawel Zebrowski, Y. Litus, R. Vaughan","doi":"10.1109/CRV.2007.27","DOIUrl":"https://doi.org/10.1109/CRV.2007.27","url":null,"abstract":"We examine the problem of finding a single meeting location for a group of heterogeneous autonomous mobile robots, such that the total system cost of traveling to the rendezvous is minimized. We propose two algorithms that solve this problem. The first method computes an approximate globally optimal meeting point using numerical simplex minimization. The second method is a computationally cheap heuristic that computes a local heading for each robot: by iterating this method, all robots arrive at the globally optimal location. We compare the performance of both methods to a naive algorithm (center of mass). Finally, we show how to extend the methods with inter-robot communication to adapt to new environmental information.","PeriodicalId":304254,"journal":{"name":"Fourth Canadian Conference on Computer and Robot Vision (CRV '07)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122728226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
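Both strategies from the abstract are easy to prototype: the near-optimal meeting point via Nelder-Mead simplex minimisation of the total weighted travel cost, and the naive weighted centre of mass as a baseline. Robot positions and per-metre energy costs below are made-up example values:

```python
# Sketch of the two strategies described above: simplex minimisation of the total
# travel cost versus a naive (weighted) centre of mass. Example data only.
import numpy as np
from scipy.optimize import minimize

positions = np.array([[0., 0.], [10., 2.], [4., 8.]])   # three heterogeneous robots
cost_per_m = np.array([1.0, 3.0, 0.5])                  # energy cost per metre travelled

def total_cost(p):
    return np.sum(cost_per_m * np.linalg.norm(positions - p, axis=1))

# Approximate globally optimal rendezvous point via Nelder-Mead simplex minimisation.
res = minimize(total_cost, positions.mean(axis=0), method='Nelder-Mead')
optimal = res.x

# Naive baseline: centre of mass weighted by travel cost.
com = (cost_per_m[:, None] * positions).sum(axis=0) / cost_per_m.sum()

print("simplex optimum :", optimal, "cost", total_cost(optimal))
print("centre of mass  :", com, "cost", total_cost(com))
```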