2014 Canadian Conference on Computer and Robot Vision: Latest Publications

3D Scan Registration Using Curvelet Features
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.18
Siddhant Ahuja, Steven L. Waslander
Abstract: Scan registration methods can often suffer from convergence and accuracy issues when the scan points are sparse or the environment violates the assumptions the methods are founded on. We propose an alternative approach to 3D scan registration using the curvelet transform, which performs multi-resolution geometric analysis to obtain a set of coefficients indexed by scale (coarsest to finest), angle and spatial position. Features are detected in the curvelet domain to take advantage of the directional selectivity of the transform. A descriptor is computed for each feature by calculating the 3D spatial histogram of the image gradients, and nearest-neighbour matching is used to calculate the feature correspondences. Correspondence rejection using Random Sample Consensus identifies inliers, and a locally optimal Singular Value Decomposition-based estimation of the rigid-body transformation aligns the laser scans given the re-projected correspondences in the metric space. Experimental results on a publicly available dataset of a planetary analogue facility demonstrate improved performance over existing methods.
Citations: 6
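
The final alignment step named in the abstract (SVD-based rigid-body estimation on RANSAC inliers) is standard and can be sketched independently of the curvelet feature pipeline. The snippet below is an illustrative sketch only, not the authors' code; it assumes the correspondences are already given as two N x 3 NumPy arrays.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def ransac_rigid(src, dst, iters=200, tol=0.05, seed=0):
    """RANSAC over minimal 3-point samples, then a refit on the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_transform_svd(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.sum() < 3:                     # degenerate fallback: use all points
        best_inliers[:] = True
    return rigid_transform_svd(src[best_inliers], dst[best_inliers]), best_inliers

# Toy check on synthetic correspondences (assumed data, standing in for matched curvelet features).
rng = np.random.default_rng(1)
src = rng.normal(size=(100, 3))
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
R_true = Q if np.linalg.det(Q) > 0 else -Q
t_true = np.array([0.2, -0.1, 0.3])
dst = src @ R_true.T + t_true + 0.001 * rng.normal(size=src.shape)
(R_est, t_est), _ = ransac_rigid(src, dst)
print("rotation error:", np.linalg.norm(R_est - R_true))
```
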
Towards Estimating Bias in Stereo Visual Odometry
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.10
Sara Farboud-Sheshdeh, T. Barfoot, R. Kwong
Abstract: Stereo visual odometry (VO) is a common technique for estimating a camera's motion: features are tracked across frames and the pose change is subsequently inferred. This position estimation method can play a particularly important role in environments in which the global positioning system (GPS) is not available (e.g., Mars rovers). Recently, some authors have noticed a bias in VO position estimates that grows with distance travelled; this can cause the resulting position estimate to become highly inaccurate. The goals of this paper are (i) to investigate the nature of this bias in VO, (ii) to propose methods of estimating it, and (iii) to provide a correction that can potentially be used online. We identify two effects at play in stereo VO bias: first, the inherent bias in the maximum-likelihood estimation framework, and second, the disparity threshold used to discard far-away and erroneous stereo observations. In order to estimate the bias, we investigate three methods: Monte Carlo sampling, the sigma-point method (with modification), and an existing analytical method in the literature. Based on simulations, we show that our new sigma-point method achieves similar accuracy to Monte Carlo, but at a fraction of the computational cost. Finally, we develop a bias correction algorithm by adapting the idea of the bootstrap in statistics, and demonstrate that it is capable of removing approximately 95% of the bias in VO problems without incorporating other sensors into the setup.
Citations: 7
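
The comparison in the abstract between Monte Carlo sampling and a sigma-point transform for bias estimation can be illustrated on a toy nonlinear function: both estimate E[f(x)] - f(E[x]) under Gaussian input noise. The sketch below is generic and uses an assumed quadratic/sinusoidal test function, not the paper's stereo-geometry model.

```python
import numpy as np

def sigma_points(mean, cov, kappa=2.0):
    """Standard (2n + 1) sigma points and weights for the unscented transform."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def bias_sigma_point(f, mean, cov):
    """Bias estimate E[f(x)] - f(mean) from the sigma-point approximation of E[f(x)]."""
    pts, w = sigma_points(mean, cov)
    return sum(wi * f(p) for wi, p in zip(w, pts)) - f(mean)

def bias_monte_carlo(f, mean, cov, n_samples=20_000, seed=0):
    """Same bias estimate from Monte Carlo sampling (slower, used as a reference)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    return np.mean([f(s) for s in samples], axis=0) - f(mean)

# Toy nonlinearity standing in for the measurement-to-pose mapping (assumption).
f = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[1])])
mu, P = np.array([1.0, 0.5]), np.diag([0.04, 0.01])
print("sigma-point bias:", bias_sigma_point(f, mu, P))
print("Monte Carlo bias:", bias_monte_carlo(f, mu, P))
```
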
A Proof-of-Concept Demonstration of Visual Teach and Repeat on a Quadrocopter Using an Altitude Sensor and a Monocular Camera
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.40
Andreas Pfrunder, Angela P. Schoellig, T. Barfoot
Abstract: This paper applies an existing vision-based navigation algorithm to a micro aerial vehicle (MAV). The algorithm has previously been used for long-range navigation of ground robots based on on-board 3D vision sensors such as stereo or Kinect cameras. A teach-and-repeat operational strategy enables a robot to autonomously repeat a manually taught route without relying on an external positioning system such as GPS. For MAVs we show that a monocular downward-looking camera combined with an altitude sensor can serve as the 3D vision sensor, replacing other resource-expensive 3D vision solutions. The paper also includes a simple path tracking controller that uses feedback from the visual and inertial sensors to guide the vehicle along a straight and level path. Preliminary experimental results demonstrate reliable, accurate and fully autonomous flight along an 8-m-long (straight and level) route, which was taught with the quadrocopter fixed to a cart. Finally, we present the successful flight of a more complex, 16-m-long route.
Citations: 29
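
A minimal version of the kind of path-tracking feedback described above, steering the vehicle back onto a straight, level taught path using lateral and heading errors, can be sketched as follows. The gains, state layout and proportional control law are illustrative assumptions, not the controller from the paper.

```python
import numpy as np

def track_straight_path(state, k_lat=0.8, k_head=1.5, v_forward=0.3):
    """Proportional corrections toward the taught path (assumed to be the x-axis).

    state = (x, y, yaw): position and heading expressed in the path frame.
    Returns (forward speed command, yaw-rate command).
    """
    _, y, yaw = state
    lateral_error = y                  # signed offset from the taught line
    heading_error = yaw                # deviation from the taught direction
    yaw_rate_cmd = -k_lat * lateral_error - k_head * heading_error
    return v_forward, yaw_rate_cmd

def simulate(steps=200, dt=0.05):
    """Unicycle-model rollout showing both errors decaying toward zero."""
    x, y, yaw = 0.0, 0.5, 0.3          # start offset from the taught path
    for _ in range(steps):
        v, w = track_straight_path((x, y, yaw))
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
    return y, yaw

print("final lateral / heading error:", simulate())
```
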
Multiple Feature Fusion in the Dempster-Shafer Framework for Multi-object Tracking
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.49
Dorra Riahi, Guillaume-Alexandre Bilodeau
Abstract: This paper presents a novel multiple object tracking framework based on multiple visual cues. To build tracks by selecting the best matching score between several detections, a set of probability maps is estimated by a function integrating templates using a sparse representation and color information using locality sensitive histograms. All people detected in two consecutive frames are matched with each other based on similarity scores. This last task is performed by comparing two models (sparse appearance and color models). A score matrix is then obtained for each model. These scores are combined by Dempster-Shafer's combination rule. To obtain an optimal selection of the best candidate, a data association step is performed using a greedy search algorithm. We validated our tracking algorithm on challenging publicly available video sequences and show that we outperform recent state-of-the-art methods.
Citations: 10
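
Dempster-Shafer's combination rule mentioned above, which fuses the appearance and colour matching scores, has a compact generic form: combined masses are products of agreeing masses, renormalised by one minus the total conflict. The sketch below is a textbook illustration of the rule, not the tracker's implementation, and the example mass values are assumptions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: m(C) is proportional to the sum of m1(A)*m2(B) over A ∩ B = C.

    Masses are dicts mapping frozensets (focal elements) to belief mass.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: two cues scoring whether a detection belongs to track 't1' or 't2' (assumed values).
appearance = {frozenset({"t1"}): 0.6, frozenset({"t1", "t2"}): 0.4}
colour     = {frozenset({"t1"}): 0.5, frozenset({"t2"}): 0.3, frozenset({"t1", "t2"}): 0.2}
print(dempster_combine(appearance, colour))
```
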
Grid Seams: A Fast Superpixel Algorithm for Real-Time Applications
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.25
P. Siva, A. Wong
Abstract: Superpixels are a compact and simple representation of images that has been used for many computer vision applications such as object localization, segmentation and depth estimation. While useful as compact representations of images, the time complexity of superpixel algorithms has prevented their use in real-time applications like video processing. Fast superpixel algorithms have been proposed recently, but they lack regular structure or the accuracy required for representing image structure. We present Grid Seams, a novel seam-carving approach to superpixel generation that preserves image structure information while enforcing a global spatial constraint in the form of a grid structure cost. Using a standard dataset, we show that our approach is faster than existing approaches and can achieve accuracies close to those of state-of-the-art superpixel generation algorithms.
Citations: 12
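
The idea of a seam with a grid-structure cost can be illustrated with a small dynamic-programming sketch: a vertical seam that minimises image-gradient energy plus a quadratic penalty for drifting away from a target grid column. This is a simplified, assumed formulation for illustration only, not the Grid Seams algorithm itself.

```python
import numpy as np

def grid_constrained_seam(energy, target_col, grid_weight=0.5):
    """Minimum-cost vertical seam with a penalty for leaving the target grid column.

    energy     : HxW array, e.g. image gradient magnitude.
    target_col : column of the regular grid line this seam should stay near.
    Returns the seam's column index in every row.
    """
    H, W = energy.shape
    cols = np.arange(W)
    cost = energy + grid_weight * (cols - target_col) ** 2  # data term + grid-structure term
    dp = cost.copy()
    back = np.zeros((H, W), dtype=int)
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(0, c - 1), min(W, c + 2)           # seam moves at most 1 column per row
            prev = lo + int(np.argmin(dp[r - 1, lo:hi]))
            back[r, c] = prev
            dp[r, c] = cost[r, c] + dp[r - 1, prev]
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(dp[-1]))
    for r in range(H - 1, 0, -1):
        seam[r - 1] = back[r, seam[r]]
    return seam

# Toy usage on a random "gradient" image (assumed data).
rng = np.random.default_rng(0)
print(grid_constrained_seam(rng.random((8, 10)), target_col=5))
```
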
Projected Barzilai-Borwein Method with Infeasible Iterates for Nonnegative Least-Squares Image Deblurring
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.33
Kathleen Fraser, D. Arnold, G. Dellaire
Abstract: We present a non-monotonic gradient descent algorithm with infeasible iterates for the nonnegatively constrained least-squares deblurring of images. The skewness of the intensity values of the deblurred image is used to establish a criterion for when to enforce the nonnegativity constraints. On several test images the approach is observed to perform comparably to or outperform a non-monotonic gradient descent approach that does not use infeasible iterates, as well as the gradient projected conjugate gradients algorithm. Our approach is distinguished from the latter by lower memory requirements, making it suitable for use with the large, three-dimensional images common in medical imaging.
Citations: 1
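
The projected Barzilai-Borwein iteration underlying the method (a gradient step whose length comes from the BB formula, with nonnegativity enforced by projection) can be sketched for a generic nonnegative least-squares problem, min ||Ax - b||^2 subject to x >= 0. The snippet below is a plain projected-BB sketch under assumed stopping rules; it does not include the paper's skewness-triggered infeasible-iterate scheme.

```python
import numpy as np

def projected_bb_nnls(A, b, iters=200):
    """Projected Barzilai-Borwein iterations for min ||Ax - b||^2 with x >= 0."""
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - b)
    alpha = 1.0                                    # initial step length
    for _ in range(iters):
        x_new = np.maximum(x - alpha * grad, 0.0)  # gradient step + projection onto x >= 0
        grad_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, grad_new - grad
        if s @ y > 1e-12:
            alpha = (s @ s) / (s @ y)              # BB1 step length from the last two iterates
        x, grad = x_new, grad_new
    return x

# Tiny deblurring-like test: recover a nonnegative signal from a blurred version (assumed data).
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(40, 20)))              # stand-in for a blur operator
x_true = np.maximum(rng.normal(size=20), 0.0)
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = projected_bb_nnls(A, b)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```
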
Outdoor Ice Accretion Estimation of Wind Turbine Blades Using Computer Vision
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.41
M. Akhloufi, Nassim Benmesbah
Abstract: In this paper, we present a new computer-vision-based methodology to address the problem of remote ice detection and measurement on wind turbines operating in cold climates. Icing has a significant impact, affecting productivity and causing premature wear, malfunctions and damage that are hard to track. Manufacturers and operators face unpredictable losses that can reach millions of dollars. Algorithms were developed and evaluated on images of wind turbines acquired by digital camera in outdoor conditions. Experiments show interesting and promising results for future work.
Citations: 13
Drums: A Middleware-Aware Distributed Robot Monitoring System
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1145/2843966.2843974
Valiallah Monajjemi, Jens Wawerla, R. Vaughan
Abstract: We introduce Drums, a new tool for monitoring and debugging distributed robot systems, and a complement to robot middleware systems. Drums provides online time-series monitoring of the underlying resources that are partially abstracted away by middleware like ROS. Interfacing with the middleware, Drums provides de-abstraction and de-multiplexing of middleware services to reveal the system-level interactions of your controller code, the middleware, the OS and the robot(s)' environment. We show worked examples of Drums' utility for debugging realistic problems, and propose it as a tool for quality-of-service monitoring and introspection for robust autonomous systems.
Citations: 12
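
A minimal flavour of the per-process resource time series that such a monitor collects can be sketched with psutil; this is a generic illustration under assumed sampling settings, not Drums itself or its middleware integration.

```python
import os
import time
import psutil  # assumes psutil is installed

def sample_process(pid, n_samples=5, period=1.0):
    """Collect a small time series of CPU and memory usage for one process."""
    proc = psutil.Process(pid)
    proc.cpu_percent(interval=None)          # prime the CPU counter
    series = []
    for _ in range(n_samples):
        time.sleep(period)
        series.append({
            "t": time.time(),
            "cpu_percent": proc.cpu_percent(interval=None),
            "rss_bytes": proc.memory_info().rss,
        })
    return series

if __name__ == "__main__":
    # Monitor this script's own process as a stand-in for a robot controller node.
    for sample in sample_process(os.getpid(), n_samples=3, period=0.5):
        print(sample)
```
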
Identification of Morphologically Similar Seeds Using Multi-kernel Learning
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.27
Xin Yi, M. Eramian, Ruojing Wang, E. Neufeld
Abstract: The use of digital image analysis for the identification of seeds has not been recognized as a validated method. Image analysis for seed identification has been studied previously, and good recognition rates have been achieved. However, the data sets used in these experiments either contain very few groups of non-verified specimens or offer little representation of intra-species variation. This study considered a data set containing seed specimens that were verified to represent the species and a typical population variation, as well as look-alike species that share the same morphological appearance, in particular seeds from species in the same genus, which can be particularly difficult for even trained professionals to distinguish visually. With representative specimens, the image features and machine learning algorithms described herein can achieve a high recognition rate (>97%). Three types of features (colour, shape, and texture) were extracted from the seed images, and a multi-kernel support vector machine was used as the classifier. We compared our features to the previous state-of-the-art features, and the results showed that the features we selected performed better on our data set.
Citations: 3
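
The multi-kernel classifier described above can be sketched with scikit-learn by building a precomputed kernel as a weighted sum of per-feature-group RBF kernels (colour, shape, texture). The weights, kernel widths and random features below are illustrative assumptions, not the paper's learned combination.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(groups_a, groups_b, weights, gammas):
    """Weighted sum of RBF kernels, one per feature group (colour, shape, texture)."""
    return sum(w * rbf_kernel(a, b, gamma=g)
               for (a, b), w, g in zip(zip(groups_a, groups_b), weights, gammas))

# Toy data: 60 "seeds", 2 species, three feature groups with assumed dimensions.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 30)
colour  = rng.normal(y[:, None], 1.0, size=(60, 8))
shape   = rng.normal(y[:, None], 1.0, size=(60, 5))
texture = rng.normal(y[:, None], 1.0, size=(60, 12))

train, test = np.arange(0, 60, 2), np.arange(1, 60, 2)
groups_tr = [colour[train], shape[train], texture[train]]
groups_te = [colour[test], shape[test], texture[test]]
weights, gammas = [0.4, 0.3, 0.3], [0.1, 0.2, 0.05]        # assumed; could be tuned or learned

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(combined_kernel(groups_tr, groups_tr, weights, gammas), y[train])
pred = clf.predict(combined_kernel(groups_te, groups_tr, weights, gammas))
print("toy accuracy:", (pred == y[test]).mean())
```
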
Construction of a Mean Surface for the Variability Study of the Cornea
2014 Canadian Conference on Computer and Robot Vision | Pub Date: 2014-05-06 | DOI: 10.1109/CRV.2014.51
A. Polette, E. Auvinet, J. Mari, I. Brunette, J. Meunier
Abstract: In this study, we present an algorithm to build a mean surface of the human cornea for the study of variability within a population. Due to the smoothness of the corneal surface, there is no anatomical anchor. The main challenge is to match several surfaces from different subjects to build the mean cornea. The key idea is to use a registration step based on a global factor: minimization of the volume between two surfaces. We then compute the surface disparity after registration. An iterative algorithm minimizes this disparity to determine the best possible matching. The algorithm re-samples the registered surfaces on a common grid to compute the mean surface. Finally, we compute a disparity map and a mean disparity value after registration to estimate the registration accuracy and to compare our method to the existing one.
Citations: 5
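
The final steps described above (re-sampling registered surfaces onto a common grid, averaging them into a mean surface, and computing a disparity map) are straightforward to sketch with SciPy. The grid spacing and synthetic surfaces below are assumptions, and the volume-minimising registration step itself is omitted.

```python
import numpy as np
from scipy.interpolate import griddata

def resample_to_grid(points_xyz, grid_x, grid_y):
    """Interpolate a scattered surface z(x, y) onto a common regular grid."""
    return griddata(points_xyz[:, :2], points_xyz[:, 2],
                    (grid_x, grid_y), method="linear")

# Two synthetic, already-registered "corneal" surfaces (assumed shapes).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(2000, 2))
surfaces = [np.column_stack([xy, 0.5 * (xy ** 2).sum(axis=1) + 0.01 * k])
            for k in range(2)]

gx, gy = np.meshgrid(np.linspace(-0.8, 0.8, 50), np.linspace(-0.8, 0.8, 50))
resampled = np.stack([resample_to_grid(s, gx, gy) for s in surfaces])

mean_surface = np.nanmean(resampled, axis=0)            # mean cornea on the common grid
disparity_map = np.abs(resampled[0] - resampled[1])     # per-point disparity after registration
print("mean disparity:", np.nanmean(disparity_map))
```
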