2014 Canadian Conference on Computer and Robot Vision: Latest Publications

Building Better Formlet Codes for Planar Shape
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.19
A. Yakubovich, J. Elder
{"title":"Building Better Formlet Codes for Planar Shape","authors":"A. Yakubovich, J. Elder","doi":"10.1109/CRV.2014.19","DOIUrl":"https://doi.org/10.1109/CRV.2014.19","url":null,"abstract":"The GRID/formlet representation of planar shape has a number of nice properties [4], [10], [3], but there are also limitations: it is slow to converge for shapes with elongated parts, and it can be sensitive to parameterization as well as grossly ill-conditioned. Here we describe a number of innovations on the GRID/formlet model that address these problems: 1) By generalizing the formlet basis to include oriented deformations we achieve faster convergence for elongated parts. 2) By introducing a modest regularizing term that penalizes the total energy of each deformation we limit redundancy in formlet parameters and improve identifiability of the model. 3) By applying a recent contour remapping method [9] we eliminate problems due to drift of the model parameterization during matching pursuit. These innovations are shown to both speed convergence and to improve performance on a shape completion task.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115971486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
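The abstract does not give the formlet equations, but the flavor of formlet-based matching pursuit with an energy regularizer can be sketched. The Python below is illustrative only: it uses a simple isotropic Gaussian-windowed radial formlet and a hypothetical penalty weight `lam`; the paper's oriented deformations and exact regularizer are not reproduced here.

```python
import numpy as np

def apply_formlet(points, center, alpha, sigma):
    """Radially deform 2D contour points with a Gaussian-windowed formlet.

    Each point is pushed along the ray from `center`, attenuated by a
    Gaussian of scale `sigma` (isotropic sketch; the paper generalizes
    this basis to oriented deformations).
    """
    d = points - center                              # vectors from formlet center
    r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
    gain = alpha * np.exp(-r**2 / (2.0 * sigma**2))
    return points + gain * d / r                     # radial displacement

def greedy_formlet_step(target, model, candidates, lam=0.1):
    """One matching-pursuit step: choose the candidate formlet that best
    reduces the residual, with a penalty lam * alpha**2 on deformation
    energy to discourage redundant, poorly identified parameters."""
    best, best_cost = None, np.inf
    for center, alpha, sigma in candidates:
        deformed = apply_formlet(model, np.asarray(center, float), alpha, sigma)
        cost = np.mean((deformed - target) ** 2) + lam * alpha**2
        if cost < best_cost:
            best, best_cost = (center, alpha, sigma), cost
    return best, best_cost
```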
Adaptive Robotic Contour Following from Low Accuracy RGB-D Surface Profiling and Visual Servoing
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.15
D. Nakhaeinia, P. Payeur, R. Laganière
{"title":"Adaptive Robotic Contour Following from Low Accuracy RGB-D Surface Profiling and Visual Servoing","authors":"D. Nakhaeinia, P. Payeur, R. Laganière","doi":"10.1109/CRV.2014.15","DOIUrl":"https://doi.org/10.1109/CRV.2014.15","url":null,"abstract":"This paper introduces an adaptive contour following method for robot manipulators that originally combines low accuracy RGB-D sensing with eye-in-hand visual servoing. The main objective is to allow for the detection and following of freely shaped 3D object contours under visual guidance that is initially provided by a fixed Kinect sensor and refined by a single eye-in-hand camera. A path planning algorithm is developed that constrains the end effector to maintain close proximity to the surface of the object while following its contour. To achieve this goal, a RGB-D sensing is used to rapidly acquire information about the 3D location and profile of an object. However, because of the low resolution and noisy information provided by such sensors, accurate contour following is achieved with an extra eye-in-hand camera that is mounted on the robot's end-effector to locally refine the contour definition and to plan an accurate trajectory for the robot., Experiments carried out with a 7-DOF manipulator and the dual sensory stage are reported to validate the reliability of the proposed contour following method.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121224433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
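As a rough illustration of the coarse-to-fine idea (a coarse Kinect-derived path locally corrected from the wrist camera), here is a hypothetical sketch; the proportional gain, pixel-to-metre scale, and standoff handling are assumptions, not the paper's actual control law.

```python
import numpy as np

def refine_waypoints(coarse_path, lateral_errors_px, normals,
                     standoff=0.02, px_to_m=5e-4, gain=0.6):
    """Blend a coarse RGB-D contour path with eye-in-hand corrections.

    coarse_path       : (N,3) waypoints from the Kinect surface profile
    lateral_errors_px : (N,) contour offset seen by the wrist camera (px)
    normals           : (N,3) unit surface normals at each waypoint
    Returns waypoints nudged toward the true contour and held a fixed
    `standoff` above the surface.
    """
    path = np.asarray(coarse_path, dtype=float).copy()
    n = np.asarray(normals, dtype=float)
    # Proportional image-based correction, applied along a direction
    # tangent to the surface (perpendicular to the local normal).
    tangent = np.cross(n, np.array([0.0, 0.0, 1.0]))
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True) + 1e-12
    path += gain * (np.asarray(lateral_errors_px) * px_to_m)[:, None] * tangent
    path += standoff * n              # keep a safe proximity to the surface
    return path
```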
Vision-Based Qualitative Path-Following Control of Quadrotor Aerial Vehicle with Speeded-Up Robust Features
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.50
Trung Nguyen, G. Mann, R. Gosine
{"title":"Vision-Based Qualitative Path-Following Control of Quadrotor Aerial Vehicle with Speeded-Up Robust Features","authors":"Trung Nguyen, G. Mann, R. Gosine","doi":"10.1109/CRV.2014.50","DOIUrl":"https://doi.org/10.1109/CRV.2014.50","url":null,"abstract":"This paper describes a vision-based 3D navigation technique for path-following control of Quad rotor Aerial Visual-Teach-and-Repeat system. The navigation method is developed on Funnel Lane theory, which defines possible positions to fly straight. The navigation calculation utilizes the reference images and features to compute the desired heading angle and height during path following. The type of feature is Speeded-Up Robust Features (SURF). The tracking feature method between images is performed by matching SURF feature's descriptors. The Quad rotor is able to independently perform path following in indoor environment without the support of an external tracking system. Simulation is conducted on Robot Operating System (ROS) and Gazebo simulator. The application of the proposed method is visual-homing and visual-servoing in GPS-denied environment.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124101540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
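Funnel lane control is qualitative: matched features whose image positions drift relative to the teach-time reference indicate that the vehicle has left the "funnel" of positions from which flying straight reproduces the taught path. A minimal sketch of that idea, assuming OpenCV's contrib SURF module and a made-up yaw gain `k_turn`:

```python
import numpy as np

def funnel_lane_command(ref_kp, cur_kp, matches, k_turn=0.005):
    """Qualitative heading command from matched SURF features.

    For each match, compare the feature's horizontal image coordinate in
    the current frame against the reference (teach) frame; the signed
    median displacement steers the vehicle back toward the funnel lane.
    Returns a yaw-rate command (sign convention assumed).
    """
    dx = [cur_kp[m.trainIdx].pt[0] - ref_kp[m.queryIdx].pt[0] for m in matches]
    return -k_turn * float(np.median(dx))

# Matching sketch (SURF lives in opencv-contrib; API may vary by version):
# import cv2
# surf = cv2.xfeatures2d.SURF_create(400)
# kp1, des1 = surf.detectAndCompute(ref_img, None)   # teach frame
# kp2, des2 = surf.detectAndCompute(cur_img, None)   # repeat frame
# matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
```

The desired height can be handled the same way from the vertical feature displacements.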
A Meta-Technique for Increasing Density of Local Stereo Methods through Iterative Interpolation and Warping
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.59
A. Murarka, Nils Einecke
{"title":"A Meta-Technique for Increasing Density of Local Stereo Methods through Iterative Interpolation and Warping","authors":"A. Murarka, Nils Einecke","doi":"10.1109/CRV.2014.59","DOIUrl":"https://doi.org/10.1109/CRV.2014.59","url":null,"abstract":"Despite much progress in global methods for computing depth from pairs of stereo images, local block matching methods are still immensely popular largely due to low computational cost and ease of implementation. However, such methods usually fail to produce valid depths in several image regions due to various reasons such as violations of a fronto-parallel assumption and lack of texture. In this paper, we present a simple and fast meta-technique for increasing the percentage of valid depths (depth map density) for local methods while keeping the percentage of pixels with erroneous depths, low. In the method, the original disparity map computed by a local stereo method is iteratively improved through a process of depth interpolation and image warping based on the interpolated depth. Image warping gives a mechanism for testing the validity of the interpolated depths allowing for incorrect depths to be discarded. Our results on the KITTI stereo data set demonstrate that, on average, we can increase density by 7-13% after a single iteration, for a 15-29% increase in computation and only a slight change in the outlier percentage, depending on the cost function used for matching.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131223471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
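The interpolate-and-warp loop can be sketched directly from the description: fill invalid disparities, warp one image onto the other with the filled map, and keep only the fills that survive a photometric consistency check. The row-wise linear interpolation and the intensity threshold below are simplifying assumptions, not the paper's exact choices.

```python
import numpy as np

def densify_disparity(left, right, disp, valid, err_thresh=10.0):
    """One iteration of the interpolate-and-warp meta-technique.

    left, right : (H,W) grayscale images
    disp, valid : (H,W) disparity map and boolean validity mask
    1. Fill invalid disparities by horizontal linear interpolation.
    2. Warp the right image into the left view using the filled map.
    3. Keep an interpolated disparity only where the warped intensity
       agrees with the left image (the warp acts as a validity test).
    """
    filled = disp.astype(float).copy()
    h, w = disp.shape
    xs = np.arange(w)
    for y in range(h):                         # row-wise interpolation
        good = valid[y]
        if good.sum() >= 2:
            filled[y] = np.interp(xs, xs[good], disp[y, good])
    # Warp: sample right(x - d) at every left pixel x.
    xr = np.clip((xs[None, :] - filled).round().astype(int), 0, w - 1)
    warped = right[np.arange(h)[:, None], xr]
    ok = np.abs(warped.astype(float) - left.astype(float)) < err_thresh
    new_valid = valid | (ok & ~valid)          # accept consistent fills only
    return np.where(new_valid, filled, np.nan), new_valid
```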
3D Reconstruction by Fusioning Shadow and Silhouette Information
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.58
Rafik Gouiaa, J. Meunier
{"title":"3D Reconstruction by Fusioning Shadow and Silhouette Information","authors":"Rafik Gouiaa, J. Meunier","doi":"10.1109/CRV.2014.58","DOIUrl":"https://doi.org/10.1109/CRV.2014.58","url":null,"abstract":"In this paper, we propose a new 3D reconstruction method using mainly the shadow and silhouette information of a moving object or person. This method is derived from the well-known Shape From Silhouettes (SFS) approach. A light source can be seen as a camera, which generates an image as a silhouette shadow. Based on this, we propose to replace a multicamera system of SFS by multi-infrared light sources while keeping the same procedure of Visual Hull reconstruction (VH). Therefore, our system consists of infrared light sources and one infrared camera. In this case, in addition to the object silhouette given by the camera, each light source generates an object shadow that reveals the object. Thus, as in SFS, the VH of a given object is reconstructed by intersecting the visual cones. Our method has many advantages compared to SFS and preliminary results, on synthetic and real scene images, showed that the system could be applied in several contexts.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128494152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
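Since each light source is treated as a virtual camera, the reconstruction reduces to standard visual-hull carving against several binary masks. A minimal sketch, assuming caller-supplied projection functions for the real camera and for each light source:

```python
import numpy as np

def carve_visual_hull(voxels, masks, projectors):
    """Voxel carving from silhouette and shadow masks.

    voxels     : (N,3) candidate 3D points
    masks      : list of (H,W) binary images (camera silhouette, shadows)
    projectors : list of functions mapping (N,3) points to (N,2) pixel
                 coordinates in the corresponding mask (a light source is
                 treated exactly like a camera here)
    A voxel survives only if every mask classifies its projection as
    inside the object or its shadow, i.e. the visual cones intersect.
    """
    keep = np.ones(len(voxels), dtype=bool)
    for mask, project in zip(masks, projectors):
        uv = project(voxels)
        u = np.clip(uv[:, 0].astype(int), 0, mask.shape[1] - 1)
        v = np.clip(uv[:, 1].astype(int), 0, mask.shape[0] - 1)
        keep &= mask[v, u] > 0                 # intersect this cone
    return voxels[keep]
```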
Camera Matrix Calibration Using Circular Control Points and Separate Correction of the Geometric Distortion Field
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.34
Victoria Rudakova, P. Monasse
{"title":"Camera Matrix Calibration Using Circular Control Points and Separate Correction of the Geometric Distortion Field","authors":"Victoria Rudakova, P. Monasse","doi":"10.1109/CRV.2014.34","DOIUrl":"https://doi.org/10.1109/CRV.2014.34","url":null,"abstract":"We achieve a precise camera calibration using circular control points by, first, separation of the lens distortion parameters from other camera parameters and computation of the distortion field in advance by using a calibration harp. Second, in order to compensate for perspective bias, which is prone to occur when using a circled pattern, we incorporate conic affine transformation into the minimization error when estimating the homography, and leave all the other calibration steps as they are used in the literature. Such an error function allows to compensate for the perspective bias. Combined with precise key point detection, the approach is shown to be more stable than current state-of-the-art global calibration method.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133240727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
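With the distortion field computed offline, the per-view work reduces to correcting the detected circle centers and estimating a homography. The sketch below shows only that separation; the paper's conic affine correction of the perspective bias is not reproduced, and the dense displacement-field representation is an assumption.

```python
import numpy as np

def undistort_with_field(pts, field):
    """Apply a precomputed dense distortion-correction field (obtained
    offline, e.g. with a calibration harp) to detected control points.

    pts   : (N,2) pixel coordinates (x, y) of detected circle centers
    field : (H,W,2) per-pixel correction vectors (hypothetical layout)
    Nearest-pixel lookup is used for brevity; bilinear interpolation
    would be the more careful choice.
    """
    pts = np.asarray(pts, dtype=float)
    rc = pts[:, ::-1].astype(int)              # (row, col) indices
    return pts + field[rc[:, 0], rc[:, 1]]

# With distortion handled separately, each view's homography can then be
# estimated from the corrected centers alone, e.g.:
# import cv2
# H, _ = cv2.findHomography(board_pts, undistort_with_field(img_pts, field))
```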
Photon Detection and Color Perception at Low Light Levels
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.45
Mehdi Rezagholizadeh, James J. Clark
{"title":"Photon Detection and Color Perception at Low Light Levels","authors":"Mehdi Rezagholizadeh, James J. Clark","doi":"10.1109/CRV.2014.45","DOIUrl":"https://doi.org/10.1109/CRV.2014.45","url":null,"abstract":"Working under low light conditions is of particular interest in machine vision applications such as night vision, tone-mapping techniques, low-light imaging, photography, and surveillance cameras. This work aims at investigating the perception of color at low light situations imposed by physical principles governing photon emission. The impact of the probabilistic nature of photon emission on our color perception becomes more significant at low light levels. In this regard, physical principles are leveraged to develop a framework to take into account the effects of low light level on color vision. Results of this study shows that the normalized spectral power distribution of light changes with light intensity and becomes more uncertain at low light situation as a result of which the uncertainty of color perception increases. Furthermore, a color patch at low light levels give rise to uncertain color measurements whose chromaticities form an elliptic shape inside the chromaticity diagram around the high intensity chromaticity of the color patch. The size of these ellipses is a function of the light intensity and the chromaticity of color patches however the orientation of the ellipses depends only on the patch chromaticity and not on the light level. Moreover, the results of this work indicate that the spectral composition of light is a determining factor in the size and orientation of the ellipses. The elliptic shape of measured samples is a result of the Poisson distribution governing photon emission together with the form of human cone spectral sensitivity functions and can partly explain the elliptic shape of MacAdam ellipses.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129939389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
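The ellipse-shaped chromaticity scatter can be reproduced with a small Monte Carlo experiment: draw Poisson photon counts around a mean spectral power distribution and push each draw through color matching functions. The function below is a self-contained sketch with assumed inputs (`spd`, `cmfs`), not the paper's model.

```python
import numpy as np

def chromaticity_samples(spd, cmfs, n_photons, trials=2000, seed=0):
    """Monte Carlo sketch of low-light chromaticity scatter.

    spd       : (K,) mean spectral power distribution over K bins
    cmfs      : (3,K) color matching (or cone sensitivity) functions
    n_photons : mean total photon count per measurement
    Draws Poisson counts per wavelength bin, integrates them against the
    matching functions, and returns (trials,2) xy chromaticities. At low
    n_photons the samples spread into an elongated, ellipse-like cloud.
    """
    p = spd / spd.sum()                        # photon arrival probabilities
    rng = np.random.default_rng(seed)
    counts = rng.poisson(n_photons * p, size=(trials, len(p)))
    XYZ = counts @ cmfs.T                      # (trials,3) tristimulus values
    s = XYZ.sum(axis=1, keepdims=True) + 1e-12
    return XYZ[:, :2] / s                      # x = X/(X+Y+Z), y = Y/(X+Y+Z)

# The scatter's covariance characterizes the ellipse: its axes come from
# the eigenvectors of np.cov(samples.T), and the axis lengths shrink
# roughly as 1/sqrt(n_photons) as the light level rises.
```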
Automated Door Detection with a 3D-Sensor
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.44
Sebastian Meyer zu Borgsen, Matthias Schöpfer, Leon Ziegler, S. Wachsmuth
{"title":"Automated Door Detection with a 3D-Sensor","authors":"Sebastian Meyer zu Borgsen, Matthias Schöpfer, Leon Ziegler, S. Wachsmuth","doi":"10.1109/CRV.2014.44","DOIUrl":"https://doi.org/10.1109/CRV.2014.44","url":null,"abstract":"Service robots share the living space of humans. Thus, they should have a similar concept of the environment without having everything labeled beforehand. The detection of closed doors is challenging because they appear with different materials, designs and can even include glass inlays. At the same time their detection is vital in any kind of navigation tasks in domestic environments. A typical 2D object recognition algorithm may not be able to handle the large optical variety of doors. Improvements of low-cost infrared 3D-sensors enable robots to perceive their environment as spatial structure. Therefore we propose a novel door detection algorithm that employs basic structural knowledge about doors and enables to extract parts of doors from point clouds based on constraint region growing. These parts get weighted with Gaussian probabilities and are combined to create an overall probability measure. To show the validity of our approach, a realistic dataset of different doors from different angles and distances was acquired.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125828510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
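The "Gaussian probabilities combined into an overall measure" step admits a compact sketch: score each structural property of a candidate against an expected value, then multiply the scores. The expected dimensions and sigmas below are illustrative guesses, not the values used in the paper.

```python
import numpy as np

def gaussian_score(value, mean, sigma):
    """Soft structural evidence: how well a measured property matches
    the expected value for a door part (unnormalized Gaussian)."""
    return float(np.exp(-0.5 * ((value - mean) / sigma) ** 2))

def door_probability(width_m, height_m, frame_gap_m):
    """Combine per-part Gaussian scores into one door probability.

    Inputs are measurements of a candidate region grown from the point
    cloud; the means/sigmas are hypothetical domestic-door priors.
    """
    scores = [
        gaussian_score(width_m, 0.90, 0.15),      # typical door width
        gaussian_score(height_m, 2.00, 0.20),     # typical door height
        gaussian_score(frame_gap_m, 0.01, 0.01),  # plane offset to frame
    ]
    return float(np.prod(scores))
```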
Visual Saliency Improves Autonomous Visual Search
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.23
Amir Rasouli, John K. Tsotsos
{"title":"Visual Saliency Improves Autonomous Visual Search","authors":"Amir Rasouli, John K. Tsotsos","doi":"10.1109/CRV.2014.23","DOIUrl":"https://doi.org/10.1109/CRV.2014.23","url":null,"abstract":"Visual search for a specific object in an unknown environment by autonomous robots is a complex task. The key challenge is to locate the object of interest while minimizing the cost of search in terms of time or energy consumption. Given the impracticality of examining all possible views of the search environment, recent studies suggest the use of attentive processes to optimize visual search. In this paper, we describe a method of visual search that exploits the use of attention in the form of a saliency map. This map is used to update the probability distribution of which areas to examine next, increasing the utility of spatial volumes where objects consistent with the target's visual saliency are observed. We present experimental results on a mobile robot and conclude that our method improves the process of visual search in terms of reducing the time and number of actions to be performed to complete the process.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129234594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12
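The core update (saliency reweights the probability of each search region, and the robot examines the most promising one next) can be sketched in a few lines; the multiplicative form and the greedy next-best-view choice are assumptions about the paper's specifics.

```python
import numpy as np

def update_search_prior(prior, target_consistency, eta=1.0):
    """Reweight the probability grid over search locations.

    prior              : (H,W) current probability of the target per cell
    target_consistency : (H,W) score of how consistent each cell's
                         observed saliency is with the target's saliency
    Returns the renormalized posterior; eta tempers the evidence.
    """
    post = prior * (target_consistency ** eta)
    return post / post.sum()

def next_view(prior):
    """Greedy next-best-view: fixate the most probable cell."""
    return np.unravel_index(np.argmax(prior), prior.shape)
```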
Trinocular Spherical Stereo Vision for Indoor Surveillance
2014 Canadian Conference on Computer and Robot Vision · Pub Date: 2014-05-06 · DOI: 10.1109/CRV.2014.56
M. Findeisen, G. Hirtz
{"title":"Trinocular Spherical Stereo Vision for Indoor Surveillance","authors":"M. Findeisen, G. Hirtz","doi":"10.1109/CRV.2014.56","DOIUrl":"https://doi.org/10.1109/CRV.2014.56","url":null,"abstract":"Stereo vision based sensors are widely used for indoor surveillance applications. Besides the demand for increasing performance the reduction of the overall number of sensors is the crucial issue. The central goal is the reduction of complexity and overall cost of the system. One opportunity is to use wide angle view based or even Omni directional stereo vision sensors. We present a powerful approach which uses three Omni directional cameras in order to compute full hemispherical depth information. By employing this, we can cover a complete room using only one sensor.","PeriodicalId":385422,"journal":{"name":"2014 Canadian Conference on Computer and Robot Vision","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115935191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6