Position-based visual servoing using a coded structured light sensor
T. Heitzmann, C. Doignon, C. Albitar, P. Graebling
2008 International Workshop on Robotic and Sensors Environments, 7 November 2008
DOI: 10.1109/ROSE.2008.4669193
Citations: 4
Abstract
In this paper, we address a novel visual servoing technique for unknown environments with untextured objects by means of a structured light vision system. The scene surfaces are assumed to be piecewise planar, but they are free to move and are subject to deformations. We first present a robust coded pattern that allows fast decoding and quickly solves the correspondence problem between visual features. It can handle partial occlusions and was designed for navigation inside the human body with articulated endoscopes (C. Albitar et al., 2007). To cope with object modeling, we consider the case of locally planar deformable surfaces and propose a position-based visual servoing (PBVS) approach with onboard structured light. Such a pattern is very appropriate as it is robust with respect to occlusions, a well-known problem with PBVS, where several visual features may leave the camera field of view during the experiments.
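The paper does not give its control law in the abstract, but a classical PBVS scheme of the kind referenced works as follows: the pose of the pattern (here recovered from the coded structured light) yields the rotation R and translation t between the current and desired camera frames, the error is e = (t, θu) with θu the axis-angle form of R, and the camera velocity command is v = -λ e. The sketch below illustrates that standard law only; the function name and gain are illustrative, not from the paper.

```python
import numpy as np

def pbvs_control(R, t, lam=0.5):
    """One step of a classical position-based visual servoing law.

    R (3x3) and t (3,) are the rotation and translation of the desired
    camera frame expressed in the current camera frame, e.g. estimated
    from the pose of the projected coded pattern.
    Returns the 6-vector camera velocity command -lam * e,
    where e = (t, theta*u) stacks the translation error and the
    axis-angle representation of the rotation error.
    """
    # Recover theta from the trace of R (clipped for numerical safety).
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        tu = np.zeros(3)  # no rotation error
    else:
        # Rotation axis u from the skew-symmetric part of R.
        u = np.array([R[2, 1] - R[1, 2],
                      R[0, 2] - R[2, 0],
                      R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        tu = theta * u
    e = np.concatenate([t, tu])
    return -lam * e
```

With this law the camera pose error decays exponentially in the ideal case; the paper's contribution is making the pose estimate feeding R and t reliable under occlusion via the coded pattern.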