An Integrated Robotic Multi-Modal Range Sensing System
P. Curtis, C.S. Yang, P. Payeur
2005 IEEE Instrumentation and Measurement Technology Conference Proceedings
Published: 2005-05-16
DOI: 10.1109/IMTC.2005.1604520
Citations: 12
Abstract
Creating a 3-D surface representation of large objects or wide working areas is a tedious and error-prone process with currently available sensor technologies. The primary problem is that laser range sensors capture at most one line of points from a given position and orientation, while the accuracy of stereo vision systems depends on the initial camera calibration and on the extraction and matching of features. When the registration process is not properly controlled, registration errors significantly degrade measurement accuracy, which proves critical in telerobotic operations where occupancy models are built directly from these range measurements. The limited reliability of range measurements from a single range sensing technique can drastically distort the registration process, especially in environments unsuited to that system. Instead of relying on a single range sensor, we adopt a multi-modal system in which diverse range sensing techniques complement each other, so that one system's strength compensates for another's weakness. Using a mixture of active and passive range sensing techniques, yielding both dense and sparse datasets, this multi-modal range sensing system is integrated seamlessly with minimal processing overhead and optimal workspace