Automatic Detection of Valves with Disaster Response Robot on Basis of Depth Camera Information
Keishi Nishikawa, J. Ohya, T. Matsuzawa, A. Takanishi, H. Ogata, K. Hashimoto
2018 Digital Image Computing: Techniques and Applications (DICTA), December 2018
DOI: 10.1109/DICTA.2018.8615796
Citations: 1
Abstract
In recent years, there has been increasing demand for disaster response robots designed to work in disaster sites such as nuclear power plants where accidents have occurred. One of the tasks such robots need to complete at these sites is turning a valve. To employ robots for this task at real sites, it is desirable that they can autonomously detect the valves to be manipulated. In this paper, we propose a method that allows a disaster response robot to detect a valve whose parameters, such as position, orientation, and size, are unknown, based on information captured by a depth camera mounted on the robot. In our proposed algorithm, the target valve is first detected in an RGB image captured by the depth camera, and a 3D point cloud containing the target is reconstructed by combining the detection result with the depth image. Second, the reconstructed point cloud is processed to estimate the parameters describing the target. Experiments were conducted on a simulator, and the results showed that our method could accurately estimate the parameters, with minimum errors of 0.0230 m in position, 0.196 % in radius, and 0.00222 degrees in orientation.
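The paper does not publish its parameter estimator, but the second stage it describes, recovering a valve's position, orientation, and radius from a reconstructed point cloud, can be sketched with a standard pipeline: fit a plane to the points via SVD (the plane normal gives the valve axis), project the points into the plane, and fit a circle by linear least squares. The function name and the exact fitting method below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_valve_parameters(points):
    """Estimate position (circle center), orientation (plane normal),
    and radius of a valve wheel from an (N, 3) point cloud assumed to
    lie near a circular rim.

    Hypothetical sketch: plane fit by SVD, then an algebraic
    least-squares circle fit in the plane; not the paper's method.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid

    # Plane fit: the right singular vector with the smallest singular
    # value is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[2]          # estimated valve axis (orientation)
    u, v = vt[0], vt[1]     # orthonormal in-plane basis

    # 2D coordinates of each point in the fitted plane.
    x = centered @ u
    y = centered @ v

    # Algebraic circle fit: x^2 + y^2 = a*x + b*y + c, solved by
    # linear least squares; center = (a/2, b/2), r^2 = c + cx^2 + cy^2.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    radius = float(np.sqrt(c + cx**2 + cy**2))

    # Lift the 2D circle center back into 3D: this is the valve position.
    center = centroid + cx * u + cy * v
    return center, normal, radius
```

On noise-free synthetic data (points sampled from a circle of known radius), the fit recovers the center and radius essentially exactly; with depth-sensor noise, the least-squares formulation would typically be wrapped in RANSAC to reject outliers from the valve handle and background.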