Pose Estimation of Household Objects Using RGB-D-NIR Camera
M. Attamimi, Delonix Senjaya, D. Purwanto, Ditya Garda Nugraha
2023 International Seminar on Intelligent Technology and Its Applications (ISITIA), published 2023-07-26
DOI: 10.1109/ISITIA59021.2023.10220998
Citations: 0
Abstract
Pose estimation is a technique used to predict the pose of an object, i.e., its orientation and position. A robot needs this technique to pick up objects placed in its environment, and it is generally developed with visual input from a camera. A common challenge is that lighting conditions are not fixed, so stable estimation results under varying illumination are highly desirable. In this study, we first use an RGB-D-NIR camera to provide color (RGB), depth (D), and near-infrared (NIR) inputs that are expected to complement each other. Second, we combine the data from this camera using Guided Filtering Fusion and deep-learning methods. As a preliminary result of the study, we conducted experiments on household objects under normal and dark lighting conditions with various input combinations, and obtained fairly accurate results in dark conditions.
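The paper does not publish its fusion code, but the guided filter at the core of Guided Filtering Fusion (He et al.'s edge-preserving filter, often used to fuse modalities such as NIR and RGB) can be sketched as follows. This is a generic illustration, not the authors' implementation; the function names, window radius, and regularization value `eps` are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(I, p, radius=4, eps=1e-3):
    """Edge-preserving guided filter: smooth input p using guide image I.

    In a modality-fusion setting, I could be a NIR image (sharp in the dark)
    guiding an RGB intensity channel p, so edges from I transfer to the output.
    """
    size = 2 * radius + 1  # box window side length

    # Local means of guide, input, and their products over the window
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)

    # Per-window variance of the guide and covariance between guide and input
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p

    # Linear model q = a*I + b within each window; eps regularizes flat regions
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I

    # Average the per-window coefficients, then apply the linear model
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```

With a small `eps` and `p = I`, the filter roughly passes the image through; a large `eps` pushes the output toward local averages, which is the usual smoothing/detail-transfer trade-off a fusion pipeline tunes per modality.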