{"title":"Gesture Recognition and Effective Interaction Based Dining Table Cleaning Robot","authors":"J. Moh, T. Kijima, Bin Zhang, Hun-ok Lim","doi":"10.1109/RITAPP.2019.8932802","DOIUrl":null,"url":null,"abstract":"We present a framework for dining table cleaning robot, which enables the robot to detect the cleaning target and perform cleaning task correspondingly to the given instruction, without needing prior information of the cleaning target. A cleaning robot should be able to detect the object efficiently. In order to enable object detection without prior information, the background subtraction method is employed, which is based on the 3D point group data taken by a RGB-D camera. In addition to object detection, a cleaning robot should be able to modify its movement in accordance with the user’s instructions. Therefore, we propose an interaction system which allows the user to use gesture to provide instructions to the robot. A pointing gesture is used to specify the cleaning target. When the information needed for the cleaning task is insufficient, the robot will ask for further information from the user. If multiple objects are detected, the robot will rank all the objects according to their distance from the pointed coordinate. The user can re-designate the cleaning target with preregistered gesture commands. Once the robot has collected enough information for its duty, it will execute the cleaning task specified by the user.","PeriodicalId":234023,"journal":{"name":"2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RITAPP.2019.8932802","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
We present a framework for a dining-table cleaning robot that detects the cleaning target and performs the cleaning task according to the given instruction, without requiring prior information about the target. A cleaning robot should be able to detect objects efficiently. To enable object detection without prior information, we employ a background subtraction method based on 3D point cloud data captured by an RGB-D camera. In addition to object detection, a cleaning robot should be able to modify its movements according to the user's instructions. We therefore propose an interaction system that allows the user to give instructions to the robot through gestures. A pointing gesture specifies the cleaning target. When the information needed for the cleaning task is insufficient, the robot asks the user for further information. If multiple objects are detected, the robot ranks them by their distance from the pointed coordinate. The user can re-designate the cleaning target with preregistered gesture commands. Once the robot has collected enough information, it executes the cleaning task specified by the user.
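The abstract describes two core computations: subtracting a pre-captured background point cloud to detect tabletop objects without prior models, and ranking detected objects by their distance from the coordinate indicated by the pointing gesture. The sketch below is a minimal illustration of both steps on toy data; it is not the authors' implementation, and the function names, threshold, and example data are assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): background subtraction on
# RGB-D point clouds and ranking of candidate objects by distance from a pointed
# 3D coordinate. All names, thresholds, and data below are assumptions.
import numpy as np


def subtract_background(scene_points: np.ndarray,
                        background_points: np.ndarray,
                        threshold: float = 0.01) -> np.ndarray:
    """Keep scene points farther than `threshold` (metres) from every background point."""
    # Brute-force nearest-neighbour distance; a KD-tree would be used on real clouds.
    diffs = scene_points[:, None, :] - background_points[None, :, :]
    nearest = np.min(np.linalg.norm(diffs, axis=2), axis=1)
    return scene_points[nearest > threshold]


def rank_by_pointing(candidates: np.ndarray, pointed_coordinate: np.ndarray) -> np.ndarray:
    """Indices of candidate points/centroids, nearest to the pointed coordinate first."""
    distances = np.linalg.norm(candidates - pointed_coordinate, axis=1)
    return np.argsort(distances)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Background cloud captured once with an empty table (synthetic stand-in data).
    background = rng.uniform(0.0, 1.0, size=(300, 3))
    # Current scene: the same background plus points belonging to two objects.
    object_points = np.array([[0.30, 0.40, 0.05],
                              [0.70, 0.20, 0.04]])
    scene = np.vstack([background, object_points])

    foreground = subtract_background(scene, background)      # isolates the objects
    pointed = np.array([0.32, 0.41, 0.05])                   # pointing-gesture target
    order = rank_by_pointing(foreground, pointed)
    print("Candidate cleaning targets, nearest to the pointed coordinate first:")
    print(foreground[order])
```

In this toy example the exact background points are removed because their nearest-neighbour distance is zero, leaving only the object points, which are then ordered by proximity to the pointed coordinate so the nearest one can be proposed as the cleaning target.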