A Gestures Recognition Based Approach for Human-Robot-Interaction
A. Freddi, M. Goffi, S. Longhi, A. Monteriù, D. Ortenzi, D. P. Pagnotta
2018 Zooming Innovation in Consumer Technologies Conference (ZINC), May 2018
DOI: 10.1109/ZINC.2018.8448601
Citations: 0
Abstract
This work proposes a robotic manipulator assistant for disabled users and/or elderly people with limited motor skills. The interaction between the robot and the user is based on recognition of the user's gestures. The user chooses an object among those available by moving his/her arm into a specific pose, which is recognized via an external camera. Then, images of the objects accessible to the robot are acquired through the robot camera, located at the end of the robot arm, and analyzed by a Support Vector Machine classifier in order to recognize the object selected by the user. Finally, the manipulator picks the object and places it in the user's hand, whose location in Cartesian space is determined via the external camera and updated online.
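The object-recognition step described above can be sketched in miniature. The paper does not specify the image features or the SVM library used; the snippet below is a hedged illustration assuming synthetic two-dimensional feature vectors (stand-ins for descriptors extracted from the robot camera images) and scikit-learn's `SVC`. The class names `cup`, `bottle`, and `book` are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): an SVM classifier
# maps image feature vectors to object labels, as in the recognition step
# performed on the robot camera images.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: three object classes, each clustered around
# a distinct feature prototype.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
X = np.vstack([p + 0.3 * rng.standard_normal((20, 2)) for p in prototypes])
y = np.repeat(["cup", "bottle", "book"], 20)

# Train an RBF-kernel SVM on the labeled feature vectors.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

# Classify a feature vector from a new image near the "bottle" prototype.
pred = clf.predict([[4.9, 5.1]])[0]
print(pred)
```

In the actual system, the feature vectors would come from images taken by the eye-in-hand camera, and the predicted label would identify the object the user pointed to.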