{"title":"Using vision and haptic sensing for human-humanoid joint actions","authors":"Don Joven Agravante, A. Cherubini, A. Kheddar","doi":"10.1109/RAM.2013.6758552","DOIUrl":null,"url":null,"abstract":"Human-humanoid haptic joint actions are collaborative tasks requiring a sustained haptic interaction between both parties. As such, most research in this field has concentrated on how to use solely the robot's haptic sensing to extract the human partners' intentions. With this information, interaction controllers are designed. In this paper, the addition of visual sensing is investigated and a suitable framework is developed to accomplish this. This is then tested on examples of haptic joint actions namely collaboratively carrying a table. Additionally a visual task is implemented on top of this. In one case, the aim is to keep the table level taking into account gravity. In another case, a freely moving ball is balanced to keep it from falling off the table. The results of the experiments show that the framework is able to utilize both information sources properly to accomplish the task.","PeriodicalId":287085,"journal":{"name":"2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RAM.2013.6758552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Human-humanoid haptic joint actions are collaborative tasks requiring sustained haptic interaction between both parties. As such, most research in this field has concentrated on using only the robot's haptic sensing to extract the human partner's intentions; interaction controllers are then designed from this information. In this paper, the addition of visual sensing is investigated and a suitable framework is developed to accomplish this. The framework is then tested on an example of a haptic joint action, namely collaboratively carrying a table, with a visual task implemented on top of it. In one case, the aim is to keep the table level while taking gravity into account; in another, a freely moving ball is balanced to keep it from falling off the table. The experimental results show that the framework is able to use both information sources properly to accomplish the task.
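The abstract does not detail the controller itself, but a common way to combine the two sensing modalities is to let the measured interaction force drive an admittance model (capturing the human partner's intention) and to superimpose a velocity term that reduces a visual task error, such as the table's inclination or the ball's offset from the table centre. The sketch below is a minimal illustration under these assumptions, not the paper's actual framework; the class, gains, and variable names are hypothetical.

```python
import numpy as np

class VisionHapticAdmittance:
    """Illustrative sketch: admittance control from force sensing,
    with a visual correction term superimposed (hypothetical names/gains)."""

    def __init__(self, mass=5.0, damping=20.0, visual_gain=0.5, dt=0.01):
        self.M = mass           # virtual mass of the admittance model
        self.B = damping        # virtual damping
        self.k_v = visual_gain  # gain on the visual task error
        self.dt = dt
        self.v = np.zeros(3)    # current commanded hand velocity

    def step(self, force, visual_error):
        """force: 3-vector measured at the grasp (cue of the human's intention).
        visual_error: 3-vector task error to be driven to zero, e.g. table
        inclination or the ball's displacement from the table centre."""
        # Admittance: the virtual mass accelerates under the measured force,
        # so the robot yields to the human partner's pushes and pulls.
        acc = (force - self.B * self.v) / self.M
        self.v += acc * self.dt
        # Superimpose a velocity that reduces the visual task error.
        return self.v - self.k_v * visual_error
```

For example, calling `step(force, visual_error)` once per control cycle with the current force/torque reading and the vision-derived error yields a hand velocity command that both complies with the human and corrects the visual task, which is the general idea of fusing the two information sources in one interaction controller.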