A Study on Robot-Human System with Consideration of Individual Preferences: 2nd Report, Multimodal Human-Machine Interface for Object-Handing Robot System
M. Jindai, S. Shibata, Tomonori Yamamoto, Tomio Watanabe
{"title":"A Study on Robot-Human System with Consideration of Individual Preferences : 2nd Report, Multimodal Human-Machine Interface for Object-Handing Robot System","authors":"M. Jindai, S. Shibata, Tomonori Yamamoto, Tomio Watanabe","doi":"10.1299/jsmec.49.1033","DOIUrl":null,"url":null,"abstract":"In this study, we propose an object-handing robot system with a multimodal human-machine interface which is composed of speech recognition and image processing units. Using this multimodal human-machine interface, the cooperator can order the object-handing robot system using voice commands and hand gestures. In this robot system, the motion parameters of the robot, which are maximum velocity, velocity profile peak and handing position, can be adjusted by the voice commands or the hand gestures in order to realize the most appropriate motion of the robot. Furthermore, the cooperator can order the handing of objects using voice commands along with hand gestures. In these voice commands, the cooperator can use adverbs. This permits the cooperator to realize efficient adjustments, because the adjustment value of each motion parameters is determined by adverbs. In particular, adjustment values corresponding to adverbs are estimated by fuzzy inference in order to take into consideration the ambiguities of human speech.","PeriodicalId":151961,"journal":{"name":"Jsme International Journal Series C-mechanical Systems Machine Elements and Manufacturing","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Jsme International Journal Series C-mechanical Systems Machine Elements and Manufacturing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1299/jsmec.49.1033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
In this study, we propose an object-handing robot system with a multimodal human-machine interface composed of speech recognition and image processing units. Using this interface, the cooperator can command the object-handing robot system with voice commands and hand gestures. The robot's motion parameters, namely the maximum velocity, the velocity profile peak, and the handing position, can be adjusted by voice commands or hand gestures in order to realize the most appropriate robot motion. Furthermore, the cooperator can order the handing of objects using voice commands together with hand gestures. These voice commands may include adverbs, which permit efficient adjustments because the adjustment value of each motion parameter is determined by the adverb. In particular, the adjustment values corresponding to adverbs are estimated by fuzzy inference in order to take the ambiguity of human speech into consideration.
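As a rough illustration of the adverb-to-adjustment mapping described above, the following is a minimal Python sketch of fuzzy inference with centroid defuzzification. It is not the authors' implementation: the adverb vocabulary, the triangular membership functions, the normalized adjustment scale, and the `MAX_VELOCITY_STEP` constant are all illustrative assumptions.

```python
# Minimal sketch: map an adverb in a voice command to an adjustment value
# for a motion parameter (here, maximum velocity) via fuzzy inference.
# All fuzzy sets and constants below are assumed for illustration only.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a normalized adjustment scale [0, 1]:
# each adverb activates one fuzzy set describing how large the change is.
ADVERB_SETS = {
    "slightly": (0.0, 0.1, 0.3),
    "somewhat": (0.2, 0.4, 0.6),
    "much":     (0.5, 0.8, 1.0),
}

def infer_adjustment(adverb, resolution=101):
    """Estimate the adjustment value by centroid defuzzification of the
    fuzzy set activated by the given adverb."""
    a, b, c = ADVERB_SETS[adverb]
    xs = [i / (resolution - 1) for i in range(resolution)]
    mus = [triangular(x, a, b, c) for x in xs]
    num = sum(x * mu for x, mu in zip(xs, mus))
    den = sum(mus)
    return num / den if den > 0 else 0.0

# Example: a command like "move slightly faster" scales an assumed
# full-scale velocity adjustment step.
MAX_VELOCITY_STEP = 0.2  # m/s, assumed full-scale adjustment
delta = infer_adjustment("slightly") * MAX_VELOCITY_STEP
print(f"velocity adjustment: +{delta:.3f} m/s")
```

Centroid defuzzification is one common choice here because it yields a smooth, graded adjustment from an imprecise linguistic input, which matches the paper's stated goal of handling the ambiguity of human speech.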