Gesture-speech based HMI for a rehabilitation robot
Shoupu Chen, Z. Kazi, M. Beitler, M. Salganicoff, D. Chester, R. Foulds
Proceedings of SOUTHEASTCON '96, 11 April 1996. DOI: 10.1109/SECON.1996.510021
One of the most challenging problems in rehabilitation robotics is the design of an efficient human-machine interface (HMI) that allows a user with a disability considerable freedom and flexibility. A multimodal user-direction approach combining command and control methods is a promising way to achieve this goal. The multimodal design is motivated by the idea of minimizing the user's burden in operating a robot manipulator while exploiting the user's intelligence and available mobility. With this design, a user with a physical disability simply uses gesture (pointing with a laser pointer) to indicate a location or a desired object, and uses speech to activate the system. Recognition of the spoken input also obviates the need for general-purpose visual recognition to distinguish between different objects, performing the critical function of disambiguation. The robot system is designed to operate in an unstructured environment containing objects that are reasonably predictable. A novel reactive planning mechanism, of which the user is an active, integral component, works in conjunction with a stereo-vision system and an object-oriented knowledge base to provide the robot system with 3D information about the surrounding world as well as motion strategies.
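The paper itself contains no code. As a minimal illustrative sketch of the gesture-speech fusion idea described above (all names, data structures, and the command vocabulary here are hypothetical, not the authors' implementation): a pointed-at 3D location from the stereo-vision system is paired with a recognized utterance, and the spoken object name, looked up in a small knowledge base, disambiguates what lies at that location without any general-purpose object recognition.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    """3D location of the laser spot, as triangulated by a stereo-vision system."""
    x: float
    y: float
    z: float

# Hypothetical stand-in for the paper's object-oriented knowledge base:
# spoken object names mapped to simple object models.
KNOWLEDGE_BASE = {
    "cup": {"grasp_width_mm": 80},
    "book": {"grasp_width_mm": 30},
}

def fuse_command(gesture: GestureEvent, utterance: str):
    """Fuse a pointing gesture with a recognized spoken command.

    Speech names the object (disambiguation); the gesture locates it,
    so no general-purpose visual object recognition is needed.
    """
    words = utterance.lower().split()
    if "stop" in words:                        # safety override always wins
        return ("halt", None, None)
    for name, model in KNOWLEDGE_BASE.items():
        if name in words:                      # speech identifies the object...
            target = (gesture.x, gesture.y, gesture.z)  # ...gesture locates it
            return ("fetch", name, target)
    return ("clarify", None, None)             # unknown object: ask the user again

if __name__ == "__main__":
    print(fuse_command(GestureEvent(0.42, -0.10, 0.75), "get the cup"))
    # -> ('fetch', 'cup', (0.42, -0.1, 0.75))
```

In the actual system, the user also remains inside the planning loop: a result like `("clarify", None, None)` would prompt the user to repeat or rephrase, which is one way the reactive planning mechanism can treat the user as an active component rather than relying on fully autonomous perception.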