{"title":"基于自然语音和视觉的智能界面支持机器人行为的获取","authors":"K. Watanabe, C. Jayawardena, K. Izumi","doi":"10.1109/ICSENS.2007.355484","DOIUrl":null,"url":null,"abstract":"Natural language usage for robot control is essential for developing successful human-friendly robotic systems. In spite of the fact that the realization of robots with high cognitive capabilities that understand natural instructions as humans is quite difficult, there is a high potential for introducing voice interfaces for most of the existing robotic systems. Although there have been some interesting work in this domain, usually the scope and the efficiency of natural language controlled robots are limited due to constraints in the number of built in commands, the amount of information contained in a command, the reuse of excessive commands, etc. We present a multimodal interface for a robotic manipulator, which can learn both from human user voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects are presented.","PeriodicalId":233838,"journal":{"name":"2006 5th IEEE Conference on Sensors","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Intelligent Interface Using Natural Voice and Vision for Supporting the Acquisition of Robot Behaviors\",\"authors\":\"K. Watanabe, C. Jayawardena, K. Izumi\",\"doi\":\"10.1109/ICSENS.2007.355484\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Natural language usage for robot control is essential for developing successful human-friendly robotic systems. In spite of the fact that the realization of robots with high cognitive capabilities that understand natural instructions as humans is quite difficult, there is a high potential for introducing voice interfaces for most of the existing robotic systems. Although there have been some interesting work in this domain, usually the scope and the efficiency of natural language controlled robots are limited due to constraints in the number of built in commands, the amount of information contained in a command, the reuse of excessive commands, etc. We present a multimodal interface for a robotic manipulator, which can learn both from human user voice instructions and vision input to overcome some of these drawbacks. 
Results of three experiments, i.e., learning situations, learning actions, and learning objects are presented.\",\"PeriodicalId\":233838,\"journal\":{\"name\":\"2006 5th IEEE Conference on Sensors\",\"volume\":\"2013 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2006 5th IEEE Conference on Sensors\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSENS.2007.355484\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 5th IEEE Conference on Sensors","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSENS.2007.355484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Intelligent Interface Using Natural Voice and Vision for Supporting the Acquisition of Robot Behaviors
The use of natural language for robot control is essential for developing successful human-friendly robotic systems. Although realizing robots with cognitive capabilities high enough to understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces into most existing robotic systems. Although there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the excessive reuse of commands. To overcome some of these drawbacks, we present a multimodal interface for a robotic manipulator that can learn from both human voice instructions and vision input. Results of three experiments, namely learning situations, learning actions, and learning objects, are presented.
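The abstract does not describe the implementation, so the following is only a rough, hypothetical Python sketch of the general idea it conveys: associating a spoken phrase with a visually observed object so that a new voice-triggered behavior can be taught at run time. All names here (MultimodalTeacher, learn, execute, and the example phrases) are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the paper's method): a spoken phrase heard while an
# object is in view is stored as a new (action, object) behavior, and the same
# phrase can later trigger that learned behavior.

from dataclasses import dataclass, field


@dataclass
class MultimodalTeacher:
    # Learned mapping from a spoken phrase to an (action, object) pair.
    behaviors: dict = field(default_factory=dict)

    def learn(self, phrase: str, action: str, seen_object: str) -> None:
        """Store a behavior taught by voice while an object is in view."""
        self.behaviors[phrase.lower()] = (action, seen_object)

    def execute(self, phrase: str) -> str:
        """Resolve a spoken phrase into a previously learned behavior."""
        key = phrase.lower()
        if key not in self.behaviors:
            return f"Unknown command: '{phrase}' (needs to be taught first)"
        action, obj = self.behaviors[key]
        return f"Performing '{action}' on '{obj}'"


if __name__ == "__main__":
    teacher = MultimodalTeacher()
    # Teaching phase: the user speaks while the vision system reports a red block.
    teacher.learn("pick it up", action="grasp", seen_object="red block")
    # Later, the same phrase triggers the learned action on the associated object.
    print(teacher.execute("pick it up"))
    print(teacher.execute("wave hello"))
```

In the actual system, the phrase would come from a speech recognizer and the object from the manipulator's vision input rather than from hard-coded strings; the sketch only illustrates how multimodal teaching can sidestep a fixed set of built-in commands.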